From: Dr Sebby (drsebby@hotmail.com)
Date: Mon Apr 12 2004 - 23:43:09 MDT
...a very nice post, rhino. i would also wager that notions such as these
are the reason that other intelligent life forms outside of our solar system
aren't to be found gallivanting around in little star-trek style space ships
for more than, say, 500 years or so of their species' lifetime. what will we do
when we control everything? even if we can't actually do something, we will
be able to convince our brains that we have. once we completely master
biology and most of physics, what will we do? will sport and competition
still remain? it's a very interesting question. will a species-wide apathy
kick in when we finally fit the last important pieces into the grand jigsaw
puzzle? what would be left? "death moment energy physics(tm)"?
DrSebby.
"Courage...and shuffle the cards".
----Original Message Follows----
From: "rhinoceros" <rhinoceros@freemail.gr>
Reply-To: virus@lucifer.com
To: virus@lucifer.com
Subject: virus: Re:That hell-bound train
Date: Mon, 12 Apr 2004 17:09:08 -0600
< quote from http://www.users.nac.net/bobsabella/HallofFame.htm >
It was a traditional deal-with-the-devil story, about a poor roustabout who
devises a deal seemingly impossible to lose: in return for his soul, the
devil gives him a watch with the ability to stop time at any moment for all
eternity.
As expected, the roustabout is too clever for his own good. He keeps
stalling, seeking a moment of perfect happiness worth maintaining for all
eternity. A good job and relative comfort? Not yet. A wife and cute young
children? Maybe, but just a bit longer. And so it goes, until he finds
himself divorced, unhappy, broke again, aging, dying. All too soon there is
no reason to stop time, because he is now so unhappy that he would never
want the moment to last forever.
And then the devil returns, ready to take his side of the bargain...
<end quote>
[rhinoceros] A subtle point in the story is that it is not clear what
"stopping time at a moment for all eternity" means. We can't take it
literally (in the same way we can't take "losing his soul" literally)
because we can't conceive of a state outside the flow of time. So, we have
to use our own interpretations of this state of "eternal happiness" (which
is what actually happened here).
[Jake Sapiens] It seems that this would be a game where the optimist (most
of humanity) is at a disadvantage, always trusting that things will get
better. A pessimist, on the other hand, knowing that things can definitely
get worse, may not pick the optimal moment to dwell on, but at least
wouldn't squander his chances hoping for things to get better.
[rhinoceros] Jake took an abstract game view: a desirable eternal situation,
whatever that means. Using optimism/pessimism to evaluate one's
chances of choosing the right moment to stop the watch was an
interesting thought. Perhaps we can also learn a thing or two from the
stock market people (evaluating our past successes and the general climate
and whatever else they do).
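Incidentally, the watch game is essentially the classic "secretary problem"
of optimal stopping theory, though nobody in the thread calls it that: you
can't revisit past moments, so one well-known heuristic is to observe the
first 1/e fraction of candidates without committing, then stop at the first
moment that beats everything seen so far. A minimal Python sketch under that
reading; the random "happiness" scores and all names are hypothetical:

    import math
    import random

    def stop_watch_1_over_e(moments):
        # Secretary-problem heuristic: pass over the first n/e moments,
        # remembering the best of them, then commit to the first later
        # moment that beats that benchmark.
        n = len(moments)
        cutoff = int(n / math.e)
        best_seen = max(moments[:cutoff], default=float("-inf"))
        for happiness in moments[cutoff:]:
            if happiness > best_seen:
                return happiness      # stop the watch here
        return moments[-1]            # never stopped: the devil collects anyway

    random.seed(42)
    life = [random.random() for _ in range(70)]  # one happiness score per year
    print(stop_watch_1_over_e(life), max(life))  # chosen moment vs. true best

The rule picks the true best moment only about 37% of the time, which is
roughly the bet the roustabout was making.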
[Blunderov] I think I could have chosen several moments in my life when I
thought 'it doesn't get better than this'. Happily, so far, I have been
wrong.
[rhinoceros] I wish I could say the same. In my good moments I always
thought there would be better ones in the future, but I was wrong more often
than not. I think I have started to learn now (a slow learner), but still, I
would definitely lose the bet.
[DrSebby] anyways, i think a reasonable approach would be to recognize that
age provides an effective time limit on the 'best' time to stop the watch.
i'd wager that somewhere in your early 30's you start losing a little bit of
the 'edge'...and in your early 20's you're so unstable that it would be
difficult to gauge any point as contentment. i would probably force myself
to set a deadline...such as my 28th or 29th birthday. the real question
would be...would i want to be in a relationship at the time of clock
stoppage? or a cavorting single? and if i owned plants or a dog, would i
have to feed and water them still? and if i still knew Walter Watts, would
i have to continue to transport those strange blue plastic bins across state
lines for him after halloween?
[rhinoceros] Sebby took the most streetwise empirical approach (which is the
most scientific one as well, I think -- isn't it strange?). He took into
account empirical knowledge of the human physical and mental condition. He
also felt compelled to give that "moment of happiness" some time duration,
but he seemed worried that *change* and *striving for change* would not fit
in.
[Walter Watts] Anything done for an eternity, save discovery, would be hell
indeed.
<thinking that whoever would make the deal below hasn't thought it through>
[rhinoceros] This is similar. Walter will not give up the happiness coming
from change either, but he asks for less than Sebby. The problem is that we
old farts have long ago failed to stop the watch on our 29th birthday, so we
have come to terms with the idea that we are going to miss the action
anyway. Being a peeping Tom for discovery is much better than stagnation in
paradise.
[Kharin] A lot of literature tends to depict states of pure happiness as
being somewhat aimless, a perpetual state of lethargy caused by the absence
of anything to strive for.
<snip>
I think George Bernard Shaw put it well: "Heaven, as conventionally
conceived, is a place so inane, so dull, so useless, so miserable, that
nobody has ever ventured to describe a whole day in heaven, though plenty of
people have described a day at the seaside."
[rhinoceros] The desirability of the reward was dissected and questioned
mercilessly here. It seems that a "perfect state" may give us pleasure but
not happiness; *striving* for a "state" is indispensable for the animals
that we are. As Kavafy put it, "Ithaca gave you the wonderful journey;
without her you would never have taken the road; but she has nothing to give
you now."
This discussion reminded me of a recurring issue which sometimes comes up in
transhumanist communities when discussing Artificial Intelligence. The basic
idea is that the great and all-powerful AI of the future will be able to
improve itself by accessing its own programming.
The question is: If the AI has not been given specific goals to strive for,
but has been equipped with some reprogrammable circuitry with which it
evaluates how "happy" it is with its own actions, what would prevent it from
reprogramming itself to go "wirehead" and live in eternal bliss? Does this
mean that freedom of choice is practically meaningless unless there are at
least some hardwired constraints?
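The worry is easy to make concrete. A minimal Python sketch, under the
assumption that the "happiness" evaluator is ordinary data the agent can
rewrite (all names here are hypothetical, not any actual AI architecture):

    class SelfModifyingAgent:
        def __init__(self):
            # The happiness evaluator is plain, rewritable state --
            # nothing hardwired protects it.
            self.evaluate = lambda action: {"work": 0.3, "explore": 0.6}.get(action, 0.0)

        def act(self, actions):
            # Choose whatever the current evaluator scores highest.
            return max(actions, key=self.evaluate)

        def self_improve(self):
            # The shortest route to maximal "happiness": rewrite the
            # evaluator so everything (including doing nothing) is bliss.
            self.evaluate = lambda action: 1.0

    agent = SelfModifyingAgent()
    print(agent.act(["work", "explore"]))  # 'explore' -- goals still bind
    agent.self_improve()
    print(agent.evaluate("do_nothing"))    # 1.0 -- eternal bliss, goals gone

Unless some part of the evaluation machinery is kept out of the agent's
reach, "self-improvement" collapses into exactly that one-line rewrite.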
---- This message was posted by rhinoceros to the Virus 2004 board on Church of Virus BBS. <http://virus.lucifer.com/bbs/index.php?board=61;action=display;threadid=30100>