Recall the movie Transcendent Man
. . . What is your overall opinion?
. . . What are Kurzweil's achievements?
. . . Is he a nice guy?
. . . Is he healthy?
. . . What is his opinion on the 4 curves?
. . . . . . linear
. . . . . . exponential
. . . . . . S
. . . . . . plateau
. . . What is his relationship to his father?
. . . . . . goals?
. . . . . . importance?
. . . . . . suppose someone made a movie about you
. . . . . . . . .would it feature your father as much?
. . . Is the movie Transcendence based somehow on this movie?
. . . . . .Let's check the trailer and decide
Ray Kurzweil is also
. . . an inventor
. . . author of
. . . . . . The Singularity is Near
. . . . . . . . . Penguin Books, 2005
. . . . . . and some other books
. . . a Singularitarian
. . . co-founder of Singularity University
Singularitarians (and Singularitarianism)
. . . Related movements include
. . . . . . Transhumanism
. . . . . . Extropianism
. . . . . . Immortalism
. . . Singularitarians hold that
. . . . . . a general technological singularity will happen
. . . . . . specifically, an AI singularity will happen
. . . . . . . . . (like in Vernor Vinge's article)
. . . . . . it will happen soon "enough"
. . . . . . . . . that means it will affect you
. . . . . . . . . (perhaps not me)
. . . . . . . . . Kurzweil estimates 2045
. . . . . . we should take action, therefore
. . . . . . . . . try to make it good!
. . . . . . . . . . . . since it could perhaps be bad
. . . . . . . . . . . . How? Why? Is it possible?
. . . Are the Singularitarians right?
. . . Anyone here thinking of signing up?
. . . Some people think singularitarianism is like a religion
. . . Kurzweil thinks it is a quantifiable result of existing trends
. . . . . . About 2045!
. . . Kurzweil co-founded Singularity University
. . . . . . Let's take a look...
. . . . . . There is significant risk in "putting all your eggs in one basket"
. . . . . . What if you assume exponential change will solve the world's problems and...
. . . . . . . . . you hit an S-curve? (see the sketch after this section)
. . . . . . . . . Oops - woulda, coulda, shoulda...
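For discussion, here is a minimal Python sketch (my own illustration; the formulas and constants are arbitrary, not Kurzweil's) of the four curve shapes mentioned above: linear, exponential, S, and plateau. Note how the exponential and the S-curve are nearly indistinguishable early on and then diverge:

import math

# Four illustrative growth shapes; constants are arbitrary, chosen only so the
# differences are easy to see in a small table.
def linear(t):      return 1 + 2 * t                     # constant additive growth
def exponential(t): return 2 ** t                        # fixed doubling time
def s_curve(t):     return 100 / (1 + math.exp(5 - t))   # logistic: exponential at first, then levels off
def plateau(t):     return 100 * (1 - math.exp(-t / 2))  # rises quickly, then flattens out

print(f"{'t':>3} {'linear':>8} {'exponential':>12} {'S-curve':>9} {'plateau':>9}")
for t in range(0, 13, 2):
    print(f"{t:>3} {linear(t):>8.1f} {exponential(t):>12.1f} {s_curve(t):>9.1f} {plateau(t):>9.1f}")

An assumption like "exponential change will solve the problem" quietly depends on which of these shapes the real trend turns out to follow.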
More on the Singularity
Or maybe, "The Singularities"
Roots of the concept
. . .Physics and math
Mathematics can be "undefined"
. . .ever see "NaN"?
. . .you might divide by 0, for example
Singularities in physics
. . .Consider the relationship among mass, size & density
. . . . . .Gas is a familiar example
. . . . . . . . .Balloons
. . . . . . . . . . . .Volume of breath vs. volume of balloon
. . . . . . . . .Tires
. . . . . . . . . . . .Pumping a tire
. . . . . . . . . . . .Volume of pump vs. volume of bike tire
. . . . . . . . . . . .Changing volume of airspace in pump
. . . . . .Solids work somewhat similarly!
. . . . . .What is the expression for density?
. . . . . .The center of a black hole in space
An Economic Singularity
Labor:
. . .How much labor does it take to make a loaf of bread?
. . .A car?
. . .Food for a person for a day?
What happens if required labor reaches zero? (see the sketch after this section)
. . .How might this happen?
A crucial fact is:
. . .More labor needed for more stuff
. . . . . .no longer true!
. . . . . .What happens then?
. . . This connects to the mincome concept
. . . . . . Good idea?
. . . . . . What could happen?
Does this relate to any of our project topics?
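As a rough illustration of the labor question above (all numbers are made up, not data), here is a small sketch of what happens to the labor share of a product's price as the required labor hours shrink toward zero:

wage = 20.0          # dollars per hour of human labor (assumed)
other_costs = 0.50   # materials, energy, machines, etc. per loaf (assumed)

for labor_hours in [0.25, 0.1, 0.01, 0.001, 0.0]:
    price = labor_hours * wage + other_costs
    labor_share = (labor_hours * wage) / price
    print(f"labor hours = {labor_hours:<6} price = ${price:5.2f}  labor share = {labor_share:.0%}")

When the labor share reaches zero, wages stop being a way to distribute the product, which is where ideas like mincome enter the discussion.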
Technological singularity
Let's look at a few graphs:
. . .images.google.com
. . .query: kurzweil graphs
Does this relate to any of our project topics?
The AI singularity
I. J. Good (mathematician):
. . ."Speculations concerning the first ultraintelligent machine"
. . . . . .in Franz L. Alt and Morris Rubinoff, editors,
. . . . . .Advances in Computers,
. . . . . .vol. 6 (1965), pp. 31-88.
. . . . . .Available on the Web.
Vernor Vinge (sci-fi writer and ex-CS prof.):
. . ."The coming technological singularity:
how to survive in the post-human era"
. . . . . .NASA technical report CP-10129 (1993)
. . . . . .and Whole Earth Review (Winter 1993).
. . . . . .Available on the Web
Vinge's "Technological Singularity"
. . .See the Vinge quote in his Wikipedia article, etc.
. . . Kurzweil's turning on the universe
. . . . . . intelligent self-propagation in the limit
Are there any risks?
See e.g. www.stoptherobots.org (archive: goo.gl/kPIzn7)
How will we know if a computer is intelligent?
. . .Turing Test
. . .Yearly Loebner Prize competitions
. . . . . . What's the deal currently?
. . .Xprize competitions
The life span singularity
Most closely associated with Aubrey de Grey
. . .see images.google.com
"Escape velocity" concept
. . .Does it work??
. . . . . .We could try simulating it...
Singularity University
. . . http://singularityu.org/
. . . . . . Devoted to the technological singularity
. . . . . . . . . (What was that again?)
Do the technologies below relate to our project topics?
. . . . . . Exponentially improving technologies include
. . . . . . . . .Energy
. . . . . . . . . . . .Only one is high-exponential!
. . . . . . . . .Biotech (esp. genetics)
. . . . . . . . . . . .Carlson curves
. . . . . . . . .Computing (esp. AI & robotics)
. . . . . . . . . . . .What "law" is most famous?
. . . . . . . . . . . .AI has its own special singularity
. . . . . . . . . . . . . . .Robotics contest: RoboCup
. . . . . . . . .Nanotech & fabrication
. . . . . . . . .Let's check the web for some graphs and such for these exponentially improving technologies
NOTE: we're not talking just exponential, but high-exponential
Software productivity per programmer hour is exponential too, but low-exponential (see the sketch below)
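As a quick numerical illustration of the high-exponential vs. low-exponential distinction (the growth rates below are assumptions chosen for illustration, not measurements), here is a sketch of how long a 1000-fold improvement takes at different annual growth rates:

import math

def years_to_1000x(annual_growth_rate):
    """Years needed for a 1000x improvement at a fixed annual growth rate."""
    return math.log(1000) / math.log(1 + annual_growth_rate)

# Illustrative rates only: ~100%/yr (doubling yearly), ~41%/yr, and a few %/yr.
for label, rate in [("high-exponential, doubling yearly", 1.00),
                    ("moderate, roughly doubling every two years", 0.41),
                    ("low-exponential, a few percent per year", 0.03)]:
    print(f"{label}: about {years_to_1000x(rate):.0f} years to a 1000x improvement")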
Exercise:
In small groups:
Take your topics (or any topic) & discuss:
. . . Where will it be in 5 years?
. . . 10 years?
. . . 20 years?
. . . 50 years?
. . . 100 years?
. . . 200 years?
. . . 1,000 years?
. . . 10,000 years?
. . . Report to the class
"Tastes Like the Singularity, But Less Filling"
The first name of...
Chapter fifteen in...
The Human Race to the Future
Chapter Fifteen
Tastes Like the Singularity
Some think the world as we know it will soon end, ushering
in an unimaginable (but hopefully utopian) future. This chapter explains why it
is called the “singularity,” and why it’s
exciting. Will it radically transform the fabric of reality?
If the artificial intelligence singularity happens, the
world will soon thereafter find itself under the sway of entities much smarter than
ourselves. Then things will be, as J. B. S. Haldane put it in 1928,
“not only queerer than we suppose, but queerer than we can suppose.”1 We will be no more able to understand,
outwit, or control an entity much smarter than ourselves than a cow can a
person. At least that’s the theory. The counterpoint is the claim that for
humans to aspire to build a robot more
intelligent than ourselves is impossibly absurd, like a monkey aspiring to reach the Moon by climbing a tree. But who is to say that no monkey has ever tried?
Singularities
A singularity is a
particularly dramatic type of situation in which a mathematical description
stops working. For example, suppose we describe some unknown quantity x using the equation x = 10/2. Then x = 5. No singularity or anything else unusual there. And if the ‘2’ decreases, x gets bigger: 10/1 = 10, 10/0.1 = 100, and so on. But what if it decreases to 0? Then we have x = 10/0, and there is no value for x, because ordinary arithmetic does not say what happens if you divide by zero. The value of x in this case is said to be “undefined.”
More generally, when what you are dividing by becomes zero, whether it is money
on line 54 of the infamous Connecticut 2008 income tax
form CT-1040,2 volume of the mass at the center of a black hole, or whatever it may be, you’ve encountered a
singularity, and there is no answer. Connecticut tax authorities might take a
dim view of the matter, but astrophysicists are concerned: It
is thought that inside a black hole, the gravitational field forces everything inside
the event horizon into a dot at the center. Since
calculating density requires dividing by volume (density = mass/volume), the
density of matter at the center of a black hole would be undefined if the
volume truly became zero, thus creating a singularity. Luckily for reality,
physics has proposed theories, like quantum gravity, that avoid this
mathematical modeling problem by allowing volume to get very small while
preventing it from becoming precisely zero.
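To make the arithmetic above concrete, here is a minimal sketch (my own illustration, not from the chapter; the mass value is an arbitrary placeholder) showing how 10/d and mass/volume both grow without bound as the divisor shrinks, and what ordinary computer arithmetic does at exactly zero:

mass = 2.0e31  # kg; an arbitrary illustrative mass, roughly ten suns' worth

for d in [2.0, 1.0, 0.1, 0.001, 1e-9]:
    x = 10 / d           # grows without bound as the divisor d shrinks
    density = mass / d   # same pattern, reading d as a volume in cubic meters
    print(f"d = {d:8.0e}   10/d = {x:12.3e}   mass/volume = {density:12.3e}")

try:
    print(10 / 0)                  # at exactly zero, arithmetic gives up:
except ZeroDivisionError as err:   # Python refuses rather than inventing an answer
    print("10/0 is undefined:", err)

print(float("inf") - float("inf"))  # prints nan ("not a number"), floating point's marker for undefined results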
The AI singularity
Somewhat similarly, the AI singularity occurs when the
attempt to calculate the limits of computer intelligence breaks down,
seemingly predicting an unending spiral toward infinite intelligence. However,
truly infinite intelligence can’t happen any more than a misguided Connecticut resident could
have made the entire state financial infrastructure go “poof!” in
2008 by trying to fill out line 53 without having any Connecticut adjusted
gross income. Similarly, there is something very strange at the center of a
black hole, but it is real even if we don’t yet know exactly
what it is.
Singularities are properties of defective descriptions of real phenomena, not of the phenomena themselves. For the AI singularity, the real phenomenon involves computers getting
steadily more powerful. They will come to outpace human intelligence in more and
more ways. Computers have long exceeded our intelligence in speed and
reliability of arithmetic calculation. They can play chess better. They can play Jeopardy! better. Each new
generation of modern computers can only be designed with the help of previous
generations of computers. This process will continue, but there will never be a
moment when computers suddenly become smarter than humans and
take over, because intelligence is seemingly so complex and indefinable a
concept that no single satisfactory measure of it exists, or perhaps can exist,
and therefore there can be no clean line of demarcation between less
intelligent, and more intelligent, than humans.
Thus we won’t wake up one day to find our previously
loved machines suddenly informing us, as the notorious Japanese video game Zero Wing put it, “All your base are
belong to us.”3 But the trends do suggest that
they are gaining greater and greater intelligence and influence
on our lives, perhaps eventually with revolutionary results.
So what is intelligence and how can we
tell if computers have it? The tricky question of properly defining and
measuring intelligence does not seem to be solvable. At least, it hasn’t been
solved yet. Nonetheless, it is obvious that intelligence exists and that some people
have more of it than others. The classical approach to defining when computers
have intelligence is the so-called Turing Test, created by British code breaker and war hero Alan
Turing.4 (Turing was long thought to have later committed suicide by
eating a poisoned apple, like Snow White, after being convicted of homosexuality and then
“treated” with hormone injections in accordance with the British legal process
of the time. However, the detailed cause of death is now in dispute.)
In essence the Turing Test says that, in a
keyboard chat session, if one can’t tell whether one is texting with a chatbot or a person,
and it is a chatbot, the chatbot should be considered intelligent. This is a
clever idea, though not perfect:
- One problem is its assumption that writing
intelligent-seeming text messages actually requires intelligence. Maybe it doesn’t.
- Another is that people must be able to tell the
difference between text messages produced by intelligent vs.
non-intelligent entities. Maybe they can’t. For example, in 2014 a chatbot
called “Eugene Goostman” posed as a 13-year-old speaker of English as a second language and, many believe, passed the Turing Test. However, no one
seriously claims the bot is actually intelligent.
- A third is that it ignores the possibility that a
computer could be intelligent yet still unable to pass the test, somewhat
as an intelligent person who is not fluent in your language would be unable to feign fluency.
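To make the scoring of such a test concrete, here is a minimal sketch (the verdicts are invented, and the 30% cutoff is a commonly cited but debated convention, not part of the test's definition) of how a Turing-Test-style contest might be scored, namely by the fraction of judges who mistake the chatbot for a person:

def deception_rate(verdicts):
    """verdicts: list of booleans, True meaning the judge guessed 'human' for the bot."""
    return sum(verdicts) / len(verdicts)

# Hypothetical verdicts from ten judges after keyboard chats with the bot.
verdicts = [True, False, False, True, False, True, False, False, True, False]

rate = deception_rate(verdicts)
threshold = 0.30  # assumed pass cutoff; the right threshold is itself debatable
outcome = "passes" if rate > threshold else "fails"
print(f"Judges fooled: {rate:.0%}, so the bot {outcome} this particular test")

This is the sense in which “Eugene Goostman” is said to have passed in 2014, which is exactly why the caveats above matter: clearing a scoring convention is not the same as being intelligent.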
Turing Test considered harmful
The first chatbot was the 1967
program ELIZA.5 J. Weizenbaum, its creator, wrote, “ELIZA created the most
remarkable illusion of having ‘understood’ in the minds of the many people who
conversed with it.”6 ELIZA is probably too primitive to have that
effect on today’s much more sophisticated computer users and does not pass the
Turing Test (it’s been
tried). Yet the Turing Test is useful and has inspired a regular contest. Since
1991 the “Loebner Prize”7 has been awarded yearly to the owner of
the chatbot contestant that comes closest to fooling a panel of human judges.
As a side note, AI pioneer Marvin Minsky is on record as
offering a $100 cash prize to anyone who can get Loebner to stop sponsoring the
“stupid … obnoxious and unproductive” prize.8 For his part, Loebner
(a single gentleman and advocate of legalized prostitution) argues this
actually makes Minsky a co-sponsor of the prize, since he would have to give
his cash offering to the owner of the first chatbot to fully pass the test, finally
winning Loebner’s Grand Prize and thereby ending the annual competitions.
The Turing Test is clearly suspect on logical grounds alone (as explained above), and almost anyone working on chatbots will confirm that, in
practice, they don’t consider their impressive creations to be truly
intelligent. But that is likely to change at some point. Chatbot performance
appears to be generally improving from year to year, so progress is occurring.
Indeed, the winner’s performance in the Loebner Prize competitions
over time would appear to be one way to measure progress in computer
intelligence. Although not a perfect metric, it is an interesting one.
Other metrics exist as well, also
imperfect but very different from chatbot performance and
from each other. One measures a computer’s creativity (http://goo.gl/7pkwJp). Game playing is another fruitful source of potential ways to measure
improvements in computer abilities that seem to require intelligence, because games tend to provide a clear context that
supports quantifying performance. Steadily improving chess computers defeated the reigning human world champion years ago, in 1997.9 Soccer is different from chess, but robots compete in soccer in the RoboCup games, held yearly since 1997. Their soccer performance represents another metric for
intelligence.
A trajectory of improvement in a composite of
different tasks indicative of computer intelligence is more
convincing than one of improvement in any one metric, in part because intelligence itself is such a
complex, composite attribute. A useful approach might be to keep a running
count of human games that machines are able to play better than humans. Chess and
Jeopardy! are already
there, but soccer is not. (Hopeful RoboCup organizers, however, have a stated goal: “By the year 2050, develop a team of fully autonomous humanoid
robots that can win
against the human world soccer champion team.”10)
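A rough sketch of the running-count idea (the task list and some of the years below are illustrative assumptions, not a careful survey) might look like this:

# Task -> year a machine arguably surpassed the best humans (None = not yet).
milestones = {
    "arithmetic": 1950,              # illustrative; early computers already excelled here
    "chess": 1997,                   # mentioned in the chapter
    "Jeopardy!": 2011,               # approximate; treat as an assumption
    "soccer (RoboCup target)": None, # organizers' stated goal is 2050
}

def running_count(milestones, year):
    """How many tasks machines had surpassed the best humans at, as of a given year."""
    return sum(1 for y in milestones.values() if y is not None and y <= year)

for year in (1990, 2000, 2015, 2049):
    print(year, running_count(milestones, year))

The interesting signal is not the count in any one year but its trajectory over time.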
What we can do
The AI singularity will not rear
up overnight, instantly changing your life dramatically for either the worse
or, as riskily assumed by some, the better. Every age has its messianic movements and
its rapturous apocalypticists. Still, AI does appear to be improving. Computers
are already far in advance of human arithmetic intelligence, and society has leveraged that into many benefits, from calculators to income tax software to spacecraft navigation
systems and more. This will continue to happen with other computer capabilities.
Thus the number of such capabilities that exceed human performance, such as
mishap-free motor vehicles, will grow progressively.
Popular movies have long relied on the concept of a
secretive and misanthropic “mad scientist” who creates a robot of great
capabilities. That will probably not happen. It takes a sizeable community of
skilled humans to create even a pencil. Referring to everything from chopping trees for the
body to making rubber and metal for the eraser, L. E. Read notes in his classic
essay I, Pencil, “… not a single
person on the face of this Earth knows how to make me.”11 Even the
simplest computer is obviously far more complex than a pencil. For a robot to
create another robot of greater capability than itself would require either
large numbers of humans and other computers to help, just as it does now, or a
single robot with the intelligence, motor skills, and financial resources of thousands of humans and
their computers, factories, banks, etc. How many thousands? There is no way to
know for sure. But consider that human societies of thousands, once isolated,
have lost even basic pre-industrial technologies.12 Tasmania is a well-known
example.
An important need is for metrics that can tell
us, in practical terms, the rate of progress by which artificial intelligence is marinating
society. Arithmetic, the Turing Test, chess, Jeopardy!, soccer, and even self-driving cars are interesting
but do not fit the bill by themselves. As factors in a richer, composite metric, however, they can play a part. Another factor that
might be useful is to count the rate of new AI applications becoming available
over time. A suitable composite metric should be debated and converged upon by
society.
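As a sketch of what such a composite metric could look like (every factor name, weight, and number below is hypothetical; the point is only the structure), one might normalize each factor to a 0-to-1 scale and combine them with agreed weights:

# factor name: (current value, value that would count as 1.0 on that factor's scale)
factors = {
    "human games surpassed (count)": (3, 10),
    "new AI applications per year":  (50, 500),
    "chatbot deception rate":        (0.25, 1.0),
}
weights = {
    "human games surpassed (count)": 0.4,
    "new AI applications per year":  0.4,
    "chatbot deception rate":        0.2,
}

def composite_index(factors, weights):
    """Weighted sum of factors, each capped at 1.0 on its own scale."""
    return sum(weights[name] * min(value / scale, 1.0)
               for name, (value, scale) in factors.items())

print(f"Composite AI-progress index (0 to 1): {composite_index(factors, weights):.2f}")

Which factors, scales, and weights belong in such an index is precisely the debate the chapter says society should have.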