Universitatea Titu Maiorescu din Bucuresti
Computers vs. Human Race: Is Human
Civilization at Stake?
Francisc Andrei-Angelo
Faculty of Informatics, Distance Learning (ID)
Group 106
The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and
rumors of artificial beings endowed with intelligence or consciousness by master craftsmen.
The seeds of modern AI were planted by classical philosophers who attempted to describe the
process of human thinking as the mechanical manipulation of symbols. This work culminated in
the invention of the programmable digital computer in the 1940s, a machine based on the
abstract essence of mathematical reasoning. This device and the ideas behind it inspired a
handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth
College during the summer of 1956. Those who attended would become the leaders of AI
research for decades. Many of them predicted that a machine as intelligent as a human being
would exist in no more than a generation and they were given millions of dollars to make this
vision come true.
Eventually, it became obvious that they had grossly underestimated the difficulty of the
project. In 1973, in response to the criticism from James Lighthill and ongoing pressure from
the U.S. Congress, the U.S. and British governments stopped funding undirected research into artificial
intelligence, and the difficult years that followed would later be known as an "AI winter". Seven
years later, a visionary initiative by the Japanese Government inspired governments and
industry to provide AI with billions of dollars, but by the late 80s the investors became
disillusioned and withdrew funding again.
Investment and interest in AI boomed in the first decades of the 21st century,
when machine learning was successfully applied to many problems in academia and industry
due to new methods, the application of powerful computer hardware, and the collection of
immense data sets.
The birth of artificial intelligence
In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics,
psychology, engineering, economics and political science) began to discuss the possibility of
creating an artificial brain. The field of artificial intelligence research was founded as an
academic discipline in 1956.
Cybernetics and early neural networks
The earliest research into thinking machines was inspired by a confluence of ideas that
became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had
shown that the brain was an electrical network of neurons that fired in all-or-nothing
pulses. Norbert Wiener's cybernetics described control and stability in electrical
networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing
signals). Alan Turing's theory of computation showed that any form of computation could be
described digitally. The close relationship between these ideas suggested that it might be
possible to construct an electronic brain.
Examples of work in this vein include robots such as W. Grey Walter's turtles and
the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic
reasoning; they were controlled entirely by analog circuitry.
Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and
showed how they might perform simple logical functions. They were the first to describe what
later researchers would call a neural network. One of the students inspired
by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In
1951 (with Dean Edmonds) he built the first neural net machine, the SNARC. Minsky was to
become one of the most important leaders and innovators in AI for the next 50 years.
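To make the idea concrete, the following is a minimal sketch (not Pitts and McCulloch's original formalism, and with weights and thresholds chosen purely for illustration) of a McCulloch-Pitts-style threshold unit: it sums weighted binary inputs and "fires" when the sum reaches a threshold, which is enough to compute simple logical functions such as AND, OR, and NOT.

```python
# Minimal sketch of a McCulloch-Pitts-style threshold neuron.
# The weights and thresholds below are illustrative choices, not historical values.
def neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return neuron([a], weights=[-1], threshold=0)

# Truth tables for the three gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
print(NOT(0), NOT(1))
```

Networks built from such units can in principle compute any Boolean function, which is what made the model interesting to later researchers.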
Turing's test
In 1950 Alan Turing published a landmark paper in which he speculated about the
possibility of creating machines that think. He noted that "thinking" is difficult to define and
devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter)
that was indistinguishable from a conversation with a human being, then it was reasonable to
say that the machine was "thinking". This simplified version of the problem allowed Turing to
argue convincingly that a "thinking machine" was at least plausible and the paper answered all
the most common objections to the proposition. The Turing Test was the first serious proposal
in the philosophy of artificial intelligence.
Game AI
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher
Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's
checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient
skill to challenge a respectable amateur. Game AI would continue to be used as a measure of
progress in AI throughout its history.
Progress of artificial intelligence
Artificial intelligence applications have been used in a wide range of fields
including medical diagnosis, stock trading, robot control, law, scientific discovery and toys.
However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into
general applications, often without being called AI because once something becomes useful
enough and common enough it's not labeled AI anymore. Many thousands of AI applications
are deeply embedded in the infrastructure of every industry." In the late 1990s and early 21st
century, AI technology became widely used as elements of larger systems, but the field is rarely
credited for these successes.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
1) artificial narrow intelligence – AI applied only to specific tasks
2) artificial general intelligence – AI applied to several areas, able to autonomously solve
problems it was never designed for
3) artificial super intelligence – AI applicable to any area, capable of scientific creativity, social
skills, and general wisdom.
To allow comparison with human performance, artificial intelligence can be evaluated
on constrained and well-defined problems. Such tests have been termed subject-matter expert
Turing tests. Smaller problems also provide more achievable goals, and there is an ever-growing
number of positive results.
In his famous Turing test, Alan Turing picked language, the defining feature of human
beings, as its basis. Yet there are many other useful abilities that can be described as showing
some form of intelligence, and evaluating them gives better insight into the comparative success
of artificial intelligence in different areas.
In what has been called the Feigenbaum test, the inventor of expert systems argued for
subject-specific expert tests. A 2003 paper by Jim Gray of Microsoft suggested extending the
Turing test to speech understanding, speaking, and recognizing objects and behavior.
AI, like electricity or the steam engine, is a general purpose technology. There is no
consensus on how to characterize which tasks AI tends to excel at. Some versions of Moravec's
paradox observe that humans are more likely to outperform machines in areas such as physical
dexterity that have been the direct target of natural selection. While projects such
as AlphaZero have succeeded in generating their own knowledge from scratch, many other
machine learning projects require large training datasets. Researcher Andrew Ng has suggested,
as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less
than one second of mental thought, we can probably now or in the near future automate using
AI."
Games provide a high-profile benchmark for assessing rates of progress; many games
have a large professional player base and a well-established competitive rating
system. AlphaGo brought the era of classical board-game benchmarks to a close. Games
of imperfect knowledge provide new challenges to AI in the area of game theory; the most
prominent milestone in this area was Libratus' poker victory in 2017. E-
sports continue to provide additional benchmarks; Facebook AI, Deepmind, and others have
engaged with the popular StarCraft franchise of videogames.
Broad classes of outcome for an AI test may be given as:
optimal: it is not possible to perform better (note that some such problems have also been
solved by humans)
super-human: performs better than all humans
high-human: performs better than most humans
par-human: performs similarly to most humans
sub-human: performs worse than most humans
Chess
An AI, Deep Thought, defeated a grandmaster in a regulation tournament game for the first time
in 1988; rebranded as Deep Blue, the system beat the reigning human world chess champion in
1997 (see Deep Blue versus Garry Kasparov).
Deep Blue versus Garry Kasparov was a pair of six-game chess matches between world
chess champion Garry Kasparov and an IBM supercomputer called Deep Blue. The first match
was played in Philadelphia in 1996 and won by Kasparov. The second was played in New York
City in 1997 and won by Deep Blue. The 1997 match was the first defeat of a reigning world
chess champion by a computer under tournament conditions.
Deep Blue's win was seen as symbolically significant, a sign that artificial intelligence was
catching up to human intelligence, and could defeat one of humanity's great intellectual
champions. Later analysis tended to attribute Kasparov's loss to uncharacteristically bad play
on his part, and to play down the intellectual significance of chess as a game that can be
conquered by brute force.
In December 2016, discussing the match in a podcast with neuroscientist Sam Harris,
Kasparov described a change of heart in his views of the match, stating: "While
writing the book I did a lot of research – analysing the games with modern computers, also
soul-searching – and I changed my conclusions. I am not writing any love letters to IBM, but my
respect for the Deep Blue team went up, and my opinion of my own play, and Deep Blue's play,
went down. Today you can buy a chess engine for your laptop that will beat Deep Blue quite
easily."
Go
AlphaGo defeated a European Go champion in October 2015 and, in March 2016, Lee Sedol,
one of the world's top players. According to Scientific American and other sources, most
observers had expected superhuman computer Go performance to be at least a decade away.
AlphaGo versus Lee Sedol, also known as the Google DeepMind Challenge Match, was
a five-game Go match between 18-time world champion Lee Sedol and AlphaGo, a computer
Go program developed by Google DeepMind, played in Seoul, South Korea between the 9th
and 15th of March 2016. AlphaGo won all but the fourth game; all games were won by
resignation. The match has been compared with the historic chess match between Deep Blue
and Garry Kasparov in 1997.
The winner of the match was slated to win $1 million. Since AlphaGo won, Google
DeepMind stated that the prize would be donated to charities, including UNICEF, and to Go
organisations. Lee received $170,000 ($150,000 for participating in the five games and an
additional $20,000 for winning one game).
After the match, The Korea Baduk Association awarded AlphaGo the highest Go
grandmaster rank – an "honorary 9 dan". It was given in recognition of AlphaGo's "sincere
efforts" to master Go. This match was chosen by Science as one of the Breakthrough of the
Year runners-up on 22 December 2016.
Go is a complex board game that requires intuition, creativity, and strategic thinking. It
has long been considered a difficult challenge in the field of artificial intelligence (AI) and is
considerably more difficult to solve than chess. Many in the field of artificial intelligence
consider Go to require more elements that mimic human thought than chess. Mathematician I.
J. Good wrote in 1965:
Go on a computer? – In order to programme a computer to play a reasonable game of
Go, rather than merely a legal game – it is necessary to formalise the principles of good
strategy, or to design a learning programme. The principles are more qualitative and mysterious
than in chess, and depend more on judgment. So I think it will be even more difficult to
programme a computer to play a reasonable game of Go than of chess.
Prior to 2015, the best Go programs only managed to reach amateur dan level. On the
small 9×9 board, the computer fared better, and some programs managed to win a fraction of
their 9×9 games against professional players. Prior to AlphaGo, some researchers had claimed
that computers would never defeat top humans at Go. Elon Musk, an early investor in
DeepMind, said in 2016 that experts in the field thought AI was 10 years away from achieving a
victory against a top professional Go player.
The match AlphaGo versus Lee Sedol is comparable to the 1997 chess match Deep Blue
versus Garry Kasparov, in which IBM's Deep Blue defeated reigning champion Kasparov, an
event seen as the symbolic point where computers became better than humans at chess.
AlphaGo differs most significantly from previous AI efforts in that it applies neural
networks in which the evaluation heuristics are not hard-coded by human beings but are instead
learned, to a large extent, by the program itself, from tens of millions of past Go games as well
as from games it played against itself. Not even AlphaGo's developers can fully explain how
AlphaGo evaluates a game position and picks its next move.
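As a purely illustrative sketch of what "learned evaluation" means (this is not AlphaGo's actual architecture; the board size, encoding, and random weights below are assumptions made only for illustration), a tiny network can score positions, and moves can then be chosen by preferring the positions the network rates most highly. In a real system the weights would be fitted to game records and self-play rather than initialized at random.

```python
# Minimal, illustrative sketch of a learned position evaluator -- NOT AlphaGo's
# real architecture. Board size, encoding, and the random weights are assumptions;
# a real system would learn the weights from game records and self-play.
import numpy as np

BOARD = 9  # hypothetical small board; the real matches were played on 19x19

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (BOARD * BOARD, 64))  # input -> hidden weights
w2 = rng.normal(0.0, 0.1, 64)                   # hidden -> value weights

def value(board):
    """Estimated win probability for the player to move.
    board: BOARD x BOARD array of +1 (own stones), -1 (opponent), 0 (empty)."""
    hidden = np.tanh(board.reshape(-1) @ W1)
    return 1.0 / (1.0 + np.exp(-(hidden @ w2)))

def pick_move(board):
    """Greedy selection: try a stone on every empty point and keep the move whose
    resulting position the evaluator likes best (seen from our side)."""
    best_move, best_score = None, -1.0
    for r in range(BOARD):
        for c in range(BOARD):
            if board[r, c] != 0:
                continue
            child = board.copy()
            child[r, c] = 1
            score = 1.0 - value(-child)  # -child: the position from the opponent's view
            if score > best_score:
                best_move, best_score = (r, c), score
    return best_move, best_score

print(pick_move(np.zeros((BOARD, BOARD))))
```

The real AlphaGo combined learned policy and value networks with Monte Carlo tree search; the sketch above keeps only the idea that the evaluation itself is learned rather than hand-written.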
Match against Fan Hui
AlphaGo defeated European champion Fan Hui, a 2 dan professional, 5–0 in October
2015, the first time an AI had beaten a human professional player at the game on a full-sized
board without a handicap. Some commentators stressed the gulf between Fan and Lee, who is
ranked 9 dan professional. The computer programs Zen and Crazy Stone had previously defeated
human players ranked 9 dan professional with handicaps of four or five stones. Canadian AI
specialist Jonathan Schaeffer, commenting after the win against Fan, compared AlphaGo with a
"child prodigy" that lacked experience, and considered, "the real achievement will be when the
program plays a player in the true top echelon." At the time, he believed that Lee would win the
match in March 2016. Hajin Lee, a professional Go player and the International Go Federation's
secretary-general, commented that she was "very excited" at the prospect of an AI challenging
Lee, and thought the two players had an equal chance of winning.
In the aftermath of his match against AlphaGo, Fan Hui noted that the game had taught
him to be a better player, and to see things he had not previously seen. By March
2016, Wired reported that his ranking had risen from 633 in the world to around 300.
What is Artificial Intelligence?
Artificial intelligence (AI) is the study of artificially created intelligent agents, or
machines, to use the colloquial term. Such machines have the ability to learn about the world
that surrounds them and to take the actions that have the best chance of achieving success.
Research into AI draws on tools from other sciences such as psychology, linguistics, and
computer science, and it overlaps with other fields of study such as facial recognition, robotics,
and data mining. As we can see, AI is a very broad term. Now, let's take a look at human
intelligence.
What is Human Intelligence?
Human intelligence involves a person's ability to learn from his or her previous
experiences. These could be their education, their work experience, or simply a situation that
they found themselves in and learned something from. Most importantly, there are many
types of information a human mind can provide. For example, a person can talk about an
observation during a trip abroad or on their daily morning commute to work and give some
valuable insight into their field of study or expertise. Now that we have an understanding of
both intelligence types, let’s compare AI vs. human intelligence to see if we can spot some
interesting differences.
Human vs. Machines
There are some areas where human intelligence has the advantage over machines. Take
multitasking, for example: a person can work on many different tasks at the same time,
something a machine would take much longer to manage. Another important area where people
have the advantage is decision making; in this category, even the most advanced machines are
only on par with a six-year-old child. Humans are so far ahead of computers here because of
their ability to learn from experience and to weigh the many factors involved.
The area where machines have the definitive advantage is processing speed. The fastest
supercomputers can perform roughly 93,000 trillion operations per second. For example,
suppose a doctor can make a diagnosis in ten minutes; an artificial intelligence system could, in
principle, make on the order of one million diagnoses in that amount of time. Such processing
speed is what allows computers to excel in areas such as chess, since they can evaluate
hundreds of thousands of moves per second.
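As a rough back-of-the-envelope illustration of the comparison above (the operations-per-diagnosis figure below is a made-up assumption, chosen only so the numbers line up with the figures quoted in the text):

```python
# Back-of-the-envelope throughput comparison behind the figures above.
# The operations-per-diagnosis value is a hypothetical assumption for illustration.
ops_per_second = 93_000 * 10**12      # ~93,000 trillion operations per second
window = 10 * 60                      # the ten minutes a doctor needs for one diagnosis
ops_per_diagnosis = 5.6 * 10**13      # assumed cost of one automated diagnosis

diagnoses = ops_per_second * window / ops_per_diagnosis
print(f"{diagnoses:,.0f} diagnoses in ten minutes")  # roughly one million
```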
Human Brain vs. AI: Who is the Winner?
After considering the different elements, the human brain has to be the clear winner.
When we get down to it, AI can only compete in some specific areas, like the chess example
mentioned above. If your company is faced with an important decision or you are looking to
create a state-of-the-art product, you will need human workers. Companies hire our software
developers because of their wide experience and the knowledge that they bring to the table.
This is something that a computer cannot compete with.
We hope that this technology vs. human mind comparison was useful for you. While
it is possible to use AI for some routine tasks, it would not be a good decision to rely on it
when creating a new product. Whenever important factors must be taken into account, such as
user behavior, product design, and other areas requiring highly specialized knowledge, only
human intelligence will do. Humans can adapt to ever-evolving market conditions to make sure
that whatever you are creating will be competitive and well received by users.
Think of AI as a pure consumer of data. It will be able to learn from everything that you
feed into it, but it will never go out and look for new information on its own. Artificial
intelligence relies on humans in this regard to find new data, break it down, and input it into
the computer for its neural networks to process and learn from. If this supply of human-
generated data were to stop, artificial intelligence would simply collapse. For this reason,
you should not be all that worried about machines taking over your job. After all, you provide a
lot of valuable insight that allows your company to navigate the marketplace and ensure its
product or service offering will be well received and profitable. This is why your job is safe
for the foreseeable future.
Why We Should Think About the Threat of Artificial
Intelligence
The futurist and inventor Ray Kurzweil thinks true, human-level A.I. will be here in less
than two decades. My estimate is at least double that, especially given how little progress has
been made in computing common sense; the challenges in building A.I., especially at the
software level, are much harder than Kurzweil lets on.
But a century from now, nobody will much care about how long it took, only what
happened next. It’s likely that machines will be smarter than us before the end of the century—
not just at chess or trivia questions but at just about everything, from mathematics and
engineering to science and medicine. There might be a few jobs left for entertainers, writers,
and other creative types, but computers will eventually be able to program themselves, absorb
vast quantities of new information, and reason in ways that we carbon-based units can only
dimly imagine. And they will be able to do it every second of every day, without sleep or coffee
breaks.
For some people, that future is a wonderful thing. Kurzweil has written about a
rapturous singularity in which we merge with machines and upload our souls for immortality;
Peter Diamandis has argued that advances in A.I. will be one key to ushering in a new era of
“abundance,” with enough food, water, and consumer gadgets for all. Skeptics like Erik
Brynjolfsson and I have worried about the consequences of A.I. and robotics for employment.
But even if you put aside the sort of worries about what super-advanced A.I. might do to the
labor market, there’s another concern, too: that powerful A.I. might threaten us more directly,
by battling us for resources.
Most people see that sort of fear as silly science-fiction drivel—the stuff of “The
Terminator” and “The Matrix.” To the extent that we plan for our medium-term future, we
worry about asteroids, the decline of fossil fuels, and global warming, not robots. But a dark
new book by James Barrat, “Our Final Invention: Artificial Intelligence and the End of the
Human Era,” lays out a strong case for why we should be at least a little worried.
Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is
that the drive for self-preservation and resource acquisition may be inherent in all goal-driven
systems of a certain degree of intelligence. In Omohundro’s words, “if it is smart enough, a
robot that is designed to play chess might also want to build a spaceship,” in order to obtain
more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat
writes, might expand “its idea of self-preservation … to include proactive attacks on future
threats,” including, presumably, people who might be loath to surrender their resources to the
machine. Barrat worries that “without meticulous, countervailing instructions, a self-aware,
self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,”
even, perhaps, commandeering all the world’s energy in order to maximize whatever
calculation it happened to be interested in.
Of course, one could try to ban super-intelligent computers altogether. But “the
competitive advantage—economic, military, even artistic—of every advance in automation is so
compelling,” Vernor Vinge, the mathematician and science-fiction author, wrote, “that passing
laws, or having customs, that forbid such things merely assures that someone else will.”
But before we get complacent and decide there is nothing to worry about after all, it is
important to realize that the goals of machines could change as they get smarter. Once
computers can effectively reprogram themselves, and successively improve themselves, leading
to a so-called “technological singularity” or “intelligence explosion,” the risks of machines
outwitting humans in battles for resources and self-preservation cannot simply be dismissed.
One of the most pointed quotes in Barrat’s book belongs to the legendary serial A.I.
entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in
the history of biological evolution: “We’re at that point analogous to when single-celled
organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out
what the hell this thing is that we’re creating.”
Already, advances in A.I. have created risks that we never dreamt of. With the advent of
the Internet age and its Big Data explosion, “large amounts of data is being collected about us
and then being fed to algorithms to make predictions,” Vaibhav Garg, a computer-risk specialist
at Drexel University, told me. “We do not have the ability to know when the data is being
collected, ensure that the data collected is correct, update the information, or provide the
necessary context.” Few people would have dreamt of this risk even twenty years ago.
What risks lie ahead? Nobody really knows, but Barrat is right to ask.
Bill Gates: A.I. is like nuclear energy — ‘both promising and
dangerous’
The power of artificial intelligence is “so incredible, it will change society in some very deep
ways,” said billionaire Microsoft co-founder Bill Gates.
Some ways will be good, some bad, according to Gates.
“The world hasn’t had that many technologies that are both promising and dangerous — you
know, we had nuclear energy and nuclear weapons,” Gates said March 18 at the 2019 Human-
Centered Artificial Intelligence Symposium at Stanford University.
Jeff Bezos has also expressed concerns about killer AI.
“I think autonomous weapons are extremely scary,” said Bezos at the George W. Bush
Presidential Center’s Forum on Leadership in April. The artificial intelligence tech that “we
already know and understand are perfectly adequate” to create these kinds of weapons, said
Bezos, “and these weapons, some of the ideas that people have for these weapons, are in fact
very scary.”
Meanwhile, AI also has the potential to do a lot of good for humanity, Gates said, because it can
sort vast quantities of data much more proficiently and efficiently than humans.
“When I see it applied to something that without AI, it is just too complex, we never would
have seen how that system works, that I feel like, ‘Wow, that is a very good thing.’”
For example, Gates said, the “nature of these technologies to find patterns and insights...is a
chance to do something in terms of social science policy, particularly education policy, also, you
know, health care quality, health care cost — it's a chance to take systems that are inherently
complex in nature.”
Elon Musk: ‘Mark my words — A.I. is far more dangerous than
nukes’
Tesla and SpaceX boss Elon Musk has doubled down on his dire warnings about the danger of
artificial intelligence.
Speaking at the South by Southwest tech conference in Austin, Texas, in March 2018, the
billionaire tech entrepreneur called AI more dangerous than nuclear warheads and said there
needs to be a regulatory body overseeing the development of superintelligence.
It is not the first time Musk has made frightening predictions about the potential of artificial
intelligence — he has, for example, called AI vastly more dangerous than North Korea — and he
has previously called for regulatory oversight.
Some have called his tough talk fear-mongering. Facebook founder Mark Zuckerberg said
Musk’s doomsday AI scenarios are unnecessary and “pretty irresponsible.” And Harvard
professor Steven Pinker also recently criticized Musk’s tactics.
“The biggest issue I see with so-called AI experts is that they think they know more than they
do, and they think they are smarter than they actually are,” said Musk. “This tends to plague
smart people. They define themselves by their intelligence and they don’t like the idea that a
machine could be way smarter than them, so they discount the idea — which is fundamentally
flawed.”
Based on his knowledge of machine intelligence and its developments, Musk believes there is
reason to be worried.
In his analysis of the dangers of AI, Musk differentiates between case-specific applications of
machine intelligence like self-driving cars and general machine intelligence, which he has
described previously as having “an open-ended utility function” and having a “million times
more compute power” than case-specific AI.
“I am not really all that worried about the short term stuff. Narrow AI is not a species-level risk.
It will result in dislocation, in lost jobs, and better weaponry and that kind of thing, but it is not a
fundamental species level risk, whereas digital super intelligence is,” explained Musk.
“So it is really all about laying the groundwork to make sure that if humanity collectively
decides that creating digital super intelligence is the right move, then we should do so very very
carefully — very very carefully. This is the most important thing that we could possibly do.”
Still, Musk is in the business of artificial intelligence with his venture Neuralink, a company
working to create a way to connect the brain with machine intelligence.
Musk hopes “that we are able to achieve a symbiosis” with artificial intelligence: “We do want a
close coupling between collective human intelligence and digital intelligence, and Neuralink is
trying to help in that regard by trying to create a high bandwidth interface between AI and the
human brain,” he said.
Conclusion
In my opinion, the existential risk from artificial general intelligence is captured by the
hypothesis that substantial progress in artificial general intelligence (AGI) could someday result
in human extinction or some other unrecoverable global catastrophe, or in an AI takeover in
which artificial intelligence becomes the dominant form of intelligence on Earth, with computers
or robots effectively taking control of the planet away from the human species.