Written by Dhairya Shandilya
"The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate" -Stephen Hawking
We live in a technological era of self-driving cars, autonomous drones, deep learning algorithms, computers that beat humans at chess and Go, and so on. So it is natural to ask: will artificial superintelligence replace humans, take our jobs, and destroy human civilization? Or will superintelligent systems simply become tools, like regular computers? In this article, we'll discuss superintelligence (SI), its usefulness, and the frightening devastation it could cause in the future.
What is SuperIntelligence?
A superintelligence is a hypothetical agent that needs minimal human assistance and possesses intellect far surpassing that of the brightest and most gifted human minds. In simple words, superintelligence (SI) is a software-based system with intellectual powers beyond those of humans across an almost comprehensive range of categories and fields of endeavor.
Artificial superintelligence (ASI) involves the software-based simulation of human intellectual capabilities such as learning, reasoning, and self-correction. The theoretical future creation of superintelligent machines is sometimes referred to as the Singularity. In that scenario, one potential outcome is the augmentation of human beings with superintelligence. A superintelligence would be capable of working out a solution when faced with a new problem. Nevertheless, the technology is still in the early days of its development. SI-style systems are increasingly a part of our everyday environment, for example:
1) Virtual Assistants
2) Self-Driving Cars
3) E-Mail Spam Filters
4) Searches on Online Platforms
5) Facebook Posts
6) Many more...
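To make one of the examples above concrete, e-mail spam filtering is often built on a naive Bayes word-frequency classifier. The sketch below is a minimal, hypothetical illustration of that idea, not any real provider's filter; the training messages and words are invented for the example.

```python
from collections import Counter

# Tiny invented training set: (message, is_spam)
messages = [
    ("win money now", True),
    ("free prize win", True),
    ("meeting at noon", False),
    ("project meeting notes", False),
]

spam_words = Counter()
ham_words = Counter()
for text, is_spam in messages:
    (spam_words if is_spam else ham_words).update(text.split())

def spam_score(text, alpha=1.0):
    """Naive Bayes-style ratio with Laplace smoothing: > 1 means 'more spam-like'."""
    vocab = set(spam_words) | set(ham_words)
    score = 1.0
    for word in text.split():
        p_spam = (spam_words[word] + alpha) / (sum(spam_words.values()) + alpha * len(vocab))
        p_ham = (ham_words[word] + alpha) / (sum(ham_words.values()) + alpha * len(vocab))
        score *= p_spam / p_ham
    return score

print(spam_score("win free money"))   # well above 1: flagged as spam
print(spam_score("project meeting"))  # well below 1: kept in the inbox
```

Real filters train on millions of messages and many more features, but the principle is the same: words seen mostly in spam push the score up, words seen mostly in legitimate mail push it down.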
Types of SuperIntelligence
Broadly, there are three main types of superintelligence:
1) Speed SuperIntelligence - A speed superintelligence could do everything a human does, but faster. This would make the outside world seem very slow to it. It might cope with this by being physically very small, or virtual.
2) Collective SuperIntelligence - A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better.
3) Quality SuperIntelligence - A quality superintelligence can carry out intellectual tasks that humans just can't in practice. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities.
Examples of SuperIntelligence (Briefly Explained)
1) Smart Personal Assistants - The first iterations were simpler phone assistants like Siri and Google Now (since succeeded by the more sophisticated Google Assistant), which could perform internet searches, set reminders, and integrate with your calendar. One of the best-known AI personal assistants is Alexa, which accepts voice commands to create to-do lists, order items online, set reminders, and answer questions.
2) Self-Driving Cars - A self-driving car is a vehicle capable of sensing its environment and moving on its own with little to no human input. This means it does not require a human to turn the steering wheel, apply the brakes, or press the accelerator. For more detailed information, please read our separate blog on self-driving cars by Yuvraj Dhillon.
3) Searches on Online Platforms - When we search for something online (such as on social media or shopping sites), the platform quickly returns a list of the most relevant products and recommends others we might be interested in, through sections such as “customers who viewed this item also viewed” and “customers who bought this item also bought”, as well as through personalized recommendations on the home page, at the bottom of item pages, and by email. Research has shown that recommenders increase sales by 5.9%. These platforms use artificial neural networks to generate these recommendations and related searches.
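While large platforms use neural networks as noted above, the core "customers who bought this also bought" idea can be sketched with a much simpler stand-in technique: item-based collaborative filtering over co-purchase data. Everything below (the customers, items, and function names) is invented for illustration.

```python
import math

# Toy purchase history: which customers bought which items (invented data)
purchases = {
    "alice": {"camera", "tripod"},
    "bob":   {"camera", "tripod", "lens"},
    "carol": {"camera", "lens"},
    "dave":  {"novel"},
}

def buyers(item):
    """Set of customers who bought the given item."""
    return {user for user, items in purchases.items() if item in items}

def similarity(a, b):
    """Cosine similarity between two items' buyer sets."""
    shared = len(buyers(a) & buyers(b))
    return shared / math.sqrt(len(buyers(a)) * len(buyers(b)))

def also_bought(item, k=2):
    """Top-k 'customers who bought this item also bought' candidates."""
    others = {i for items in purchases.values() for i in items} - {item}
    ranked = sorted(others, key=lambda o: similarity(item, o), reverse=True)
    return ranked[:k]

print(also_bought("camera"))  # tripod and lens rank far above the unrelated novel
```

Items frequently bought by the same customers score high, so the camera pulls in the tripod and lens while the unrelated novel scores zero. Production recommenders replace this similarity function with learned models, but the ranking structure is the same.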
4) Facebook Posts - When you upload photos to Facebook, the service automatically highlights faces and suggests friends to tag. Facebook uses AI to recognize faces. The company has invested heavily in this area, not only within Facebook but also through the acquisitions of facial-recognition startups like Face.com, which Facebook acquired in 2012 for a rumored $60M, Masquerade (2016, undisclosed sum), and Faciometrics (2016, undisclosed sum).
Why is superintelligence a threat that should be taken seriously?
In an article for Skeptic, Michael Shermer (the magazine’s founding publisher) put forth an argument for “why ASI is not an existential threat,” where “ASI” stands for “Artificial SuperIntelligence” and an “existential threat” is anything that could cause human extinction or the irreversible decline of civilization.
To be clear, there are many possible societal consequences of developing SI systems, including job losses through automation and future wars involving lethal autonomous weapons. But the most significant worry stems from an AI system with greater-than-human intelligence, or a “superintelligence.” It is the creation of such a system that has led many business leaders and scholars—including Elon Musk, Bill Gates, Stephen Hawking, and Nick Bostrom—to identify superintelligence as one of the greatest possible existential threats facing humanity.
When Oxford University professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong.
Since the publication of Bostrom's book in 2014, progress in artificial intelligence, machine learning, and deep learning has been very rapid. Artificial intelligence is now part of the public discourse, and most governments have some sort of strategy or road map to address it. In his book, Bostrom likened humanity's pursuit of AI to children playing with a bomb that could go off at any time.
There are all kinds of exciting AI tools and applications that are beginning to affect the economy in many ways. These shouldn't be overshadowed by hype around the hypothetical future point at which we build machines with the same general learning and planning abilities that humans have, or even superintelligent machines. These are two different contexts, and both require attention.
As Bostrom advises, rather than avoid pursuing ASI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governance, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.
Artificial intelligence will change the way conflicts are fought, with autonomous drones, robotic swarms, and remote and nanorobot attacks. In addition to worrying about a nuclear arms race, we'll need to monitor a global autonomous-weapons race.
AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, to create pornography in the likeness of a person who never acted in it, and more, damaging not only an individual's reputation but also their livelihood. The technology is getting so good that the likelihood of people being duped by it is high.
So, today, the more imminent threat comes not from a superintelligence, but from these useful yet potentially dangerous applications of AI.