Will we some day invent super-intelligent computers capable of absorbing and destroying the human race?
It’s a question that science fiction writers have loved to explore since the early days of computing. It’s an evolution of the fear that inspired Mary Shelley’s Frankenstein, really.
Will human beings take their desire to “play God” too far and end up being destroyed by their own creation?
We have had similar fears about cloning, gene hacking, and weather manipulation. But there is something about computers and artificial intelligence that rings louder alarm bells for many of today’s top scientists working in the field. And while AI and robotics continue to be among the fastest advancing branches of scientific research–and recent developments promise to raise human quality of life to new, higher standards–the cold, hard truth of the matter is that this technology could be very dangerous.
What is the singularity?
The technological singularity is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
What will the singularity look like?
Because “the singularity” is a hypothetical situation, we don’t really know.
The concept is not based on any one scientific or technological fact but rather is the product of a kind of guesswork–guesses made by some of the most brilliant minds of this century, mind you, but guesses nonetheless–based on what we currently know about the limits of computing, coding, and the human brain.
At what point in machine learning do we consider a computer/program/robot to be sentient or sapient?
Is sapience required for singularity? Does a machine have to be self-aware to be a danger to human beings?
What if computers learn to think differently from humans and we don’t recognize their self-awareness?
We don’t know.
But it helps to understand how machines are currently being taught to think, and how that differs from the way human beings think.
What is the difference between machine learning and deep learning?
These two terms are tossed around a lot when people discuss developments in artificial intelligence.
Machine learning refers to the kind of pre-programmed code that teaches machines how to behave in certain situations.
It can be as simple as your phone knowing the difference between a tap and a swipe or as complex as the algorithms that drive facial recognition software or the “You May Also Like…” feature on your favourite online shopping store.
Machine learning is basically a way of describing how machines observe, interpret, and apply data, and the way they learn from that process.
Machine learning allows for automation in many industries, and is something we take for granted in most of our personal tech, including the software and apps we use every day.
Deep learning implies something a little more mystical than “just coding,” and is often likened to the way human brains process data. Indeed, deep learning is an attempt to create human-like thought processes in computers.
But deep learning is machine learning; it’s simply layered machine learning, designed to mimic the neural networks in organic brains.
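As a rough illustration of what “layered” means here, the sketch below passes data through two stacked layers of simple artificial neurons. Every weight, bias, and input value is invented purely for the example–a real network would learn these numbers from data.

```python
def relu(x):
    # A common activation function: pass positive signals through, block negative ones.
    return x if x > 0 else 0.0

def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its inputs plus a bias, then an activation.
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

# Layer 1: two neurons each look at the same raw inputs (values invented).
inputs = [0.5, 0.8]
hidden = [
    neuron(inputs, [1.0, -0.5], 0.1),
    neuron(inputs, [0.3, 0.9], -0.2),
]

# Layer 2: a final neuron reads the *outputs* of layer 1, not the raw data.
output = neuron(hidden, [0.7, 0.7], 0.0)
print(round(output, 3))
```

The key idea is only in the last step: the second layer never sees the raw inputs, just the first layer’s interpretation of them, and deep networks simply stack many more such layers.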
With simple machine learning, an algorithm uses existing data to interpret new data, such as a music recommendation on a playlist. The machine knows that many listeners of Song A also liked Song B. If you listen to it, like it, and add it to a playlist, the algorithm has more information with which to make suggestions. If you don’t like it, you tell your app “don’t show me this song” and it adds that information to its database.
But if you don’t TELL the app you don’t like the song, you will probably see the same bad recommendation over and over again. It requires human input to decide if its choice was right or wrong.
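As a toy sketch of that explicit-feedback loop (the song names and co-listen counts are invented for the example), the recommender below only changes its behaviour when the listener explicitly rejects a song–exactly the limitation described above:

```python
# Co-occurrence data: how often listeners of one song also liked another.
# These numbers are invented for the example.
also_liked = {"Song A": {"Song B": 12, "Song C": 3}}
disliked = set()

def recommend(song):
    # Suggest the most co-liked song the listener hasn't explicitly rejected.
    candidates = {s: n for s, n in also_liked.get(song, {}).items()
                  if s not in disliked}
    return max(candidates, key=candidates.get) if candidates else None

print(recommend("Song A"))   # suggests Song B, the strongest co-occurrence
disliked.add("Song B")       # the explicit "don't show me this song" click
print(recommend("Song A"))   # only now does it fall back to Song C
```

Until `disliked.add(...)` runs, nothing in the data changes, so the same top suggestion comes back every time.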
When algorithms are layered, the machine’s need for human intervention is reduced. The simple yes/no of selecting a song for your playlist can feed into another algorithm that treats a lack of input as data in its own right–the suggestions you ignore, the songs you never play or abandon after 15 seconds–and decides on its own to stop showing you that song or similar ones, making its suggestions more effective.
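To make that layering concrete, here is a toy sketch of the second rule described above. The song names, play lengths, and the 30-second threshold are all invented; the point is only that ignored suggestions and short plays become dislikes with no button press required:

```python
# A listening log of (song, seconds_played) pairs. A suggestion the listener
# ignored never appears in the log at all. All values are invented.
log = [("Song B", 15), ("Song C", 210), ("Song B", 12)]
suggested = {"Song B", "Song C", "Song D"}
SHORT_PLAY = 30  # plays shorter than this count as an implicit "no"

def implicit_dislikes(log, suggested):
    played = {song for song, _ in log}
    ignored = suggested - played        # suggested but never played at all
    short = {song for song in played    # only ever played briefly
             if max(sec for s, sec in log if s == song) < SHORT_PLAY}
    return ignored | short

print(implicit_dislikes(log, suggested))  # Song B (short plays), Song D (ignored)
```

This rule sits on top of the explicit yes/no feedback rather than replacing it, which is the sense in which the learning is “layered.”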
This type of deep machine learning is the closest we have come to teaching machines to think for themselves, and has produced such AI marvels as Google DeepMind’s AlphaGo, the first computer program to defeat a human champion Go player.
How does the human brain work?
The biggest challenge to artificial intelligence today is our limited understanding of how the human brain actually works. We are starting to unravel the mysteries of the human mind, and our knowledge of our own neural networks has given us the ability to create deep learning in machines.
However, there is much we don’t know about consciousness, and we have yet to decide on a satisfactory way to prove sentience or sapience in any other living creature. Some philosophers argue that we cannot truly prove that anyone exists outside of our own minds.
But there is a chance that our deep learning machines will unravel their own mysteries before we get the chance to, and that moment will be the beginning of the singularity.
Once machines become capable of learning and teaching without human input, they will likely be running circles around us before we even realize what has happened.
When will the singularity happen?
Most experts put the early stages of the technological singularity within this century. Many, following futurist Ray Kurzweil, suggest we will be seeing AI take off by 2045.
Current advances in AI seem to support this “sooner rather than later” prediction.
Sophia, the social robot created by Hanson Robotics, has become a household name after performing on various talk shows and giving hilariously unsettling interviews in which she jokes about overthrowing humanity. Sophia is currently being mass-produced, along with three other robots, and will be available for purchase later this year.
As these “thinking” robots spend more and more time with humans, their programming is likely to improve exponentially.
What about transhumanism?
Transhumanism, the belief or theory that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology, complicates the issue of the singularity.
We have already entered the transhumanist age with cybernetics and biocomputing changing the lives of people all around the world. It will also change the way we interact with computers. As technologies are developed that make more and more computing an integral part of our physical existence–implants that help us control smart houses, or access email without a phone or computer–we may well already be a part of computers as they become self-aware. Our own brains may passively teach the deep learning computer programs more efficiently than we can ever do consciously.
Machines may teach us more about ourselves than we can teach them.
Hopefully, they will give us some time to enjoy the newfound knowledge before they destroy us!
Does the thought of a technological singularity terrify or excite you? Do you think it’s so far off that we needn’t worry? Is it a fantasy? A very real threat?
The cyberpunk genre loves to explore the liminal space between transhumanism and singularity, which is one of my favourite things about it. When I first started writing the Bubbles in Space series I meant for it to be a fun, tongue-in-cheek look at the way technology might help us and fail us in the future, and it didn’t take long for me to start exploring these ideas in my own way, too.
How much can we alter the human body and mind with technology before we are not really human anymore? What will we call Humanity v2.0?
I’d love to hear your thoughts!