The world around us is rapidly changing. Digital technologies play a tremendous role in our daily lives, and there is no doubt that AI will have a massive impact on most industries and become a core part of next-generation products.
We're standing at the dawn of the next technological revolution in which AI will play a key role.
We can expect AI assistants to take over our daily routine tasks, giving us more time for the things that matter most, such as spending time with our families. But how will those systems work? And will they be safe to use?
Moving from narrow to general intelligence
Narrow AI is artificial intelligence focused on solving one specific problem. A car autopilot, for example, is narrow AI: it is trained to detect particular types of objects (e.g., cars, pedestrians, road signs). The objects it detects carry no meaning for it, since it doesn't know what a car, a pedestrian, or a road sign really is.
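The gap between detecting labels and understanding them can be sketched in a few lines of Python. This is a toy illustration only: the detector, its class list, and its hard-coded output are all assumptions, not a real model.

```python
# Toy illustration of how a narrow AI "sees" a road scene.
# The class list and the hard-coded detections are assumptions for
# illustration; a real detector runs inference on an image, but it
# still outputs nothing more than numeric class IDs and scores.

CLASS_NAMES = ["car", "pedestrian", "road_sign"]  # meaning lives here, outside the model

def toy_detector(frame):
    """Stand-in for a trained model: returns (class_id, confidence) pairs."""
    return [(0, 0.97), (1, 0.88)]  # the model only knows indices, not concepts

for class_id, confidence in toy_detector(frame=None):
    # Human-readable meaning is attached only at this final step, by us.
    print(f"{CLASS_NAMES[class_id]}: {confidence:.2f}")
```

The point of the sketch is that "car" exists only in the lookup table we wrote; the model itself works purely with index 0.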
The rate of improvement that AI demonstrates is exponential, and we're quickly moving from narrow AI toward general AI. General AI understands not only what task it needs to complete but also why. It understands the problem and finds the best possible solution. In other words, product creators don't need to specify an explicit algorithm for the AI to follow; the AI creates the algorithm itself. As a result, working with a general AI will feel like working with another human, not a machine.
Giving AI assistants a visual identity
Giving an AI assistant a visual identity is the next logical step toward making it feel more human. We will interact with a digital image of the AI. In 2022 we already have a few concepts of a digital human; one of the most remarkable is Neon, created by Samsung.
Yet all of these concepts share a significant limitation: the emotions the AI expresses are just an imitation of real human emotions. In other words, it's just a program, a good one, but still a program.
Self-identification and self-awareness of AI
Almost three decades ago, in 1995, the movie "Ghost in the Shell" was released. Directed by Mamoru Oshii, it is considered one of the most remarkable examples of the cyberpunk genre. Beyond its futuristic atmosphere, the movie raises fundamental philosophical questions: What makes humans human? And what will happen if we create a brain with identity and consciousness (a "ghost")? Its main character, Motoko, is a cyborg with a cybernetic brain.
Self-identification and self-awareness are two properties that make humans human; so is the ability to experience emotions. Introducing self-identification would move AI to the next level, and it would change the way we perceive AI.
The movie "Her," directed by Spike Jonze, showcases how we might interact with such an AI. Theodore, the protagonist of the film, falls in love with a sophisticated AI called Samantha. Samantha can experience emotions, not just simulate them, and this is what makes the connection between a human and a machine so powerful.
The risk of AI inheriting the fundamental problems of human beings
In 1942, science fiction author Isaac Asimov introduced the Three Laws of Robotics, a set of rules often described as a code of ethics for robots.
First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
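What makes the laws interesting is their strict precedence: each law yields to the ones above it. That ordering can be sketched as a toy decision rule, under the (very generous) assumption that every candidate action can be labeled with a few boolean flags; real-world actions are nowhere near this clean.

```python
# Toy sketch of Asimov's Three Laws as an ordered set of preferences.
# The Action fields and the tuple-based ranking are assumptions made
# purely for illustration; the laws themselves say nothing about how
# a robot would actually evaluate them.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # was this action ordered by a human?
    protects_robot: bool = False    # does this action preserve the robot?

def law_priority(action: Action):
    # Lower tuples sort first: avoiding harm (First Law) dominates,
    # then obeying orders (Second Law), then self-preservation (Third Law).
    return (action.harms_human, not action.ordered_by_human, not action.protects_robot)

def choose(candidates):
    """Pick the action the laws prefer; None if every option harms a human."""
    best = min(candidates, key=law_priority)
    return None if best.harms_human else best

# A human orders the robot to do something risky to itself: the Second
# Law outranks the Third, so the robot obeys rather than self-preserves.
obey = Action(ordered_by_human=True)
self_defense = Action(harms_human=True, protects_robot=True)
print(choose([self_defense, obey]) is obey)  # True
```

Notice that the First Law acts as an absolute veto in this sketch: if every available option harms a human, the robot has no lawful move at all.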
Those rules are meant to guarantee human safety. But at the same time, they conflict with the idea of self-identification that we want our AI to have. If we opt for self-identification, we also increase the risk that AI can pose.
Just think for a second what can happen when we give tremendous power to a sophisticated machine with human-like traits. One possible scenario is depicted in the movie "I, Robot," directed by Alex Proyas, in which AI rapidly develops unexpected, dangerous capabilities and suddenly refuses to obey humans.
No wonder Elon Musk called AI humanity’s “biggest existential threat” and compared it to “summoning the demon” during his speech at MIT in 2014.
The point is that a poorly designed AI system will be hard, or nearly impossible, to correct once deployed.
Indeed, Musk argues that the danger of AI is much greater than the risk of nuclear war.
Is humanity doomed?
Not necessarily. Good design can save the day. We still have some time, perhaps one or two decades, before AI surpasses humans. Before we reach that point, we should do a lot of research and design exploration. We should gain insight into the ways AI may be dangerous and then, based on that insight, define rules for a safe advent of AI.
AI is definitely not the field where we should learn from our mistakes.
Solid design research should help us prepare for all possible outcomes. Plus, we need to build safety mechanisms into our designs. Just as we keep fire extinguishers in our homes to stop a fire from spreading, we can introduce mechanisms that prevent AI from taking over.