Can or will AI be self-aware? Is AI good or bad?

Alfonso de la Rocha
4 min read · Sep 9, 2017


Originally published at: https://www.adlrocha.com

Today I want to call for a debate on a topic that many people, including prominent figures in the field of AI, have been wondering about for quite some time: will AI eventually be conscious and self-aware? You may recall how Mark Zuckerberg and Elon Musk recently took out the big guns to argue about whether AI is good or evil. Zuckerberg stated that AI will bring us progress and that we should not fear it, while Musk argued that we should improve regulation around AI or we will be heading toward the end of the human race. But will we be heading toward the end of the human race because of the use we make of AI, or because AI itself will become self-aware and decide that the only way for it (or he/she) to survive in our world is to do away with the human race?

Let’s take a step back to analyze these compelling arguments. Some people may say that AI is like any other tool: completely neutral, neither good nor bad by itself. Take a knife, for instance. A knife is not good or bad; used properly, it helps us cut meat, fish, or any other food, while used with evil intent it can kill. Is this the case with AI? Well, it is not that easy.

AI and deep learning algorithms work in the following way. We define a cost function (the goal of the AI algorithm) and, using all the means at its disposal, the AI will try to minimize that cost function and make the optimal decisions to get the best results for its goal. A human being will not be able to infer how the AI reached these results, as they are encoded in a huge set of neuron weights (numbers) and connections impossible to understand. Up to here nothing seems evil, right? But what if, to reach that optimal solution, our beloved AI made some decisions that are “ethically questionable” from the point of view of a human being?
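As a rough illustration of what “minimizing a cost function” means, here is a minimal Python sketch (the cost function and all the numbers are invented for illustration): a gradient-descent loop that blindly pushes a weight toward whatever value lowers the cost. Real deep learning does this over millions of weights at once.

```python
import random

def cost(w):
    # Toy cost function: the "goal" the algorithm is told to minimize.
    # It encodes nothing about ethics, only a number to make small.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the cost with respect to the weight.
    return 2.0 * (w - 3.0)

w = random.uniform(-10.0, 10.0)  # random initial weight
lr = 0.1                         # learning rate

for step in range(100):
    w -= lr * grad(w)            # blindly step toward a lower cost

print(f"final weight: {w:.4f}, cost: {cost(w):.6f}")
# The optimizer "succeeds" (w ends up near 3), but it never asked how;
# it only cared about driving down the number it was given.
```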

What if our beloved AI made some “ethically questionable” decisions to reach the solution?

Let’s use our Facebook feed as an example, which is obviously powered by one of Facebook’s AIs. Its goal is to keep us engaged in Facebook as much as possible. To do this, it will try to suggest news that could be appealing to us. Thus, Facebook’s feed AI will try to maximize our time in FB by profiling us and sending us interesting news. However, in this eagerness to keep us engaged, FB’s feed could start sending us fake news, or news of doubtful origin with explicit content, because “hey, that is what I like and it keeps me engaged”.
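To make this concrete, here is a hypothetical sketch in Python (the items and engagement scores are invented) of a feed ranker whose only objective is predicted engagement. Notice that nothing in the objective distinguishes a trustworthy article from a fake one:

```python
# Hypothetical feed ranker: rank items purely by predicted engagement.
# Items and scores are invented for illustration.
items = [
    {"title": "Well-sourced news article", "predicted_engagement": 0.41},
    {"title": "Sensationalist fake story",  "predicted_engagement": 0.87},
    {"title": "Friend's vacation photos",   "predicted_engagement": 0.63},
]

# The objective is engagement alone: sort by the score, highest first.
feed = sorted(items, key=lambda item: item["predicted_engagement"], reverse=True)

for item in feed:
    print(item["predicted_engagement"], item["title"])
# The fake story tops the feed: the ranker is doing exactly its job,
# because "is this true?" never appears in its objective.
```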

Taking it to an extreme, this AI could be suggesting explicit content to a child just to keep him engaged and, yes, it would be achieving its goal, but through an ethically questionable path. Nevertheless, don’t blame the algorithm, as it does not know about ethics and values; even if FB programmed it without evil intent, the AI can behave “badly”.

Don’t blame the algorithm as it does not know about ethics and values.

This does not mean that AI has self-awareness or that it is intrinsically bad. Actually, it is doing exactly what it was programmed for: “maximize the time people spend in Facebook”. So how can we prevent these behaviours? This is where research has to make an effort to define policies or mechanisms that avoid ethically questionable and harmful behaviours in AIs. And, once again, these bad behaviours do not arise because the AI becomes self-aware; AI is not conscious, and it will stay this way at least for the next couple of years. It is just focusing on achieving the goal it was programmed for, that’s it.
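One direction such mechanisms could take, sketched here with an invented penalty term, is to encode the constraint directly into the objective, so that unacceptable content becomes costly to the optimizer instead of invisible to it:

```python
# Sketch of a penalized objective (the penalty value is invented):
# the ranker still maximizes engagement, but now policy violations
# subtract from the score instead of being ignored.

PENALTY = 10.0  # hypothetical weight for a policy violation

def score(item):
    base = item["predicted_engagement"]
    violation = 1.0 if item["violates_policy"] else 0.0
    return base - PENALTY * violation

items = [
    {"title": "Well-sourced news article", "predicted_engagement": 0.41, "violates_policy": False},
    {"title": "Sensationalist fake story",  "predicted_engagement": 0.87, "violates_policy": True},
]

feed = sorted(items, key=score, reverse=True)
for item in feed:
    print(round(score(item), 2), item["title"])
# With the penalty in the objective, the fake story sinks: not because
# the system "learned ethics", but because we encoded the constraint
# into the very number it optimizes.
```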

Related to this, I don’t think AI will be conscious of itself and its existence as an entity in our world and society any time soon. Through science fiction and sensationalist journalism we grant AI powers and capabilities that, at least for now, it does not have. Take for instance the Facebook AIs that developed a new language to communicate between them and were shut down. What they actually did was optimize our language to communicate with each other while trying to achieve the goal they were programmed for. They were not aware that they were creating a new language; the algorithm the engineers used simply worked in a way that led to this outcome, and that is why they were shut down. The algorithm was not working properly and the AIs were not doing what the engineers expected of them, so they were shut down, that simple. However, sensationalist journalism decided that the evil AIs were able to develop their own secret language to subjugate all mankind.

The evil AIs were able to develop their own secret language to subjugate all mankind.

In conclusion, at least for now (and note how I say “for now”), I don’t see AI being able to gain consciousness, develop self-awareness, and decide that the best way to save the Earth is to kill all humans (even if that is the optimal solution).
