AI: Should We Fear The Singularity?

Source: https://www.if24.ru/ai-opasna-li-nam-singulyarnost/

Recently, discussions of artificial intelligence (AI) in popular publications have become increasingly alarmist. Some try to prove that AI will push 90% of human workers out of the job market, condemning them to unemployment and misery. Others go even further and ask whether strong artificial intelligence poses an existential risk to humankind that no hydrogen bomb can match. Let us try to find out.

Supporters of treating AI as an existential risk usually have in mind the “intelligence explosion” scenario, in which a powerful AI acquires the ability to improve itself (for example, by rewriting parts of its own code), thereby becoming even “smarter”, which in turn enables even more radical improvements, and so on. More details can be found in the AI-Foom debate between Robin Hanson and Eliezer Yudkowsky, a very interesting read devoted to exactly this scenario. The main danger here is that the goals of the resulting superhuman artificial intelligence may not align with the goals and original intentions of its human creators. A common example in the field goes as follows: if the original task of a powerful AI was something as innocent as producing paperclips, then a week or two after the “intelligence explosion” the Earth might find itself completely covered by fully automated factories of two kinds: factories producing paperclips and factories building spaceships to carry paperclip factories to other planets…

Such a scenario does sound upsetting. Moreover, it is very difficult to assess in advance how realistic it will prove to be when we actually do develop a strong AI with superhuman abilities. Therefore, it is a good idea to consider it and try to prevent it; in that sense, I agree that the work of Nick Bostrom and Eliezer Yudkowsky is far from meaningless.

However, it is obvious to me, as a practicing machine learning researcher, that this scenario deals with models that simply do not exist yet and will not appear for many, many years. The fact is that, despite the great advances artificial intelligence has made in recent years, “strong AI” still remains very far away. Modern deep neural networks can recognize faces as well as humans do, redraw the landscape of your summer house à la Van Gogh, and teach themselves to play the game of Go better than any human.

However, this does not mean much yet; consider a few illustrative examples.

  1. Modern computer vision systems are still inferior to the visual abilities of a two-year-old child. In particular, computer vision systems usually work with two-dimensional inputs and cannot develop any insight into the fact that we live in a three-dimensional world unless they are explicitly given supervision about it; so far, this greatly limits their abilities.
  2. This lack of intuitive understanding is even more pronounced in natural language processing. Unfortunately, we are still far from reliably passing the Turing test. The problem is that human languages rely heavily on our intuition about the world around us. Here is a standard example: “The laptop did not fit in the bag because it was too big”. What does the pronoun “it” refer to here? What was too big, the laptop or the bag? Before you say it is obvious, consider a different example: “The laptop did not fit in the bag because it was too small”… There are plenty of such examples (a toy sketch of this difficulty follows the list below). Basically, to process natural language truly correctly, a model needs an intuitive understanding of how the world works, and that is very far away as well.
  3. In reinforcement learning, the kind of machine learning that was used, in particular, to train AlphaGo and AlphaZero, we encounter a different kind of difficulty: problems with motivation. For example, in the classic work by Volodymyr Mnih et al., a model based on deep reinforcement learning learned to play various computer games from the 1980s just by “watching the screen”, that is, from the stream of screenshots produced by the game. This turned out to be quite possible… with one exception: the game score still had to be given to the network separately; humans had to explicitly tell the model that this is the number it is supposed to increase (see the reinforcement learning sketch after this list). Modern neural networks cannot figure out what to do by themselves; they neither strive to expand their capabilities nor crave additional knowledge, and attempts to emulate these human drives are still at a very early stage.
  4. Will neural networks ever overcome these obstacles and learn to generalize heterogeneous information, understand the world around them, and strive to learn new things, just as humans do? Quite possibly; after all, we humans somehow manage it. However, these problems currently appear extremely difficult to solve, and there is absolutely no chance that modern networks will suddenly “wake up” and decide to overthrow their human overlords.
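
To make the pronoun example from point 2 concrete, here is a toy Python sketch. It is my own illustration, not code from any real system: the two sentences and the nearest_noun_heuristic function are invented for this article. A purely surface-level rule that resolves “it” to the noun mentioned closest before the pronoun gives the same answer for both sentences, even though the correct referent flips depending on whether the reason is “too big” or “too small”.

    # Toy illustration (hypothetical code written for this article):
    # a surface-level heuristic resolves "it" to the candidate noun mentioned
    # closest before the pronoun, so it answers "bag" for both sentences,
    # even though the true referent flips with "big" vs. "small".
    SENTENCES = [
        "The laptop did not fit in the bag because it was too big",
        "The laptop did not fit in the bag because it was too small",
    ]

    def nearest_noun_heuristic(sentence: str) -> str:
        """Resolve 'it' to whichever candidate noun appears last before it."""
        prefix = sentence[: sentence.index(" it ")]
        return max(("laptop", "bag"), key=prefix.rindex)

    for s in SENTENCES:
        print(f"{s!r} -> 'it' refers to the {nearest_noun_heuristic(s)}")
    # Both lines print "bag". A human immediately sees that "too big" must
    # refer to the laptop and "too small" to the bag; that judgment requires
    # knowledge about how objects fit into containers, not just syntax.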

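And to illustrate point 3, below is a minimal reinforcement learning interaction loop. This is only a sketch under my own assumptions: it uses the open-source gymnasium package and its CartPole environment, with a random policy standing in for a trained agent, none of which comes from the original work by Mnih et al. The thing to notice is that the reward arrives from the environment at every step: it is defined by the people who built the task, and the agent has no mechanism for choosing its own objective.

    # Minimal sketch (assumes the `gymnasium` package; a random policy stands
    # in for a learned one). The reward is handed to the agent by the
    # environment on every step; the agent never decides what counts as success.
    import gymnasium as gym

    env = gym.make("CartPole-v1")
    obs, info = env.reset(seed=0)

    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()  # placeholder for a trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward              # reward signal defined by the task designer
        done = terminated or truncated

    print(f"Episode return: {total_reward}")
    env.close()
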
However, I do see a great danger in the recent surge of hype around AI in general and deep neural networks in particular. This danger, in my opinion, is not from AI but for AI. History has already seen at least two “AI winters”, when excessive expectations, promises, and overzealous hype led to disappointment. Ironically, both “AI winters” were associated with neural networks. First, the late 1950s saw a (naturally, unsuccessful) attempt to turn Rosenblatt’s perceptron into full-scale machine translation and computer vision systems. Then, in the late 1980s, neural networks, which by then already looked quite modern, could not be trained well enough due to a lack of data and computing power. In both cases, exaggerated expectations and the inevitably crushed hopes resulted in long periods of stagnation in research. Let us hope that with the current, third wave of enthusiasm for neural networks history will not repeat itself, and that even if today’s inflated promises do not come true (and they will be difficult to fulfill), research will continue anyway…

Allow me a small postscript: I have recently written a short story that is extremely relevant to the topic of strong AI and its dangers. Do give it a read; I really hope you like it.

Sergey Nikolenko
Chief Research Officer, Neuromation
