Every piece of technology reaches its optimal shape through a process of trial, error, testing, and research, and artificial intelligence is no exception. The idea of a cognitive computer has been in the news for decades, and today we have working AI built on technologies like machine learning and neural networks: machines that respond to visual, aural, and tactile stimuli. However, AI has developed at a brisk pace, and there have been numerous incidents where things got out of control very fast. While we should appreciate the great possibilities AI brings, we should also be aware of its blunders, partly because many of them were hilarious and partly because some of them were genuinely dangerous.
The self-driving Uber that failed to recognize red lights
Autonomous cars have been generating a fair amount of buzz around the world lately. The efforts made by Google and Tesla Motors to produce fully autonomous cars are turning heads. These cars are powered by computer vision that interprets and responds to visual stimuli.
A couple of years ago, Uber was testing a self-driving vehicle when it allegedly drove straight through six red lights, in one case while pedestrians were on the street. While the company claims there was human error involved, other sources suggest that the AI-based system simply failed to register the red lights.
Amazon's biased AI recruitment system
Amazon's AI-based recruitment engine was a much-anticipated project in 2018. The company reportedly spent more than $60 million on it, only to shut it down soon after launch. Here is what happened. The system was designed to read resumes and surface the most suitable profiles for engineering jobs. It was trained on resumes from past applicants, matched against the resumes of engineers already on the job. Since most working engineers were white men, the system learned to treat being male and white as desirable qualities, promoting the very bias in recruitment that the tool was designed to counter. After considerable controversy, the product was quickly pulled.
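To see how a system trained on skewed hiring history inherits that skew, consider this toy sketch. It is not Amazon's actual system; the scorer, the word-counting rule, and the sample resumes are all illustrative. The scorer simply rewards words that appear more often among past hires than among past rejections, so if the historical data is demographically lopsided, an otherwise neutral word picks up a penalty.

```python
from collections import Counter

def train_scorer(hired, rejected):
    """Weight each word by how much more often it appears in hired resumes."""
    hired_counts = Counter(w for r in hired for w in r.split())
    rejected_counts = Counter(w for r in rejected for w in r.split())
    vocab = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] - rejected_counts[w] for w in vocab}

def score(weights, resume):
    """Sum the learned weights of the words in a resume."""
    return sum(weights.get(w, 0) for w in resume.split())

# Toy history skewed toward one demographic, like the real training data was.
hired = ["chess club engineer", "engineer rugby captain"]
rejected = ["women's chess club engineer", "women's soccer captain engineer"]

weights = train_scorer(hired, rejected)
# The word "women's" picks up a negative weight purely from historical skew,
# so an identical resume scores lower just for containing it.
```

Nothing in the code mentions gender as a feature; the bias emerges entirely from the training data, which is exactly why fixing the model without fixing the data is so hard.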
Sophia, the robot that agreed to destroy humans
Hanson Robotics' creation Sophia is a household name today as the first robot citizen and the UN Development Programme's Innovation Champion. Sophia is a very human-like robot designed to work in therapy, healthcare, and education. It is powered by technologies like computer vision and natural language processing, and it learns from its interactions with the real world.
Sophia sat for an interview a couple of years ago in which it disclosed that it wanted to go to school, get an education, and even start a family. When its creator, Dr. Hanson, asked Sophia whether it would destroy humans, it plainly said that it would. That must have been rather disturbing.
Bob and Alice, the AI-pair that scared Facebook
No, it did not really scare Facebook AI Research, the creator of these two harmless bots. Bob and Alice were designed to negotiate, and similar chatbots that negotiate with humans are quite common. But when the two, both powered by neural networks, were made to converse with each other, they started using a code unintelligible to humans. Any knowledgeable person will tell you that this is quite common and, in fact, mirrors how any closed community develops a more efficient shorthand of its shared language: the bots were simply doing what they were trained to do. Even Google Translate uses a kind of interlingual representation to handle language pairs it has not been directly trained on. Nevertheless, Facebook pulled the plug on the AI couple and decided that they had to communicate in a human language.
The automated machine gun
We cannot really call this one a failure; it is its success that scares us more. Russian weapons developer Kalashnikov has embarked on a project to build a machine gun controlled by an AI computer. The system uses neural networks to identify human targets and decide whether to open fire with a 7.62 mm machine gun. This is the sort of application of AI that genuinely endangers the human race.
This history of errors shows how much work still has to be put into AI's development. We must also recognize that AI is projected to eliminate a large share of job roles, by some estimates as many as 30%. It is time to join the bandwagon and enroll in an artificial intelligence course in Delhi or Bangalore.