Shivani Talati | 28 September 2017

India: The ability to think and to act upon a thought is what sets humans apart from robots. On that note, what if a system were developed that made a robot or machine capable of forming its own judgment about problems and resolving them using its own intelligence?

Artificial intelligence refers to a computer demonstrating a form of intelligence: it learns to improve itself in order to solve the problems it encounters, including those arising from human interaction or error. AI (artificial intelligence) is a very wide area, encompassing computer science, psychology, biology, engineering and other fields. It deals with perceiving, reasoning and acting, often through dedicated machinery.

Humans today cherish their relationship with machines because machines bring efficiency to many tasks. As Mr. Bill Gates has rightly observed, advances in technology will lead to greater creativity and innovation; however, one cannot ignore the possibility of these machines coming to dominate humans.

Among the various human activities now making use of AI, 2016 saw the release of a film named 'Sunspring' whose screenplay was written by an AI. Though most of its dialogue made little sense to humans, it marked a significant entry of AI into the realm of creativity and randomness. As fascinating as this may sound, many AI agents are now creating and operating outside the control of humans. Another recent case occurred at Facebook, where Facebook's AI agents developed their own language that researchers could not understand. Even though it was declared that the content created was not harmful, the episode raised concern, as humans could neither understand what the AI was saying nor why it was saying it.

These and many other cases are evidence of AI agents working on their own, beyond the understanding of humans. It is unsettling to contemplate what machines we cannot control might be capable of. So, if machines are developed to solve problems, then systems should also be developed to monitor their behaviour. But in that case, how many such systems would have to be developed to check each machine's behaviour?

"It (AI) would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be suspended". - Stephen Hawking

Thus, although AI is currently helping humans in various fields and easing the path from task to solution, the question remains: where and when do we draw the line, to ensure that developments in the field of AI do not lead to the extinction of the human race?