Path to a Better Future with Artificially Intelligent Beings: A Humanitarian Perspective

“Killer robots are a bigger threat than nuclear North Korea.”

– Elon Musk

Authors: Vijaya Singh & Vijay Mishra

The 21st century woke in the lap of technology, with revolutionary changes in the automation industry. The technology revolution started in the late 1950s, but this century is believed to be the period in which Artificial Intelligence (AI) will truly intermix with human civilization. AI has been brought into almost every field, from agriculture to education, and is still being actively developed in arenas such as medicine and the military. Our growing day-to-day acquaintance with it, from Siri doing half of our chores, automatic completion of our search queries, and internet suggestions matched to our interests, to self-driving cars with collision-avoidance sensors and auto-pilot aeroplanes, has begun an era in which we are closer than ever before to replacing the human workforce with software and smart machines. We are now experimenting with creating human-like AI.

The growth of technology has not necessarily been a boon for mankind. Research into, development of, and use of AI have been accelerating tremendously. This technology has given results that were unforeseeable and put us ahead of our time, but blind effort in the creation of AI warrants precaution. Instances such as the AI-equipped Uber self-driving car killing a pedestrian,[1] or the inherent bias of Amazon's AI-based recruiter, which showed a preference towards male applicants,[2] have made us rethink the efficiency and faultlessness of the technology we are advancing towards. They have also made us reconsider our control over these artificial beings: how well can we control, direct and use them, or has that control already been compromised?

The use of intelligence in the military has been prevalent since World War II, and through the Cold War, in the form of German Goliath tracked mines and, later, aerial drones such as the MQ-1 Predator used for collecting intelligence.[3] With the evolution of technology, such intelligence is now used for surveillance, logistics, cyberspace operations and lethal autonomous weapons systems, and is approaching a fully autonomous stage in decision-making and the use of force. The use of AI in defence can lead to great destruction through the seepage of bias via the human touch, while higher levels of automation sanctioned to AIs can lead to the uncontrolled exercise of that bias. Deliberations have already begun weighing the factors that can convert AIs into killer machines. The international community has time and again banned weapons that are dangerous and violate the basics of International Humanitarian Law, such as cluster munitions and biological and chemical weapons. It is therefore pertinent to study the factors that can make AIs corrupt and lead to their prohibition, despite AI presenting itself as the most convenient option on the battlefield. This blog presents notions for better interaction with these artificial beings in the realm of humanitarian law: a better understanding of any being leads to better implementation of the tasks allotted to it, and better laws pertaining to it can be framed.

Hypothesis  – Training & Education

It is up to the makers to mould technology for relevant use. The two drivers of any AI-based mechanism are training and education. Training provides the art and skill required to handle and care for AI-based systems and weapons, whereas education provides humans the knowledge to develop AI and improve its use across different industries. Both premises depend heavily on the human mind. It is therefore pertinent to analyse human behaviour in general in order to understand the shortcomings of an AI-guided world.

AI functions on the basis of algorithms developed by its architects (referred to here as the builders of the code). In the present times, AI has also extended into the field of defence. With the help of AI robots (AI weapons), it is possible to engage in a war fought by robot fighters. This reminds us of Rousseau, who theorized that "man is a noble savage". In his Discourse, Rousseau stated that man in his true state of nature does not think about the moral and the immoral; he is concerned only with his freedom. He further analysed that man, by instinct, is naturally amoral, at liberty and driven by appetite. The moral and ethical values in a human are infiltrated by the emotions of anger, jealousy, revenge, masochism, bias and so on. These emotions are inbuilt and inherent in human nature, and they can be controlled and regulated only by a civil society. A civil society can be developed with the help of education and interdependence. However, the system of education also poses challenges to the use of AI and AI-based techniques.

AI Humanitarian Perspective

Education helps in making a person civilized and regulated. An improper or poorly laid foundation of education seeps in and lays the path for a multitude of biases on the basis of creed, religion, colour and so on. The state of war, moreover, is never considered civilized. International Humanitarian Law, however, has guided us in making war more civilized and less brutal on humanitarian grounds.

If we examine the development of AI weapons for defence purposes in light of these emotions, we can draw the possible conclusion that the humans (architects) will generate something capable of producing mass destruction and loss of life. For every state, the notion of defence is filled with retaliation, protection, bias and revenge, and AI weapons (killer robots) will be developed in that light. Robots that function on inbuilt algorithms will therefore be highly motivated by these feelings of retaliation, bias, defence and protection. Human minds retain the possibility of a change of heart at any point, but if algorithm-based killer robots have been set up with the wrong basics, they can bleed countries with leading premises derived from the architects' personal bias, growing thoughts from historical materialism and leading to mass destruction.

For instance, the mystery of flight MH370 has not been resolved yet, but it is alleged that the plane was crashed by the pilot owing to his mental condition.[4] The case study of the Syrian crisis suggests that the violence would have been far more massive had killer robots been used; the whole of the Middle East would have been at risk. The prevalent humanistic angles help us decide and take sides. With the elimination of the human element, we will see violence incapable of being rectified.

There are various situations in which these robots can act irrationally on the basis of their algorithms. For example, there is a high probability of their killing human beings who are not taking part in hostilities but are only civilians. The term "killer robots", coined by international organizations, captures the situation and the acts of these AI-developed robots built for defence purposes.[5] The killing of a woman by an AI-managed Uber car is likewise an example of improper AI decision-making.

The positive side of AI weapons, if developed successfully, is their precision and the negligible chance of unnecessary killing. Humans engaged in war, on the other hand, can kill unnecessarily out of apprehension alone. However, to overcome the danger of human minds filled with negative and self-centric emotions developing AI weapons, the premise of education needs to be improved. There must be a set of guidelines, or a standard test, which persons developing AI weapons must pass in order to be eligible to develop them. The curriculum of technical courses must follow universal guidelines that teach future developers ethical standards and minimize the involvement of negative human sentiment in the development of these AI weapons. This is how education plays an important role in the context of killer robots.

The second basis on which the premise of these AI-developed weapons is contentious is the training of AIs. International Humanitarian Law, as well as every defence department, provides mechanisms for training the military for engagement in war.[6] AIs, however, function autonomously and develop day by day on the basis of their own functioning. This development can lead to situations in which an AI starts targeting the wrong people, because the coding was influenced by human intent. Nor can anyone rule out human flaw or negligence becoming instrumental during the training of AI weapons and leading to greater harm later. It is essential to maintain control, checks and practice over any form of weapon used in warfare, to ensure that basic conduct and standards are met by all.

For instance, Amazon's AI recruiter showed a clear bias towards men over women for the posts applied to, because the record of earlier applications showed more men being selected.[7] This gender bias is an example of how prejudices such as Islamophobia, or ideologies such as Nazism, can be escalated without question simply by being coded into AIs. The human mind has been degraded by the whims and fancies of greed, power and money; that is why several countries are spoiled by corruption.[8] The belief here is that killer robots and war-related AI must not be used in war and defence, as they will be able to create a revolution of mass killing without any distinction between civilians and combatants.[9]
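The mechanism behind the Amazon example can be illustrated with a minimal sketch. The data and scoring rule below are entirely hypothetical, not Amazon's actual system: a naive scorer learns word weights from a skewed historical hiring record, and a résumé containing the word "women's" scores lower than an otherwise identical one, purely because past human decisions embedded that penalty in the data.

```python
from collections import Counter

# Hypothetical historical hiring data: résumé keywords and whether the
# applicant was hired. The skew (entries with "women's" were rejected)
# stands in for the bias already present in the record.
history = [
    (["chess", "captain"], True),
    (["football", "captain"], True),
    (["chess", "club"], True),
    (["women's", "chess", "captain"], False),
    (["women's", "club"], False),
]

hired = Counter()
rejected = Counter()
for words, was_hired in history:
    for w in words:
        (hired if was_hired else rejected)[w] += 1

def score(resume_words):
    """Naive scorer: per-word hire-minus-reject evidence from history."""
    return sum(hired[w] - rejected[w] for w in resume_words)

# Two otherwise identical résumés diverge only on the penalised word.
print(score(["chess", "captain"]))             # scores higher
print(score(["women's", "chess", "captain"]))  # scores lower
```

No one coded "prefer men" explicitly; the preference emerges from the training data alone, which is exactly why auditing the data an AI learns from matters as much as auditing its code.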

Practical Considerations

Analysing the foreseeable destruction possible with the use of these killer robots, a campaign was initiated under the name "Campaign to Stop Killer Robots". The support it has gained shows the fear in the eyes of countries and international bodies: the campaign has been backed by 28 countries, 100 NGOs, the European Parliament, the Human Rights Council, almost 4,500 AI experts and 21 Nobel Laureates, and in a global poll 61% of respondents opposed fully autonomous weapons.[10] The present world can only be made ready with policies specially framed against the destruction that AIs are capable of creating. Proper training and education, in the true sense, have the power to liberate human beings and render them capable of making well-functioning AIs to guard the borders.

References

[1] Aarian Marshall & Alex Davies, ‘Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk’ (Wired, 11 May 2019) <https://www.wired.com/story/ubers-self-driving-car-didnt-know-pedestrians-could-jaywalk/> accessed 18 December 2019.

[2] Jeffrey Dastin, ‘Insight – Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 10 October 2018) <https://in.reuters.com/article/amazon-com-jobs-automation/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idINKCN1MK0AH> accessed 18 December 2019.

[3] Leonardo Fiesoli, ‘Robotic Weapons’ (Prezi, 3 May 2019) <https://prezi.com/0ztkuyvuyvpz/robotic-weapones-and-other-military-stuff/> accessed 18 December 2019.

[4] Alex Chapman, ‘The MH370 captain, the twin sister models and some VERY creepy messages’ (The Daily Mail, 23 September 2018) <https://www.dailymail.co.uk/news/article-6197499/Creepy-messages-missing-flight-MH370-captain-twin-Malaysian-models.html> accessed 20 December 2019.

[5] ‘Killer Robots’ (Human Rights Watch) <https://www.hrw.org/topic/arms/killer-robots> accessed 20 December 2019.

[6] Customary IHL: Rules and Practice, International Committee of the Red Cross, rule 142.

[7] Nicol Turner Lee et al., ‘Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms’ (Brookings, 22 May 2019) <https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/> accessed 20 December 2019.

[8] ‘Corruption Perceptions Index 2018’ (Transparency International, 2018) <https://www.transparency.org/cpi2018> accessed 20 December 2019.

[9] Customary IHL: Rules and Practice, International Committee of the Red Cross, rule 1.

[10] ‘Campaign to Stop Killer Robots’ <https://www.stopkillerrobots.org/> accessed 20 December 2019.
