

2019 Statement to the United Nations in Support of a Ban on LAWS

The following statement was read on the floor of the United Nations during the March 2019 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.

The Future of Life Institute (FLI) is a research and outreach organization that works with scientists to mitigate existential risks facing humanity. FLI is deeply worried about an imprudent application of technology in warfare, especially with regard to emerging technologies in the field of artificial intelligence.

Let me give you an example: in just the last few months, researchers from various universities have shown how easy it is to trick image recognition software. Researchers at Auburn University, for example, found that if objects like a school bus or a firetruck were simply shifted into unnatural positions, so that they were upended or turned on their sides in an image, the image classifier would not recognize them. And this is just one of many, many examples of image recognition software failing because it does not understand the context within the image. This is the same technology that would analyze and interpret data picked up by the sensors of an autonomous weapons system. It’s not hard to see how quickly image recognition software could misinterpret situations on the battlefield if it had to quickly assess everyday objects that have been upended or destroyed.
And challenges with image recognition are only one of many examples of why an increasing number of people in AI research and in the tech field – that is, an increasing number of the people who are most familiar with how the technology works and how it can go wrong – are saying that this technology cannot be used safely or fairly to select and engage a target.

In the last few years, over 4,500 artificial intelligence and robotics researchers have called for a ban on lethal autonomous weapons, over 100 CEOs of prominent AI companies have called for a ban on lethal autonomous weapons, and over 240 companies and nearly 4,000 people have pledged never to develop lethal autonomous weapons.

But as we turn our attention to human-machine teaming, we must also carefully consider research coming from the field of psychology and recognize the limitations there as well. I’m sure everyone in this room has had a beneficial personal experience working with artificial intelligence. But under extreme pressure, as in life-and-death situations, psychologists find that humans become overly reliant on technology. In one study at Georgia Tech, students were taking a test alone in a room when a fire alarm went off. The students had the choice of leaving through a clearly marked exit that was right by them, or following a robot that was guiding them away from the exit. Almost every student followed the robot, away from the safe exit. In fact, even when the students had been warned in advance that the robot couldn’t be trusted, they still followed it away from the exit.
As the delegate from Costa Rica mentioned yesterday, the New York Times has reported that pilots on the Boeing 737 Max had only 40 seconds to fix the malfunctioning automated software on the plane. These accidents represent tragic examples of how difficult it can be for a human to correct an autonomous system at the last minute if something has gone wrong.

Meaningful human control is something we must strive for, but as our colleagues from ICRAC said yesterday, “If states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems.”

Artificial intelligence will be incredibly helpful for militaries, and militaries should move to adopt systems that can be implemented safely in areas such as logistics, defense, and improving the situational awareness of the military personnel who would be in the loop.
