Laura Nolan: “Military technology is often adopted later for civilian use”
Mohsen Abdelmoumen: You are a brilliant computer engineer and you were recruited by Google, where you worked on the Maven project. What can you tell us about this Maven project that links Google to the Pentagon?
Laura Nolan: I didn’t work directly on Maven itself; I was asked to help make certain changes to Google’s cloud platform to enable Maven to be run on it.
Project Maven is part of a United States Department of Defense initiative called the Algorithmic Warfare Cross-Functional Team (AWCFT), which was set up to try and leverage private sector technology expertise for the US military. Maven was the AWCFT’s first project. Google and several other companies were involved.
The US military was analysing its drone surveillance footage mainly by hand, and my understanding is they were limited in how much footage they could analyse because they couldn’t hire enough people. Maven aimed to automate much of that analysis - it was supposed to identify people and vehicles in footage. I also understand that it is supposed to be able to maintain a sort of timeline of activity in an area, and a kind of social graph of interactions (people going from building to building). I’ve also heard people say it was intended to be able to find certain patterns of activity.
This project is not an autonomous weapons project, but it is definitely something that could turn into a large part of the control system for an autonomous weapon.
On its own, though, Maven is objectionable to me. It’s a system that enables mass surveillance of an area, not just targeted surveillance of individuals already under some kind of suspicion. As with all mass surveillance systems, there are real ethical problems here. First, it infringes on the privacy of those under surveillance, most of whom are innocent civilians. Second, it’s an expression of control - it’s hostile in itself.
The US does this kind of intensive drone surveillance in many areas that aren’t even officially declared war zones, and it does it for incredibly long periods of time. It creates a climate of fear which harms the mental health of those living under surveillance. People are afraid to send their children to school, or to attend events like weddings or town meetings. It is harmful to the social fabric of their societies when it’s done over many years, as in Pakistan.
Maven would allow for an increase in the level of drone surveillance - because the US military would have an increased capacity to process the video captured. Maven could also, with political will, lead to an increase in the level of ‘signature strikes’, which is when people are targeted based on patterns of behaviour. With the kind of information Maven is designed to extract, it becomes very easy to write rules like ‘find me groups of military aged males over a certain size’ or ‘find me convoys of vehicles heading towards this point’.
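To illustrate how low the engineering bar becomes once a system like Maven has turned raw footage into structured detections, here is a minimal sketch of such rules in Python. It is hypothetical throughout - the record fields, thresholds and function names are invented for illustration and are not Maven’s actual data model or code.

```python
# Hypothetical sketch: the kind of structured records a Maven-like
# system might emit, and how little code a "signature" rule needs.
from dataclasses import dataclass
from math import hypot
from typing import Dict, List, Optional, Set, Tuple

@dataclass
class Detection:
    kind: str                      # e.g. "person" or "vehicle"
    lat: float
    lon: float
    group_id: int                  # cluster assigned by an upstream tracker
    heading: Optional[Tuple[float, float]] = None  # unit vector; vehicles only

def large_groups(detections: List[Detection], min_size: int = 10) -> Set[int]:
    """Rule: 'find me groups of people over a certain size'."""
    counts: Dict[int, int] = {}
    for d in detections:
        if d.kind == "person":
            counts[d.group_id] = counts.get(d.group_id, 0) + 1
    return {gid for gid, n in counts.items() if n >= min_size}

def convoys_towards(detections: List[Detection],
                    target: Tuple[float, float],
                    min_vehicles: int = 3) -> Set[int]:
    """Rule: 'find me convoys of vehicles heading towards this point'."""
    approaching: Dict[int, int] = {}
    for d in detections:
        if d.kind != "vehicle" or d.heading is None:
            continue
        # Vector from the vehicle to the target (lat/lon treated as planar
        # for simplicity; a real system would use proper geodesy).
        to_target = (target[0] - d.lat, target[1] - d.lon)
        norm = hypot(*to_target) or 1.0
        # Cosine between heading and direction-to-target; > 0.9 means the
        # vehicle is pointed roughly (within ~25 degrees) at the target.
        cos = (d.heading[0] * to_target[0] + d.heading[1] * to_target[1]) / norm
        if cos > 0.9:
            approaching[d.group_id] = approaching.get(d.group_id, 0) + 1
    return {gid for gid, n in approaching.items() if n >= min_vehicles}
```

The point is the one made above: the hard problem is the machine perception. Once that exists, a targeting ‘signature’ is a few lines of straightforward code.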
How do you explain that a company like Google is involved in a military research project?
I didn’t make that decision, and obviously I can’t tell you what the executives who did make that decision were thinking. However, Google has been heavily investing in its Cloud product, and the other major companies in that space (Amazon, Microsoft) certainly do work with militaries.
World public opinion is unaware of these experiments with lethal autonomous weapons, such as killer robots. Don't you think we are living in a very dangerous time with these terrifying experiments?
When the Campaign to Stop Killer Robots has conducted opinion polls it’s always shown that public opinion is against these technologies, even in the most militarised nations like the US and Russia. Awareness of the issue is not universal, but there are many promising signs. The UN Secretary General, Antonio Guterres, has repeatedly spoken out in the strongest possible terms against autonomous weapons. Thousands of technologists and researchers have signed pledges not to work on such weapons. 30 countries have called explicitly for a ban treaty, and many other countries have expressed strong concern about the issue. We have begun to see national parliamentary resolutions against autonomous weapons, by Belgium. We’ve also seen BDI, the German industry group, call for a ban.
We absolutely are living in dangerous times. It is probably those living in conflict areas now who have the most to fear in the short term from autonomous weapons. Remote warfare has already lowered the bar for developed countries to engage in conflict. Increasing autonomy will accelerate and amplify that even more, as it will reduce the need for personnel to perform analysis, targeting and piloting of weapons. The fact that many weapons can be controlled by fewer people could also increase the power of tyrants.
Those of us living in more peaceful, developed nations should not be complacent either. Military technology is often adopted later for civilian use in policing and border control. If autonomous weapons come into common military use then they’re more likely to be used in terror attacks, too. Military use will legitimise their use by other groups.
Why, in your opinion, do the mainstream media never mention these experiments? Why is there so little information about them?
I don’t know that that’s fair to say. Mainstream media absolutely does report on autonomous weapons, and I personally have done a pretty fair number of interviews at this point!
Autonomous weapons are a pressing issue but there are many others too, such as the climate crisis, war in Yemen, Brexit, nuclear weapons, the repression of the Uighurs in western China, Hong Kong… and only so many inches in the papers to report on it all.
You are a courageous whistleblower, and you left Google because of the Maven project. Whistleblowers are either threatened or imprisoned, and they sacrifice their professional careers to make the truth public. Don't you think they should be protected by specific laws and supported by public opinion for disclosing sensitive information that concerns all of humanity?
Yes. I think the EU is showing the way here with the recent approval of the new Whistleblower Directive, which addresses these concerns. I’d like to see this implemented speedily in national parliaments, and of course to see other countries without strong whistleblower protection follow suit.
Don't you think that a line has already been crossed in this field of research - whether with lethal autonomous weapons, "killer robots", the Sensor to Shooter project, or loitering munitions like hybrid drone-missiles - and that the threat hangs over the whole of humanity? Isn't this technology irreversible?
You cannot uninvent a technology but you can certainly create regulation to prevent its use and you can also create a moral consensus against its use. It has been done with biological weapons, chemical weapons, blinding lasers, landmines and other weapons. To suggest that the use of any technology is unavoidable is to be profoundly nihilistic.
Lt. General Jack Shanahan, Director of the Pentagon’s Joint Artificial Intelligence Center, wanted to reassure the public about the use of Sensor to Shooter. According to him, humans will always keep control. Can we believe the words of Lt. General Shanahan?
Shanahan will not be in office forever and we don’t know what the opinions of his successor will be. Only a legally binding instrument can prevent an arms race in autonomous weaponry.
US representatives often refer to the DoD’s Directive 3000.09 as being a commitment to avoid autonomous weapons capability. However, it can be replaced at any time, and its language is weak: it talks about ‘appropriate human judgment’. It might be a commander’s appropriate human judgment to send a loitering munition to patrol a large area for convoys and destroy them; but that isn’t control.
There’s another concern, too. The Sensor to Shooter project, to be implemented on top of Maven’s data, is supposed to suggest strike targets to human operators. However, they haven’t spoken about how they’re going to avoid the problem of automation bias, which is the human tendency to favour machine suggestions - essentially, to automatically and uncritically accept what computers say. Automation bias is difficult to prevent, and there’s a lot of research into the area. I was extremely disappointed that the Defense Innovation Board’s recent AI principles didn’t tackle this problem.
As a scientist, how do you explain that the science that should be used for human well-being has become a fatal weapon in the hands of unscrupulous adventurers?
I think this is certainly not the first time we’ve seen technology and science used to surveil, oppress, injure and kill people. We need, as a global society, to be always vigilant about uses of emerging technology. The UN is the best tool we have to do this, and to promote multilateralism.
However, the UN is undergoing a financial crisis - countries are not paying their dues. This is wasting enormous amounts of valuable meeting time discussing finances instead of substantive issues, like new weapons.
US President Trump’s shortsightedness in pressing for UN budget reductions and the US’s payment arrears are a big part of this problem. Trump and the US are acting very dangerously in undermining the UN.
You are very familiar with this issue. How do you explain the opacity that surrounds all these experiments? Where is the world going with these foolish experiments? Isn't what we know about them just the tip of the iceberg, with what is hidden from us being more frightening?
Absolutely, it’s difficult to know what’s being built in secret. What we know about is bad enough. Currently, I am very concerned about the Turkish Kargu drone, which is a loitering munition with facial recognition. Reportedly, Turkey intends to deploy it on the border with Syria in the near future. Will they use these drones as autonomous hunter-killers targeting individuals? This throws up enormous moral questions, as well as technical accuracy concerns. Non-cooperative facial recognition in video isn’t accurate enough to use for decision-making, according to the US NIST.
[https://www.paxforpeace.nl/publications/all-publications/slippery-slope]
[https://nvlpubs.nist.gov/nistpubs/ir/2017/NIST.IR.8173.pdf]
You have just returned from the UN in Geneva, where you participated in a meeting on lethal autonomous weapons, including killer robots. In your opinion, is there real awareness of these very serious issues?
I believe there absolutely is awareness among the delegates at the UN, yes, and those in many parliaments who are responsible for foreign affairs, military and disarmament. Public awareness is growing, too. We see a definite trend over the years in public opinion - it is increasingly against autonomous weapons. This seems likely to be a result of increased awareness.
You are a member of the International Committee for Robot Arms Control (ICRAC). What are the main missions of this organization and what is its impact?
ICRAC is an interdisciplinary group of people who are interested in this problem - there are academics in technology, law, ethics, psychology and other areas, and some industry professionals like myself. ICRAC and its members do research in the area of autonomous weapons, attend meetings (like the recent CCW meeting in Geneva) where we often present papers and give talks at side events. We also talk to parliamentarians, media, other NGOs, give public talks on autonomous weapons, occasionally write or sign open letters - normal campaigning activities, really.
ICRAC is a founding member of the Campaign to Stop Killer Robots - it was one of the first organisations sounding the alarm on this issue. The fact that the issue has been seriously under discussion at both the UN Convention on Conventional Weapons and recently at the General Assembly is a testament to ICRAC’s impact.
Interview conducted by Mohsen Abdelmoumen
Who is Laura Nolan?
Laura Nolan has been a software engineer in industry for over 15 years. Most recently, she worked at Google in Ireland as a Staff Site Reliability Engineer.
She was one of the (many) signatories of the “cancel Maven” open letter, which called for Google to cancel its involvement in the US DoD’s project to use artificial intelligence technology to analyse drone surveillance footage. She campaigned within Google against Project Maven before leaving the company in protest against it. In 2018 Laura also founded TechWontBuildIt Dublin, an organisation for technology workers who are concerned about the ethical implications of their industry and the work they do.