Background & Occupation at Google
I never expected, as an Irish software engineer, to be asked to help create an artificial intelligence system for the US military to use to analyse drone surveillance video. But this is exactly what happened to me at the end of 2017.
My name is Laura Nolan. I had been working at Google’s European Headquarters in Dublin since the start of 2013. When I tell this story, people usually assume I’m an artificial intelligence engineer, but my main specialty is making software systems more scalable and reliable, although I’m also quite well informed on AI: my MSc in Software Engineering included substantial AI and machine learning content, and I have worked with machine learning systems for years. At Google, I worked on public cloud infrastructure, meaning computing services offered to third parties, who pay Google for storage, computing power, and the use of its specialised software services.
Origins of Project Maven
What Google asked me to do in late 2017 was to help modify its public cloud systems so that they would be capable of meeting the US government’s standards for processing classified data. The classified data in question turned out to be Wide Area Motion Imagery: aerial video taken by drones such as the Reaper or Predator. This work was to support an initiative called Project Maven, the pilot project of a US Department of Defense effort known as the ‘Algorithmic Warfare Cross-Functional Team’, which was set up in 2017 to draw on expertise from the private technology sector that was not otherwise available to the US military.
The idea behind Project Maven is this: the US military generates more drone surveillance footage than it can analyse using human employees. They wanted to develop a system that would use machine learning to analyse the drone video, and automatically pick out interesting things (like people and vehicles) and track their movements over time in the areas under surveillance. It even included a social-graph-like feature, so analysts would be able to click on a house on a map of an area and see where people and cars from that house had been going.
Google took on the Maven contract very quietly. For several months, very few people knew about it – including those who, like me, had been asked to work on cloud infrastructure projects that were intended to support Project Maven.
Ethical Implications of Project Maven
Maven is a military surveillance and intelligence project, and it is part of the military ‘kill chain’ – target identification, force dispatch to target, decision and order to attack the target, and finally the destruction of the target. This kind of work is a real departure for a company whose stated mission had been to ‘provide unbiased, accurate and free access to information for those who rely on us around the world.’ (https://abc.xyz/investor/founders-letters/2004-ipo-letter/). Maven has obvious ethical implications that deeply concerned me. I set out to learn more about drone surveillance and drone strikes, so I began to read books, reports, and articles about drone technology and the ethics of its use.
Firstly, there’s the direct impact of drone surveillance itself. Living for years under drone flights is psychologically stressful – it disrupts sleep, and it makes people very reluctant to gather in groups, which is harmful to communities (https://www.amnestyusa.org/files/asa330132013en.pdf). It could be reasonable to use drone surveillance in limited ways for military reasons, but constantly flying drones over areas for years (or even decades by now) is unethical and disproportionate. Project Maven, by automating the analysis process and therefore allowing more drone footage to be inspected, might lead to an amplification of drone surveillance.
Secondly, the reason for this surveillance is, at least in part, to pick out targets for airstrikes and drone strikes. For years, human rights organisations have condemned the way the US military has used drone strikes. They kill many civilians, government oversight of their use has been nonexistent or weak, and there’s been no transparency around the impact of the strikes or how they’re decided on. There’s good reason to believe that the use of drone strikes actually encourages terrorism.
‘Signature strikes’ are attacks on targets who have been selected based on some pattern of behaviour – such as gathering in groups, or visiting a house thought to belong to a terrorist. Project Maven would make it very easy to find those kinds of patterns, and so, with the right political will, it could lead to an intensification of drone strikes.
Another danger of technology like Maven is that it reduces direct human involvement in surveillance and war, and therefore increases moral distance. Anthony Swofford, a former Marine and the author of Jarhead, wrote that ‘The moral distance a society creates from the killing done in its name will increase the killing done in its name. We allow technology to increase moral distance; thus, technology increases the killing. More civilians than combatants die in modern warfare, so technology increases worldwide civilian murder at the hands of armies large and small.’ (https://www.technologyreview.com/s/614488/why-remote-war-is-bad-war/).
Leaving Google & Joining Campaign to Stop Killer Robots
I concluded that by working on Google’s technology, on which they intended to run the Maven software, I would bear some responsibility for civilian deaths. I spent weeks barely sleeping. I developed severe acid reflux for the first time in my life. I escalated to the highest levels of the management chain I could reach on both sides of the Atlantic, and told them that if the project continued, I would be leaving Google. I talked to colleagues. I signed (but didn’t write) the open letter (https://static01.nyt.com/files/2018/technology/googleletter.pdf), which was widely reported in the media.
In summer 2018, I did indeed leave Google. Google’s executives had demonstrated that they were willing to secretly take on deeply questionable contracts, such as Project Maven and JEDI. I couldn’t, in good conscience, continue to be employed at Google when I couldn’t trust that what I worked on wouldn’t be used to infringe human rights, or even kill. It was not easy to leave; I miss many friends who still work there, and I walked away from a substantial amount of unvested Google stock.
After I left Google, I took some time away from software engineering. I began to volunteer with the Campaign to Stop Killer Robots, which campaigns for a ban on autonomous weapons (weapons that select their own targets without meaningful human control). I use my technical knowledge of complex systems and software reliability to explain some of the problems we would be likely to see with these sorts of weapons: they would be unpredictable and prone to error, causing civilian deaths. I’m also partway through a Master’s degree in Ethics at Dublin City University, with a focus on technology ethics.
Promoting the Ethical Use of Technology
There’s great interest in ethical issues in technology right now – there’s been sustained discussion in the media about all sorts of issues, from privacy to automated decision-making to bias in AI systems; there have been documentaries like The Great Hack, and books like Shoshana Zuboff’s The Age of Surveillance Capitalism. This is good, but we need to make sure it leads to real, sustained change.
Those of us in technology need to start thinking of ourselves as professionals, with responsibilities to the greater good as well as to the companies we work for. We need to educate ourselves about the harm that technology can cause. Unlike more established professions, we don’t have much continuing professional development that covers ethical issues, so we have to do it ourselves. I set up a group in Dublin, where I live, whose focus is largely the discussion of technology ethics. I also think that software engineers should be part of a professional organisation or a union – groups have political power that individuals do not.
There are limits to what software engineers can do as individuals – it’s difficult to win arguments on ethical grounds when there is money to be made and a lack of legislation. Technology always runs ahead of the law, but we can narrow the gap. All of us, whether in the technology industry or not, should be asking our elected representatives about technology issues. Next time a politician knocks on your door before an election, consider asking them about their opinions on the use of facial recognition systems in your city, or whether health data should be shared with the private sector for research purposes, or about online election advertising. Research and find out what the potential issues are in your country that are not being addressed by your lawmakers. And of course, join the Campaign to Stop Killer Robots (https://www.stopkillerrobots.org/act/).
Twenty years ago, when I was in university the first time around, computers sat on our desks at work, and maybe at home, and were turned on for a few hours a week. They ran payroll and forecast the weather. Now they’re far more powerful, and they’re with us for most of our waking hours. They’re deciding who gets to interview for jobs, who gets social welfare benefits, who sees which political advertising online, and maybe even who gets killed in a drone strike. The technology industry has grown very powerful very quickly, but it hasn’t yet grown up. It’s time to make that happen.