
1 November 2022
The circulation of power and ethics in AI, robotics, and autonomy research in Australia
Key to unlocking these opportunities is solving or mitigating both the ethical and practical challenges associated with using such systems.
by Sian Troath
Sian Troath
Sian Troath is a postdoctoral fellow at the University of Canterbury, and an adjunct researcher at the Jeff Bleich Centre for the US Alliance in Digital Technology, Security & Governance at Flinders University. Her research focuses on Australian foreign and defence policy, lethal autonomous weapons systems, and theories of trust in international relations.

Autonomous vehicles, drones, swarming and collaborative robotics have together recently been announced as not only a ‘critical technology’, but a ‘critical technology of initial focus’ by the Australian government – one of a shortlist of nine priority technologies identified as essential to Australia’s economic and national security.1 Australia is seen as a leader when it comes to trusted autonomous systems, with autonomy research in the defence space escalating in recent years. Robotics and autonomous systems, or RAS, have been identified as both a threat (when in the hands of adversaries) and an opportunity for Defence (when in the hands of Defence, and allies and partners).

The opportunities identified include the following: enhanced combat capability, improved efficiency, increased mass, decision superiority, reduced risk to personnel, reduced physical and cognitive loads on soldiers, improved decision making, agility, resilience and enhanced lethality.2 Key to unlocking these opportunities is solving or mitigating both the ethical and practical challenges associated with using such systems.

In terms of practical challenges, the aim is to support collaboration between Defence, industry and academia to advance technology to a point where RAS are cheap, small and many.3 In terms of ethical challenges, five facets of ethical AI for Defence have been identified: responsibility (who is responsible), governance (how AI is controlled), trust (how AI can be trusted), law (how AI can be used lawfully) and traceability (how the actions of AI are recorded).4

This work is largely being led by the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC). As the first Defence Cooperative Research Centre, launched in 2018, it aims to bring together Defence, academia and industry to tackle challenges relating to trusted autonomous systems.5 A key focus of the centre is ethics.

Two recent articles have drawn attention to the circulation of ethics and capture regarding big technology corporations and AI research. Phan, Goldenfein, Mann and Kuch trace how particular approaches to ethics circulate across Big Tech, universities and other industries.6 They argue that ‘Big Tech has transformed ethics into a form of capital – a transactional object external to the organisation, one of the many “things” contemporary capitalists must tame and procure’.7 Whittaker explores the influence of the tech industry on AI research, arguing that reliance on large data sets, computational processing power and data storage concentrates power in the hands of a small number of large tech companies who hold such resources.8

Both of these pieces provide a useful springboard for thinking about defence AI research in Australia. It is the push and pull between profit-based industry incentives, Defence’s desire for military advantage, and academia’s growing reliance on external funding that is shaping the development of robotics and autonomous systems – not only the technological systems themselves, but also ideas about the ethics and practicality of their use. Power, ethics and narratives all circulate between these spaces. These interconnections, exacerbated by dual-use opportunities which allow technologies to be used or adapted across civilian and military purposes, require examination.

AI, robotics and autonomous systems research for Defence purposes does not take place in a vacuum – it is part of a broader ecosystem of power and dependency. Kate Crawford highlights these very power dynamics in the American context in her book Atlas of AI. As she outlines, in seeking to enact the Third Offset strategy and utilise AI and autonomy for military advantage, ‘the Department of Defense would need gigantic extractive infrastructures’ – and the only place where both the required human and technological resources can be accessed is the tech industry.9

The power flows in both directions, however, with perceptions of strategic competition driving a desire for technological superiority and creating a complex relationship between the state and industry. Indeed, Crawford argues, the dual-use nature of AI technologies has led the US to adopt civilian-military collaboration as ‘an explicit strategy: to seek national control and international dominance of AI in order to secure military and corporate advantage’.10

The push for this kind of dual-use approach is also evident in Australia. The TASDCRC was set up with investment from the Queensland state government, which has made clear that it expects the benefits of defence-focused research on trusted autonomous systems to bolster civilian industries such as agriculture, mining and environmental management.11 In the reverse direction, Kierin Joyce, Chief Engineer for Royal Australian Air Force Remotely Piloted Aircraft Systems/Unmanned Aerial Systems, argues that Australia’s world-leading autonomy research in the mining and resources sector can be adapted to a defence context.12 This all takes place in the context of an ongoing shift in approach to the defence industry, with the government aiming to enhance connections between Defence, industry and academia.13

The growing push for strengthening collaboration between Defence, industry and academia takes us into a second thread: how the dynamics of power, dependency and the circulation of ethics emanate from Defence. Whittaker lays out the cost of the capture of AI research by industry by providing the historical context of the Cold War dominance of the US military over scientific research.14 While she is right to focus on tech companies as the dominant source of this dynamic in the present day, it is important not to discount the ongoing influence of Defence and the messy relationship between civilian and military research – particularly when it comes to AI, autonomy and robotics.

In Australia, both industry and academia vie for defence research funding, with academia also focused on attracting industry funding. On the university side, COVID-19 has exacerbated pre-existing crises in the neoliberal university – leaving research and jobs increasingly reliant on accessing external funding.15 These dynamics of power and dependence influence how research develops. In Whittaker’s words,

This doesn’t mean that researchers within these domains are compromised. Neither does it mean that there aren’t research directions that can elude such dependencies. It does mean, however, that the questions and incentives that animate the field are not always individual researchers’ to decide. And that the terms of the field—including which questions are deemed worth answering, and which answers will result in grants, awards, and tenure—are inordinately shaped by the corporate turn to resource-intensive AI, and the tech-industry incentives propelling it.16

Phan et al. also highlight these difficulties regarding both government and industry funding, pointing out that the mere selection of what will get funded, influenced by various interests, creates ‘a set of dilemmas and paradoxes’ for researchers.17

It is interesting to note, then, the funding choices that direct money to ethics in AI for Defence and to trusted autonomous systems, taking us back to the circulation of ethics in the messy civilian-military relationship. Phan et al. describe the corporate logic at play, which views ethics as a problem to solve, as something to acquire to bolster legitimacy. There is a similar logic at play in the Defence arena: ethics are seen as something to acquire, a problem to solve, in order to succeed in the quest for military advantage. The TASDCRC commenced a $9 million, six-year Programme on the Ethics and Law of Trusted Autonomous Systems in 2019.18 Further to this is the establishment of the TASDCRC Ethics Uplift Program, which aims, among other things, to ‘build enduring ethical capacity in Australian industry and universities to service Australian RAS-AI’ and ‘educate in how to build ethical and legal autonomous systems’.19 TASDCRC CEO Jason Scholz has said that ‘ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions’.20 These efforts all rely on the industry-academia-military relationship at their core.

Meanwhile, Defence has itself emphasised that ‘the ethics of AI and autonomous systems is an ongoing priority’.21 Air Vice-Marshal Cath Roberts has spoken of the need to ‘ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed’, given the vital role AI and autonomy will play for air power.22 A group of researchers from or associated with the TASDCRC go further still, arguing that autonomous weapons will be ‘ethical weapons’, able to make war ‘safer’.23 Again, ethics are viewed as a problem to be solved. In the corporate space, ethics are to be solved for profit. In the defence space, ethics are to be solved for military advantage and, relatedly, to facilitate acceptance and trust of such technologies by both personnel and the public.

The approach to ‘trust’ is the same: a problem to be solved. If an autonomous system can be trusted, then it can be used – ethically as well as legally – in the pursuit of military advantage. Autonomous systems need to be trusted by the personnel who will use them so that they are willing to use them,24 and they need to be trusted by the public so that Defence retains its licence to operate. It is an interesting Australian quirk that such systems are not lethal autonomous systems, or even merely autonomous systems, but rather trusted autonomous systems.

None of this is to say that research on either ethics or trust in technology is inherently suspect. Rather, it is to point out that the terms ‘ethical’ and ‘trusted’ in AI, robotics and autonomy research are being used for political purposes – sometimes intentionally, sometimes unintentionally. As someone who has conducted research on trust in technology under a Defence contract, I was once in a room where someone asked, ‘Can’t we use trust as a weapon?’ Militarism is quite the drug.

These are threads which still need further unravelling. The circulation of ethics and power at the intersection of academia, industry and defence when it comes to AI, robotics and autonomy research in Australia demands further exploration. It is the intertwining of corporate interests seeking profit, Defence motivations for military superiority, and academia’s desperation for external funding that is shaping AI, robotics, and autonomous systems research in Australia. The narratives being deployed to achieve these aims require careful consideration.

Notes:

  1. Critical Technologies Policy Coordination Office, Australian Government, ‘The Action Plan for Critical Technologies’, 2021. See also Critical Technologies Policy Coordination Office, Australian Government, ‘Blueprint for Critical Technologies’, 2021.
  2. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 8; Australian Army, ‘Robotic & Autonomous Systems Strategy’, 2018, p. 6; Royal Australian Navy, ‘RAS-AI Strategy 2040: Warfare Innovation Navy’, 2020, p. 14.
  3. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 20.
  4. Kate Devitt, Michael Gan, Jason Scholz, and Robert Bolia, ‘A Method for Ethical AI in Defence’, Australian Government Department of Defence, 2020, p. ii.
  5. Trusted Autonomous Systems, ‘About Us’, n.d., https://tasdcrc.com.au/about-us/.
  6. Thao Phan, Jake Goldenfein, Monique Mann, and Declan Kuch, ‘Economies of Virtue: The Circulation of “Ethics” in Big Tech’, Science as Culture, 2021, pp. 1-15.
  7. Phan et al., p. 1.
  8. Meredith Whittaker, ‘The Steep Cost of Capture’, Interactions XXVII.6 (November-December), 2021, pp. 51-55.
  9. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press: New Haven, 2021, p. 188.
  10. Crawford, p. 187.
  11. Queensland Government, ‘Queensland Drones Strategy’, 2019, p. 29.
  12. Robbin Laird, ‘The Quest for Next Generation Autonomous Systems: Impact on Reshaping Australian Defence Forces’, 25 May 2021, https://defense.info/re-shaping-defense-security/2021/05/the-quest-for-next-generation-autonomous-systems-impact-on-reshaping-australian-defence-forces/. 
  13. Australian Department of Defence, ‘2016 Defence Industry Policy Statement’, 2016; Melissa Price, ‘Op-Ed: Five Pillars Approach to Support Defence Industry’, 24 September 2020, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/op-ed-five-pillars-approach-support-defence-industry; Melissa Price, ‘Defence Innovation System Goes Under Microscope’, 3 September 2021, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/defence-innovation-system-goes-under-microscope.
  14. Whittaker, p. 52.
  15. Phan et al., p. 8.
  16. Whittaker, p. 52.
  17. Phan et al., p. 2.
  18. Trusted Autonomous Systems, ‘TASDCRC Activity on Ethics and Law of Trusted Autonomous Systems’, 12 February 2021, https://tasdcrc.com.au/tasdcrc-activity-on-ethics-and-law-of-trusted-autonomous-systems/. 
  19. Ibid.
  20. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/. 
  21. Australian Government Department of Defence, ‘Defence Releases Report on Ethical Use of AI’, 16 February 2021, https://news.defence.gov.au/media/media-releases/defence-releases-report-ethical-use-ai. 
  22. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/.
  23. Jason B. Scholz, Dale A. Lambert, Robert S. Bolia, and Jai Galliott, ‘Ethical Weapons: A Case for AI in Weapons’, in Steven C. Roach and Amy E. Eckert (eds.), Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems, State University of New York Press: Albany, 2020, pp. 181-214.
  24. Jai Galliott and Austin Wyatt, ‘Risks and Benefits of Autonomous Weapon Systems: Perceptions Among Future Australian Defence Force Officers’, Journal of Indo-Pacific Affairs, Winter 2020, pp. 17-34; Jai Galliott and Austin Wyatt, ‘Considering the Importance of Autonomous Weapon System Design Factors to Future Military Leaders’, Australian Journal of International Affairs, 2021, pp. 1-26.
