R.A. the Rugged Man: Hate Speech

Nov 4, 2022 by Shane Alexander Caldwell

Hate Speech is a concept film produced by the Circus Bazaar Company and distributed by Nature Sounds Entertainment.

Visit the official release: YouTube
Internet Movie Database: IMDb

Official credits
Produced by Shane Alexander Caldwell
Directed by Linn Marie Christensen & Shane Alexander Caldwell
Written by Shane Alexander Caldwell
Director of photography | Andreas Nesse
Edited by R.A. the Rugged Man
Composed by Teddy Roxpin
Costume Designer | Julie Filion
Special Effects by Julie Filion
Associate Producers | Linn Marie Christensen & Colin Hagen Åkerland

Cast in Order of Appearance
Rachael Robbins | Aase-Marie Sandberg El-Sayed
The Bald Nazi | Christian A. Sterk
Mad Ass Twerker | Tone Sørbøen Gasbakk
Tranny | Tom Rikard Ostad
Pissing Cracker | Kristen Nordal Ingolfsdottir
R.A. the Rugged Man | Himself
Rag Headed Peasant | Anders Petterøe
Peasant Gimp | Sondre Larsen
Peasant Mother | Christina Christensen
The Priest's Spy | Ania Nova
Sad Kid | John John Thorburn
Filthy Priest | Shane Alexander Caldwell
Hair Picking Freak | Martin Lax
Trump Supporter | Jakob Ole Nordman
Mask Face | Frida Synnøve Dehlin
Big Fuck Off Executioner | Lillegutt Bøhmer
Clown Women | Christina Christensen
Flag Burner | David A. Lunde
Hateful Girl | Ania Nova

Militarised Police
Jarne Byhre
Marianne Lindbeck
Ørjan Steinsvik
Mia Elise Sundal
Håkon Smeby
Shane Alexander Caldwell
Colin Hagen Åkerland

Dog Women | Sandra Hedstrom
Someone Random | Line Marie Winther
Radical Feminist | Mari Åse Hajem
The Mad Prepper | Kristin Nordal Ingolfsdottir
Maga Man | Markus Fu
Proud Boy | Lasse Josephsen

The Book Burners
Eline Irja Korpi
Radek Silewicz
Julie Filion
Colin Hagen Åkerland

Nerd | Fredrik Hovdegård
Hot Mammas | Kamilla Berg & Plata Diesen
Uzi Shooter | Frida Synnøve Dehlin

Image Credit // The Circus Bazaar Company

Production Designer | Linn Marie Christensen
First Assistant Director | Caroline Andresen
Production Assistant | Colin Hagen Åkerland
Drone Operator | Trond Bergfald
Covid Manager | Markus Hempton
Stunt Coordinator | Christel Jørgensen

Stunt Performers
Maria Hansen
Evert Anton Steen
Martin Lax

Production Department
Gaffer | Bendik D. Antonsen
Focus Puller | Christer Smital
Best Boy | Roar Midtlien
2nd Assistant Camera | Trym Bertheussen Falkanger
Hair/Makeup | Maria Magdalena Ly Auraaen
Weapons | Huw William Hægeland Reynolds

Props
Tom Barnard
Plata Diesen
Marcin Lubas
Jens-Erik Wielsgaard Langstrand

Talent Supervisor | Kamilla Berg
Caterer & Craft Services | Cafe Riss

Post Production
Sound Design by This Old Man
Colour by Shanon Moratti with the Circus Bazaar Colour Tablet
VFX Editor | Sarp Karaer
Illustrations | Maria Borges & Yevhen Mychak
Technical Supervisor | Shanon Moratti
Assistant Editor | Linn Marie Christensen
Titles Design by Shane Alexander Caldwell & Zac Rogers

2nd Unit
Producer | Shane Alexander Caldwell
Director | Linn Marie Christensen & Shane Alexander Caldwell
DOP | Daniel James Aadne
Special Effects by Julie Filion

Executive Producers
The Circus Bazaar Company AS & Pty Ltd
Viken Filmsentre AS
Halden Kommune
Moss Kommune
XL-Bygg Knatterudfjellet
Moss I Sentrum

Special Thanks
Circus Bazaar Magazine
Steven Simonsen

Legal Supervisors
Bull & Co Advokatfirma AS
Bing Hodneland AS
Cowan, DeBaets, Abrahams & Sheppard LLP

Shooting Locations
Fredriksten fortress
Verket Scene

Finalisation by the Circus Bazaar Company

Copyright ©
Nature Sounds Entertainment
The Circus Bazaar Company AS/Pty Ltd

A chicken nearly broke this film
Fuck the Chicken (Sandra)

Catching tigers in red weather and the falling human

Nov 1, 2022 by Zac Rogers

Every technology comes to be used to meet the needs of its time. Those needs interact with the often hidden-from-view affordances that reside in the tech to radically skew the intentions with which it might have been conceived and developed. When those affordances are geared to exploit network effects, and lock-in is pursued by monopolists at scale, no one can predict what happens next. Need trumps everything. A 2003 independent report commissioned by the Pentagon1 was tasked with considering the geopolitical and national security implications of a worst-case climate change scenario. Widely ignored and roundly dismissed at the time as alarmist (it was avowedly alarmist by design), the report’s premises, nearly two decades on, now fall easily within the scope of plausible near-term scenarios.

Historical evidence suggests that long periods of slow warming are consistently followed by dramatic falls in global temperature. Scientists believe these sharp falls are caused by the thermohaline conveyor – the current that mixes and moves heat around the world’s oceans – shutting down. Trapped heat means a hotter, wetter equator and more polar ice, while the intermediary zones – the site of the world’s grain production – become windier and drier.

The impact on food production and transit is catastrophic. Before rapid and severe climate change means anything at all, it means a precipitous decline in the earth’s carrying capacity. In other words, mass starvation. Mass starvation means the mass uncontrolled movement of people. The report’s main prediction was conflict and danger spreading like a Californian wildfire.

The report was released when the United States government was readying itself to invade the sovereign state of Iraq. The administration saw two main opportunities. First, control of Iraq’s large oil reserves would be increasingly important for US security-of-supply in the coming years. Second, at the Pentagon under Secretary Rumsfeld, a new way of war was being conceived. Large bases and heavy boot prints were to be replaced with a type of bit-torrent war, enabled and driven by the new era of networked digital telecommunications. Smaller and more agile forces, able to move, assemble, and disassemble rapidly anywhere on the globe, fed situational awareness by a global information grid, formed Rumsfeld’s vision.

9/11 created new needs and accelerated trends already underway. The information age created novel challenges for national security, not the least of which was what to do with all this information. Would it actually be more useful? Or more of a hindrance? Sometime in the first decade of the twenty-first century, human civilisation swept past an inflection point with regard to information and knowledge. The vast tail of history before the Internet harboured an information scarcity problem. In a vertigo-inducing heartbeat, that became an overload problem. 

The houses are haunted
By white night-gowns.
None are green,
Or purple with green rings,
Or green with yellow rings,
Or yellow with blue rings.
None of them are strange,
With socks of lace
And beaded ceintures.
People are not going
To dream of baboons and periwinkles.
Only, here and there, an old sailor,
Drunk and asleep in his boots,
Catches tigers
In red weather.13

Wallace Stevens

Killer apps

A solution had to be sought that would prevent the generational wealth invested in digital technologies by the United States from becoming a wasting asset. Artificial intelligence, or more accurately, statistical inference software capable of inferring patterns within large digital data sets, became the Emperor’s New Clothes. Iraq and Afghanistan would be its military testing grounds. Blurring with the civilian domain, feedback loops that serve and return information based on past activity are the Internet’s basic sorting mechanisms. Enabling the statistical inference of these loops has been the killer app of the AI era.   
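
The serve-and-return loop described here can be made concrete with a toy example. The sketch below is purely illustrative: the topics, the scoring rule and the decay factor are invented for this essay and do not describe any real platform's ranking system. It only shows the basic mechanism, in which each interaction updates an inferred preference and the next ranking is a statistical inference over past activity.

```python
from collections import defaultdict

class FeedbackLoopRanker:
    """Toy model of a serve-and-return feedback loop.

    Each click raises the inferred preference for that topic; the next
    ranking simply projects past behaviour back at the user.
    """

    def __init__(self, decay: float = 0.9):
        self.scores = defaultdict(float)  # topic -> inferred preference
        self.decay = decay

    def record_click(self, topic: str) -> None:
        # Decay old evidence, then reinforce the clicked topic.
        for t in self.scores:
            self.scores[t] *= self.decay
        self.scores[topic] += 1.0

    def rank(self, candidates: list[str]) -> list[str]:
        # Serve items ordered by inferred preference: past activity
        # shapes what is returned, which shapes future activity.
        return sorted(candidates, key=lambda t: self.scores[t], reverse=True)

ranker = FeedbackLoopRanker()
for clicked in ["politics", "politics", "sport"]:
    ranker.record_click(clicked)

print(ranker.rank(["cooking", "sport", "politics"]))  # politics ranked first
```

Even in this crude form, the narrowing effect is visible: whatever was clicked yesterday crowds the top of today's list, which is the dynamic Carr describes as making everybody's world a little smaller.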

Sorting information in such a manner comes at a cost. As Nicholas Carr wrote in 2011,2 using the world-spanning, globe-connecting Internet has actually made everybody’s world a little smaller. That the Internet was causing changes in the brain was no accident.3 The human cognitive system, with all of its bugs and vulnerabilities, was central to the largest growth industries of the early twenty-first century.4 The commercial domain was the centre of innovation and growth, with government providing seed funding and often following in tow as a customer for its products and services. Unsurprisingly, science mixed with pseudoscience and commercial incentives in irreversible ways,5 producing endless tropes taken as gospel by the sector’s often breathless acolytes, particularly in bureaucracy and finance.

While commercial Big Tech and the national security state typically take the brunt of a growing backlash, less public attention has been directed at their primary sources of intellectual legitimacy. The streams of scientism that came to feed much economic and behavioural theorising, of the type awarded the Bank of Sweden Prize and lauded by no less influential institutions than the World Bank, have their origins in the world’s most prestigious universities. Harvard’s ‘Nudgers’, Stanford’s ‘Captologists’, and MIT’s ‘Social Physics’ cohorts appeared to take the digital information age as a sign that historically devastating critiques of behaviourism and positivism no longer applied or could now be more readily ignored. One reason might simply be that the demand for a datafied episteme was generated as a consequence of the oversupply of data; Goodhart’s Law be damned.6  Another reason may be that when they surveyed their surrounds for informed political opposition to these historical tropes, they encountered a neoliberal wasteland propelling them forward. 

For all the influence of academic theorising, commercial and financial imperatives ruled. A common trope about the virtues of widespread automation is the promise that it will ‘free up’ human beings to pursue more creative and productive ventures. On the contrary, automation-for-the-sake-of-it frees up human attention so that it may consume more distraction product. Why bother with rote tasks and tactile experience when all that surplus attention can be burned in capitalism’s new furnaces? The commercial titans of the digital age peddle distraction, not development. As John Gray wrote in 1998,7  the chief engine of capitalism in the wake of modernity was the rising demand for divergence. 

Bait-and-switch and a fat tail

Iran is now the controlling power in Iraq. The withdrawal of US and allied forces from Afghanistan heralds the end of an experimental period in American strategic culture for which mounting strategic costs are the main outcome. The rapid and severe climate change flagged in 2003 is playing out amidst an attempt by global corporate, financial, monetary, and bureaucratic authorities to shift away from the institutional mediation of resource allocation of the post-war era to an infrastructure of algorithmic mediation. The consequences of the experiment are unstated, yet clear. As climate crises and the impacts on human security play out, the command and control of human behaviour, most importantly human movement, will be pivotal in determining stakeholder advantage. 

Every technology is used to meet the needs of its time. The intentions and dreams of its progenitors are irrelevant. And so it appears to go for the rise, fall, and return of behaviourism. The needs of the coming era for mass population control, amidst the declining conditions in human security, will become the defining character of a regime of digital technologies sold by corporate entities to the public-at-large as novel and convenient. Mass surveillance, behavioural prediction and modification, and various forms of cognitive simulation are already the chief manifestations of the digital economy. 

Developed and scaled as advertising disruption, and as an expansion of the profiling and scoring industry boosted by the electronic telecommunication boom of the 1970s, digital ICTs curated by AI are uniquely applicable to automated command and control activities.8  They harbour this affordance. In a stunning episode of bait-and-switch, scholars are now exploring how their incursion on every facet of human behaviour and cognition under the guise of market productivity and consumer want has achieved scale. They need look no further than the early 2000s literature on how network effects achieve lock-in,9  which much of the industry read and appropriated.       

We grab anything when we fall 

These were business strategies for getting rich in the digital age before they were the tools and methods of C2. Which only reinforces the point about technologies meeting the needs of their time, regardless of intent. One of the oldest and most widespread myths about technology is the gnostic belief that it harbours a type of mystery, which the adept society alone can tap and bend to their want. In fact, no such thing exists. The gnostic awe for technology is a dangerous spillage of a latent monotheism; it delivers to its believers exactly the same service: a view of history as having a hidden meaning. 

Technology is more like detritus than magic dust. Its effects linger and distort, long after the devout are gone. As George Dyson notes, constant mediation by statistical inference machines is already distorting social, political, and economic relations in both open and closed societies.10 For all the dystopian and futuristic themes popularly associated with high technology, the chief danger may be simply of needless regression. As Jane Jacobs foresaw,11 when the tireless work of sustaining hard-won civilisational gains is subordinated to infantile fantasies of the future, backsliding will be fast and easy. 

A darker reality, however, stalks modernity’s wake. The technologies of control which have been scaled and locked in under the yoke of techno-fetish are already the object of a quickening geopolitical contest. Apparently unable to conceive of an alternative, corporate and financial elites, and their often witless shills in government, have ridden late modernity like an express train heading for the edge of a cliff. Able but unwilling to change course or even slow down, they are now fully invested in preparing for themselves a survivable landing when the time comes to disembark.  

There is not a single government of any consequence on the face of the earth that is ‘denying’ climate change. Those with any capacity to do so are preparing for the types of scenarios outlined in the Pentagon report. Powerful states such as Germany, Japan, China and the US will strategise to quarantine themselves from the growing disorder, ensure access to supply chains, resources and transit zones, and prepare to defend these advantages by force. Weak states with strong leadership, such as Russia, strategise to profit from a dangerous yet lucrative spoiler role under cover of nuclear arms. States of little or no capacity are left to twist in the wind. 

The Emperor has no clothes

Statistical inference software, crawling over huge streams of data and predicting inscrutable futures, is an arresting vision of technological prowess. But it is a vision of absurdity. Data is a recorded digital abstraction of a state of the world past. As much information is missing as is present, perhaps a great deal more. AI, marketed as ready to solve climate change, conflict and scarcity on the same day, is not going to do these things. What it will do is what it is already doing, which is to distort and disable the capacity for any collective political response to the wants of capital. 

The techno-fetish is a cul-de-sac both erected and defended by capital, which enables it to feed off its own waste. East and West. The wake of modernity accommodates a techno-political struggle that is little more than a harlequinade of stagnant and duelling monisms.  

At least since the birth of monotheism, humankind has been intoxicated by an idea of itself unfolding in history. Such a vision provides a unique salve for an intolerable condition to which every human is vulnerable: the possibility of meaningless suffering. The decline of formal religion only drove the intoxicated to the next well. Science and technology are now secular religions that supply the devout with stories of their place in history, and with reasons to regard said history as having order and meaning. Such an order forms an arc of history for the world-fixing, life-improving cohort, the wrong side of which can only be occupied in error. The arc includes and excludes ‘behaviour’ formulated with all the cultural force of pop science.

It is an unmistakably human folly, and for that, it must be forgiven. Cognitive dissonance is the condition of being human, not a behavioural bug that can be vaccinated against. But such folly comes at a grave cost. As John Gray has written, the need to posit meaning in suffering incurs a price in delusion.12  As gnostic visions of technological oracles, supplying humans with supernatural power, become common tropes, the price steadily rises. Subordinating their cognitive capacities to a machine episteme has only prepared humans to be utterly unprepared. 

Nature never locked in

Notes

  1. Peter Schwartz and Doug Randall, ‘An Abrupt Climate Change Scenario and Its Implications for United States National Security’, October 2003, https://eesc.columbia.edu/courses/v1003/readings/Pentagon.pdf.
  2. Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, Updated Edition (W. W. Norton & Company, 2020).
  3. Gary W. Small et al., ‘Brain Health Consequences of Digital Technology Use’, Dialogues in Clinical Neuroscience 22, no. 2 (June 2020): 179–87, https://doi.org/10.31887/DCNS.2020.22.2/gsmall.
  4. Howard E. Gardner, The Mind’s New Science: A History of the Cognitive Revolution (Hachette UK, 2008).
  5. Philip Mirowski, Science-Mart (Harvard University Press, 2011).
  6. Zac Rogers, ‘Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven’, Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.
  7. John Gray, False Dawn: The Delusions of Global Capitalism (Granta Books, 2015).
  8. Jeremy Packer and Joshua Reeves, Killer Apps: War, Media, Machine (Duke University Press, 2020).
  9. Albert-László Barabási, Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life (Plume, 2003).
  10. George Dyson, ‘Childhood’s End’, Edge (blog), January 1, 2019, https://www.edge.org/conversation/george_dyson-childhoods-end.
  11. Jane Jacobs, Dark Age Ahead (Knopf Doubleday Publishing Group, 2007).
  12. John Gray, Feline Philosophy: Cats and the Meaning of Life (New York: Farrar, Straus and Giroux, 2020).
  13. Wallace Stevens, ‘Disillusionment of Ten O’Clock’, The Palm at the End of the Mind: Selected Poems and a Play (Knopf Doubleday Publishing Group, 2011), 11.

On Data, Devices and Daemons

Nov 1, 2022 by Lesley Seebeck

The world changed in 2007. It’s hard to imagine now, but there was a time before smartphones. The breakthrough device, of course, was the iPhone. We’d had constant availability, whether through Blackberries, HTC devices or Nokias before then. But the iPhone brought with it a platform that permitted the development of a range of other tools—apps—plus the means of accessing them. Others, naturally, copied it and built new business models that have changed our societies. The ‘dark side’ of such availability, and apps tailored to our immediate needs, was the quantification of the human. Through our use of apps into which we add information about ourselves, on devices with geolocation, we generate data that in turn is collected by the companies that build the platforms and the hardware. Consequently, each of us is reduced to data that exposes our physical presence, our social connections, the state of our health, our patterns of behaviour, and our very thoughts and aspirations to a range of commercial interests—some apparent, many less so—and beyond them, to a range of other organisations, including criminal groups and governments.
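
To make “reduced to data” concrete, the sketch below shows the kind of record a single app interaction might emit. Every field name and value here is invented for illustration; it does not describe any particular platform’s actual telemetry schema, only the general shape of what location-aware, account-linked apps can collect.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AppEvent:
    """Hypothetical telemetry record emitted on one app interaction."""
    device_id: str             # stable hardware/advertising identifier
    user_id: str               # account-level identifier
    event: str                 # what the user did
    latitude: float            # geolocation at the moment of the event
    longitude: float
    timestamp: str             # when it happened
    social_context: list[str]  # accounts interacted with

event = AppEvent(
    device_id="hypothetical-device-123",
    user_id="user-456",
    event="viewed_health_article",
    latitude=59.43,
    longitude=10.66,
    timestamp=datetime.now(timezone.utc).isoformat(),
    social_context=["friend_a", "friend_b"],
)

# One tap becomes a durable, joinable record of location, interest and ties.
print(json.dumps(asdict(event), indent=2))
```

Joined across apps and over time, records like this are what expose physical presence, social connections, health and habits to whoever holds them.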

Our devices devour our attention, too.  We are constantly distracted by the dopamine hits from checking email, checking threads of discussions on messaging apps, or the latest photos from our friends or ‘influencers’.  It’s not unlike the constant attention one must pay a particularly demanding toddler, but it also shapes our behaviour.  As such, and precisely because it is as untameable as a recalcitrant toddler, it can prove life-threatening rather than life-enhancing.1  

The idea that the human can be separated and divorced entirely from technology is unrealistic.  Technology is fundamentally a human artefact; its use reflects human values, human motives and human priorities.  So, in the modern world, rather than seeking to treat technology, data and devices as something separate—and easily separable—from the human, perhaps a better way is to think of them as intrinsically tied to our concept of the individual, of the self.  

Doing so would allow for a more sophisticated discussion about personal rights, freedoms, the right to sanctuary, and the ability to express oneself and contest ideas without fear or favour, rather than one grounded in now-outdated concepts of property, artefacts and carriage.  It may also help us build technology better suited to the limits of human cognition. Further, it’s evident that in a digital democracy we need better ways to consider what a citizen is with regard to their rights, roles and responsibilities, including their relationship with governments and commercial entities. There are few appropriate concepts ready to hand.  But fiction offers some help.  For example, we could think of our relationship with our data and our devices by drawing on Philip Pullman’s His Dark Materials trilogy.2

In those books—the first dramatised as a movie,3 and subsequently launched as an HBO/BBC1 series4 —each human has their own daemon.  These daemons are the external representation of their person’s inner self.  Daemons may manifest in different physical forms until a person reaches their maturity, whereupon they settle on one.  To touch another person’s daemon is taboo: even in battle soldiers will avoid touching the daemons of others.  But daemons themselves can interact with, or attack, another daemon.  

Separation from a daemon causes discomfort over short distances of a few metres or so, increasing to real physical pain at longer distances.  Excision, the complete cutting of the link between a person and their daemon, is one of the most evil things that can be done.  After excision, the person is lesser than before, often left without personality, while the daemon is left a ghost, constantly seeking but unable to receive comfort.

There are some parallels with our modern world.  Most of us have a digital shadow—not quite a twin, but part of who we are—that is located in the internet and on our devices.  These shadows are a combination of data, information, our virtual expressions and online behaviours, and increasingly our own peculiar twists on algorithms.  Our means of accessing, transforming and interacting with that digital shadow may, like daemons before their human partners reach maturity, change shape, from iPhones to Android devices.  But the essential character of our digital shadow, like daemons, remains the same.  

In normal human company, it’s polite to ask before touching another’s device—a rare request—which reflects norms we have developed over time and echoes Pullman’s taboo about touching another’s daemon.  But our current reality is that in cyberspace, we are forced to sign inscrutable, choice-free terms and conditions, assigning our personal data—representations of ourselves—and our rights to companies.  Governments, too, encroach into the same space—exerting control over personal data and digital rights, even our identity—in the name of safety and security.  

And we have little choice: in a digital society and economy we cannot live effectively without that digital presence and persona.  It’s how we access banking, government services, health information, friends and family, emergency warnings, and so on.  COVID-19 has exacerbated that dependence.  For primarily white-collar workers, lockdowns and distancing meant working from home and relying on internet services.  This has led to the further intrusion of work time, school time, and company systems into personal and family lives.  As we move out of lockdowns, we are now required to use our devices to sign in, to gain access, to justify our presence, wherever we go, leaving digital trails for governments, and others, to follow.

Further, separation from devices can cause anxiety, distress and impaired cognition;5 the loss of a device can mean we lose aspects of ourselves, that which we have entrusted to or recorded on that particular device.  This is part of the concern around cybersecurity, too: the loss of operational devices and corruption of personal and organisational data, at scale, may well cause disruption at the societal level.

We have yet to build the norms that protect our identity, our data, our aspirations, and even our own algorithms in the digital world.  These live, like daemons, in both the real world and a netherspace—an equivalent to the supranational cloud—that’s outside the normal experience of people or institutions.  Nonetheless, we need to find a way to recognise our digital shadows, and enable those parts of our being—our personal data, applied algorithms, our thoughts, our social relationships and interactions, and the technology housing them—to be recognised as our own and protected irrevocably.    

Sure, evoking daemons is an artifice.  But doing so may help coalesce some of the debate around data, devices, and the individual’s rights and freedoms on the internet, giving us a means through which we can coherently manage our relationships with the tech platforms, data collectors, algorithm developers and governments, as fully enfranchised citizens.  It may also help us define and seek personal sanctuary—the right to privacy, exclusion from surveillance, a sphere of personal safety in the digital world, and our fundamental right to self-definition and our own identity, whether in the physical or the digital world.

  1. John Spencer, ‘The Perils of Distracted Fighting,’ Wired, October 9, 2019, https://www.wired.com/story/the-dangers-of-distracted-fighting/.
  2. See https://www.philip-pullman.com/hdm
  3. https://www.imdb.com/title/tt0385752/
  4. https://www.imdb.com/title/tt5607976/
  5. Amit Chowdhry, ‘IPhone Separation Anxiety Hinders Cognitive Abilities, Says Study,’ Forbes, January 13, 2015, https://www.forbes.com/sites/amitchowdhry/2015/01/13/iphone-separation-anxiety/.

Witnessing Algorithms and the Paradox of Synthetic Media

Nov 1, 2022 by Michael Richardson

Synthetic media are everywhere. Digital images and objects that appear to index something in the world but do nothing of the sort have their roots in video games and online worlds like Second Life. However, with the growing appetite for niche machine learning training sets and artificial environments for testing autonomous machines, synthetic media are increasingly central to the development of algorithmic systems that make meaningful decisions or undertake actions in physical environments. Microsoft AirSim is a prime example of the latter, an environment created in Epic’s Unreal Engine that can be used to test autonomous vehicles, drones and other devices that depend on computer vision for navigation. Artificial environments are useful testing grounds because they are so precisely manipulable: trees can be bent to a specific wind factor, light adjusted, surface resistance altered. They are also faster and cheaper places to test and refine navigation software prior to expensive material prototyping and real-world testing. In machine learning, building synthetic training sets is an established practice. Synthetic media are particularly valuable in contexts such as armed conflict, where images might be too few in number to produce a large enough corpus and too classified to be released to either digital piece workers for tagging or private sector developers to train algorithms.
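
As a rough illustration of why such environments are so precisely manipulable, the sketch below uses the open-source `airsim` Python client to dial up weather in a running AirSim simulation and pull a rendered camera frame. The default camera name ("0"), the multirotor vehicle setup and the output filename are assumptions about a stock configuration; this is an untested sketch of the workflow, not a production capture pipeline.

```python
import airsim  # open-source Python client for Microsoft AirSim

# Connect to a running AirSim simulation (assumes the default multirotor setup).
client = airsim.MultirotorClient()
client.confirmConnection()

# The environment is manipulable on demand: here, rain intensity is set directly.
client.simEnableWeather(True)
client.simSetWeatherParameter(airsim.WeatherParameter.Rain, 0.25)

# Request a rendered scene image from the default front camera ("0").
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene)  # compressed PNG by default
])

with open("synthetic_frame.png", "wb") as f:
    f.write(responses[0].image_data_uint8)
```

Scripting a loop over weather, lighting and object placement is what turns a game engine into a cheap, repeatable source of test imagery before any real-world trial.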

But what happens when synthetic media are marshalled to do the activist work of witnessing state and corporate violence? What are we to make of the proposition that truths about the world might be produced via algorithms trained almost exclusively with synthetic data? This essay sketches answers to these questions through an engagement with Triple Chaser, an investigative and aesthetic project from the UK-based research agency Forensic Architecture. Founded in 2010 by architect and academic Eyal Weizman and located at Goldsmiths, Forensic Architecture pioneers investigative techniques using spatial, architectural, and situated methods. Using aesthetic practice to produce actionable forensic evidence, their work appears in galleries, court rooms, and communities. In recent years, they have begun to use machine learning and synthetic media to overcome limited publicly available data and to multiply by several orders of magnitude the effectiveness of images collected by activists. My contention in this essay is that these techniques show how algorithms can do the work of witnessing: registering meaningful events to produce knowledge founded on claims of truth and significance.

Presented at the 2019 Whitney Biennial in New York, Triple Chaser combines photographic images and video with synthetic media to develop a dataset for a deep learning neural network able to recognise tear gas canisters used against civilians around the world. It responds to the controversy that engulfed the Biennial following revelations that tear gas manufactured by Safariland, a company owned by Whitney trustee Warren B. Kanders, was used against protestors at the US-Mexican border. Public demonstrations and artist protests erupted, leading to significant negative press coverage across 2018 and 2019. Rather than withdraw, Forensic Architecture submitted an investigative piece that sought to demonstrate the potential for machine learning to function as an activist tool. 

Produced in concert with Praxis Films, run by the artist and filmmaker Laura Poitras, Triple Chaser was presented as an 11-minute video installation. Framed by a placard explaining the controversy and Forensic Architecture’s decision to remain in the exhibition, viewers entered a severe, dark room to watch the tightly focused account of Safariland, the problem of identifying tear gas manufacturers, the technical processes employed by the research agency, and its further applications. Despite initial intransigence, the withdrawal of eight artists in July 2019 pushed Kanders to resign as vice chairman of the Museum and, later, announce that Safariland would sell off its chemicals division that produced tear gas and other anti-dissent weapons. Meanwhile, Forensic Architecture began to make its codes and image sets available for open source download while applying the same techniques to other cases, uploading its Mtriage tool and Model Zoo synthetic media database to the code repository GitHub. A truth-seeking tool trained on synthetic data, Triple Chaser reveals how witnessing can occur in and through nonhuman agencies, as well as and even in place of humans. 

In keeping with the established ethos of Forensic Architecture, Triple Chaser demonstrates how forensics – a practice heavily associated with policing – can be turned against the very state agencies that typically deploy its gaze. As the cultural studies scholar Joseph Pugliese points out, ‘[E]mbedded in the concept of forensic is a combination of rhetorical, performative, and narratological techniques’1  that can be deployed outside courts of law. For Weizman, the fora of forensics is critical: it brings evidence into the domain of contestation in which politics happens. In his agency’s counter-forensic investigation into Safariland, tear gas deployed by police and security agencies becomes the subject of interrogation and re-presentation to the public. In this making public, distinctions and overlaps can be traced between different modes of knowledge making and address: the production of evidence, the speaking of testimony, the witnessing of the audience. But how might we understand the role of the machine learning algorithm itself? And what are we to make of this synthetic evidence?

Weizman describes the practice of forensic architecture as composing ‘evidence assemblages’ from ‘different structures, infrastructures, objects, environments, actors and incidents’.2  There is an inherent tension between testimony and evidence that forensics as a resistant and activist practice seeks to harness by making the material speak in its own terms. As a methodology, forensic architecture seeks a kind of ‘synthesis between testimony and evidence’ that takes up the lessons of the forensic turn in human rights investigation to perceive testimony itself as a material practice as well as a linguistic one. Barely detectable traces of violence can be marshalled through the forensic process to become material witnesses, evidentiary entities. But evidence cannot speak for itself: it depends on the human witness. Evidence and testimony are closely linked notions, not least because both demarcate an object: speech spoken, matter marked. Testimony can, of course, enter into evidence. But I think something more fundamental is at work in Triple Chaser. It doesn’t simply register or represent: it is operational, generative of relations between objects in the world and the parameters of its data. Its technical assemblage precedes both evidence and testimony. It engages in a witnessing that is, I think, nonhuman. Triple Chaser brings the registering of violations of human rights into an agential domain in which the work of witnessing is necessarily inseparable from the nonhuman, whether in the form of code, data, or computation.

As development commenced, Triple Chaser faced a challenge:  Forensic Architecture was only able to source a small percentage of the thousands of images needed to train a machine learning algorithm to recognise the tear gas canister. They were, however, able to source detailed video footage of depleted canisters from activists, and even obtained some material fragments. Borrowing from strategies used by Microsoft, Nvidia and others, this video data could be modelled in environments built in the Unreal gaming engine, and then scripted to output thousands of canister images against backgrounds ranging from abstract patterns to simulated real-world contexts. Tagging of these natively digital objects also sidestepped the labour and error of manual tagging, allowing a training set to be swiftly built from images created with their metadata attached. Using a number of different machine learning techniques, investigators were able to train a neural network to identify Safariland tear gas canisters from a partial image, with a high degree of accuracy and with weighted probabilities. These synthetic evidence assemblages then taught the algorithm to witness.
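
A minimal sketch of that auto-labelling idea follows. It is not Forensic Architecture's pipeline (their Mtriage and Model Zoo code is on GitHub, as noted above); it only shows why a renderer that placed the object itself can emit the label alongside the image, removing manual tagging. The `render_scene` function and its outputs are hypothetical placeholders for whatever a game engine such as Unreal would export.

```python
import json
import random

def render_scene(canister_pose, background):
    """Hypothetical stand-in for a game-engine export step.

    A real renderer would return the rendered pixels plus the 2D bounding
    box it already knows, because it placed the canister in the scene itself.
    """
    bbox = [random.randint(0, 200), random.randint(0, 200), 64, 64]
    return b"<png bytes>", bbox

backgrounds = ["abstract_noise", "desert_road", "city_street"]
dataset = []

for i in range(10_000):
    background = random.choice(backgrounds)
    pose = {"yaw": random.uniform(0, 360), "distance_m": random.uniform(1, 30)}
    image, bbox = render_scene(pose, background)  # image would also be saved to disk
    # The label is generated with the image, so no human tagging is needed.
    dataset.append({
        "file": f"canister_{i:05d}.png",
        "label": "tear_gas_canister",
        "bbox": bbox,
        "background": background,
    })

with open("synthetic_annotations.json", "w") as f:
    json.dump(dataset, f, indent=2)
```

The point is the metadata: because the ground truth is known at render time, thousands of correctly labelled examples can be produced faster and more cheaply than a small team of human taggers could manage.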

Like most image recognition systems, Triple Chaser deploys a convolutional neural network, or CNN, which learns how to spatially analyse the pixels of an image. Trained on tagged data sets, CNNs slide – convolve, rather – a series of filters across the surface of an image to produce activation maps that allow the algorithm to iteratively learn about the spatial arrangements of large sets of images. These activation maps are passed from one convolution layer to the next, with various techniques applied to increase accuracy and prevent the spatial scale of the system from growing out of control. Exactly what happens within each convolutional layer remains in the algorithmic unknown: it cannot be distilled into representational form but rather eludes cognition. 
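
For readers unfamiliar with the mechanics, a minimal convolutional network of the kind described might look like the following. This is a generic PyTorch sketch, not the actual Triple Chaser model: each `Conv2d` layer slides filters across its input to produce activation maps, pooling keeps the spatial scale from growing out of control, and a final layer turns the maps into class scores.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Generic image classifier: convolutions -> activation maps -> pooling -> scores."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # filters convolved over the RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer sees larger patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        maps = self.features(x)                 # activation maps passed layer to layer
        return self.classifier(maps.flatten(1))

model = TinyCNN()
scores = model(torch.randn(1, 3, 224, 224))  # e.g. "canister" vs "no canister"
print(scores.shape)  # torch.Size([1, 2])
```

The trainable numbers live inside the filters; what any individual filter has come to respond to is exactly the part that, as the text notes, resists being distilled into representational form.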

Machine learning processes thus exhibit a kind of autonomic, affective capacity to form relations between objects and build schemas for action from the modulation and mapping of those relations. Relations between elements vary in intensity, with the process of learning both producing and identifying intensities that are autonomous from the elements themselves. Intensive relations assemble elements into new aggregations; bodies affect and are affected by other bodies. Geographer of algorithmic systems Louise Amoore writes that algorithms must be understood as ‘entities whose particular form of experimental and adventurous rationality incorporates unreason in an intractable and productive knot’.3  There is an autonomic quality to such algorithmic knowledge making, more affective than cognitive. In the context of image analysis, Anna Munster and Adrian MacKenzie call this platform seeing,4 a mode of perception that is precisely not visual because it works only via the spatial arrangement of pixels in an image, with no regard for its content or meaning. This machinic registering of relations accumulates to make legible otherwise unknown connections between sensory data, and it does so with the potential (if not intention) to make political claims: to function as a kind of witnessing of what might otherwise go undetected. 

Underpinning the project is the proposition that social media and other image platforms contain within them markers of violence that can and should be revealed. For the machine learning algorithm of Triple Chaser, the events to which it becomes responsible are themselves computational: machinic encounters with the imaged mediation of tear gas canisters launched at protesters, refugees, migrants. But their computational nature does not exclude them from witnessing. With so much of the world now either emergent within or subject to computational systems, the reverse holds true: the domain of computation and the events that compose it must be brought within the frame of witnessing. While the standing of such counter-forensic algorithms in the courtroom might – for now – demand an expert human witness to vouch for their accuracy and explain their processes, witnessing itself has already taken place long before testimony occurs in front of the law. Comparisons can be drawn to the analogue photograph, which gradually became a vital mode of witnessing and testimony, not least in contexts of war and violence. Yet despite its solidity, the photograph is an imperfect witness. Much that matters resides in what it obscures, or in what fails to enter the frame. With the photograph giving way to the digital image and the digital image to the computational algorithm, the ambit of witnessing must expand.  As power is increasingly exercised through and even produced by algorithmic systems, modes of knowledge making and contestation predicated on an ocular era must be updated. 

As Triple Chaser demonstrates, algorithmic witnessing troubles relations both between witness and evidence and between witnessing and event. This machine learning system, trained to witness via synthetic data sets, suggests that the linear temporal relation in which evidence – the photograph, the fragment of tear gas canister – is interpreted by the human witness cannot or need not hold. Through their capacities for recognition and discrimination, nonhuman agencies of the machinic system enact the witnessing that turns the trace of events into evidence. Witnessing is, in this sense, a relational diagram that makes possible the composition of relations that in turn assemble into meaningful, even aesthetic objects. If witnessing precedes both evidence and witness, then witnessing forges the witness rather than the figure of the witness granting witnessing its legitimacy and standing. 

While this processual refiguring of witnessing has ramifications for nonhuman agencies and contexts beyond the algorithmic, Forensic Architecture’s movement into this space suggests the strategic potential of machine learning systems as the anchor for an alternative politics of machine learning. While I firmly believe that scepticism towards the emancipatory and resistant potential for machine learning – and algorithmic systems more generally – is deeply warranted, there is also a strategic imperative to do more to ask how such systems can work for people rather than against them. With its tool and synthetic media database both made open source, Forensic Architecture aims to democratise the production of evidence through the proliferation of algorithmic witnessing that works on behalf of NGOs, activists and oppressed peoples, and against the techno-political state.

Notes:

  1. Pugliese, Joseph. Biopolitics of the More-Than-Human: Forensic Ecologies of Violence. Durham, NC: Duke University Press, 2020.
  2. Weizman, Eyal. Forensic Architecture: Violence at the Threshold of Detectability. New York: Zone Books, 2017.
  3. Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020.
  4. MacKenzie, Adrian, and Anna Munster. ‘Platform Seeing: Image Ensembles and Their Invisualities.’ Theory, Culture & Society 36, no. 5 (2019): 3–22. https://doi.org/10.1177/0263276419847508.

Online and imploding on their smartphones

Nov 1, 2022 by Matthew Ford

Today, the smartphone has become ‘the place where we live’.2 It is an integral part of our everyday existence. Launched in 2007, this one device now makes it possible to record events, find work, manage teams, locate ourselves on the planet, upload our experiences to social media, get a mortgage, read the newspaper, order a taxi, rent a holiday home, buy almost anything and get it delivered to our front door. The smartphone and the platforms, services and applications that form part of the mobile, connected ecosystem have redefined how we experience the world. These changes have not just affected how we think about day-to-day living. They also affect how we experience, prosecute and come to understand war.

‘I have been fighting for 17 years. I am willing to throw it all away to say to my senior leaders, I demand accountability.’
A reckoning will come for this catastrophe; military and political. For those of us who fought, it’s too much.1

Johnny Mercer, former Captain, veteran of Afghanistan and now the Member of Parliament for Plymouth Moor View

All of this has affected the Anglosphere’s armed forces in a range of unexpected and sometimes radicalising ways. Bringing local, national and transnational narratives into new conflict, the smartphone’s information ecosystem helps disaffected constituencies find each other and band together. Military, veteran and activist identities get reframed in these new spaces, creating an important location for sharing frustration and discontent. This has led to several incidents involving serving military personnel being investigated for their connections to extremist and right-wing political groups.3 

In this new ecology of war, connected technologies allow everyone the opportunity to participate, whether they are keyboard warriors or broadcasting live from the frontlines. The smartphone, for example, enables us to produce, publish and consume media from the palm of our hands, wherever we can get online. This has accelerated discussions and flattened our experiences. People draw connections between events in ways that only smart devices make possible. It has given us the opportunity to amplify our emotions and created asynchronous engagements with war and violence. Different communities record, reuse and recycle content at different times, locations and speeds.

WhatsApp, for example, is an end-to-end encrypted messenger service owned by Facebook. Instant messaging services like this are used by government ministers looking to avoid public scrutiny4  and targeteers circulating kill lists. Free to download to your smartphone, WhatsApp connects users to war and violence wherever they are in the world. Overseas, WhatsApp was in use among armed forces coordinating Reaper drone attacks in Mosul.5  American forces have been advised to download the app for operational use on their phones,6  and it has also been hacked by Israeli ‘cyber-arms dealer’, NSO Group.7 

The technologies that the military uses in targeting operations overseas are the same technologies civilians use to stage, broadcast and record political demonstrations at home. Thus, WhatsApp, Instagram and social media sites like Parler and Gab were used by supporters of President Trump to organise an insurrection and storm the Capitol Building on 6 January 2021. The crowd included veterans of the wars in Iraq and Afghanistan – one of whom, Ashli Babbitt, was shot dead by Capitol Police8 – and its goal was to stop Congress from certifying President Biden’s election victory. Recording and broadcasting events from their smartphones, the protagonists produced data that made it easy for the FBI to identify and subsequently arrest them.

Just as members of the Islamic State now maintain the memory of the State by circulating key propaganda online,9  the events in the Capitol created a digital archive for Trump supporters to look back on and invoke in their ongoing efforts to re-elect the 45th President. Like the proverbial music gig, they’d bought the T-shirt and had the smartphone photos. They had been there on that memorable day. The smartphone and the digital ecosystem it fostered have created all manner of entirely new media for war and violence to occupy. People now experience a constantly churning spectacle of opinions and perceptions that spill out and feed back into each other, irrespective of whether they are expressed overseas or at home.

As British Tory politician Johnny Mercer demonstrates, this has given us a window into the emotional tensions prompted by military defeat in Afghanistan. Calling for a military and political reckoning, Mercer cited U.S. Marine Corps Lieutenant Colonel Stuart Scheller who, in August 2021, had taken to Facebook to demand that the military and political chain of command be held to account for the decisions they had taken in relation to Afghanistan.10  Knowing that his videos would certainly damage his career, Scheller subsequently recorded a video for YouTube and declared, ‘Follow me and we will bring the whole fucking system down’.11

Like many veterans of the Global War on Terror, Mercer and Scheller were left by defeat wondering what the GWOT was all about.12  In the context of the January 6, 2021 insurrection at the Capitol Building, however, Scheller’s invocation to his audience not only reflected his emotional response to events in Kabul but also implied a call for action. Although Scheller subsequently denied it, senior officers feared that the Marine Corps colonel wanted to see an insurrection in Washington D.C. and a restoration of Donald Trump to the presidency.13  The reckoning that Mercer called for was reflected in the language used by Scheller. The political and military establishment had stabbed ordinary servicemen in the back. Something had to be done.

In these circumstances, it was inevitable that Scheller’s video would have a political effect in Washington D.C. It would also bring conspiracy theory directly into the heart of Anglo-American politics. Republican Congressman Louie Gohmert and Republican Congresswoman Marjorie Taylor Greene, for example, both spoke in support of Scheller. Both legislators are pro-Trump. Both have links to QAnon, the conspiracy theory that posits, ‘Donald Trump is waging a secret war against elite Satan-worshipping paedophiles in government, business and the media’.14  QAnon supporters were not just outside the Capitol Building. Conspiracy theory had in effect gone mainstream, brought into the heart of politics by those Congressmen and women who looked on defeat in Iraq and Afghanistan as an example of establishment politics gone wrong.

Mercer might not take QAnon seriously but, just as in the States, conspiracy theory is now a feature of British politics. In Britain’s case, former members of the Parachute Regiment and veterans of Iraq and Afghanistan have been involved in COVID-19 anti-vaccine protests. Having sought to gain entry to the old BBC studios in protest against the mainstream media propagating what they consider to be pro-vaccine propaganda, one ex-soldier declared, ‘Basically the men of our unit in our service, believe that we’re pointing the weapons in the wrong direction’.15  Here too the language of the GWOT is spun back at the politicians who directed the military to go to Iraq and Afghanistan. Recorded on a smartphone by an apparent member of the anti-vax political party Freedom Alliance, the soldier went on to say,

“This time now the tyranny is against our people and we can’t see it ’cos it’s on our home soil where it’s never been before. Because [it’s] psychological warfare not bombs, we can’t see it, because [it’s] invisible. We’ve had this experience and used these tactics in other countries to manipulate, divide and conquer and now we’re watching our own government and our own military use it against us. But the only men and women in this country that can resist against that are the ones that have the experience and the training that we use to help us [sic].16“

The smartphone has done a great deal to create the media ecosystems where people who share counter-cultural views can meet and organise. Presented as an affirmation of free speech, conspiracy theory has become the reality, not the exception. In many respects, the effects on political action are not always easy to see. There is every possibility that online echo chambers will lead to a further radicalisation of politics, where the tools and techniques applied overseas become the means by which social division is instrumentalised for political effect at home.

All of this has been amplified online through the connected technologies that both the military and the public use to organise their everyday lives. The smartphone’s digital ecosystem has imploded conventional civil-military relations, enabled disaffected veteran soldiers and officers to find each other, and facilitated access to a like-minded audience. Among friends, they now feel comfortable attacking the state in the hope of defending it. In some cases, such rhetoric has bled into and drawn upon conspiracy theory. This has animated the frustration and dysphoria experienced by many veterans now wondering why they bothered to sacrifice themselves in Iraq and Afghanistan. Whatever happens in the future, blowback from the wars in Iraq and Afghanistan has spiralled out of the information prisms of the new war ecology in unanticipated ways. As Facebook whistleblower Frances Haugen observes, the algorithms built into social media are designed to push people towards ‘extreme content’.17  This is ripe territory for political exploitation. Something politicians should weigh carefully as they call for their reckoning.

Notes:

  1. Johnny Mercer, 09:23, 27 August 2021 posted on Twitter @johnnyMercerUK, at:https://twitter.com/JohnnyMercerUK/status/1431351799303348235?s=20. Accessed 8 November 2021.
  2. Alex Hern, ‘Smartphone is now “the place where we live”, anthropologists say’, The Guardian, 10 May 2021. Available at: https://www.theguardian.com/technology/2021/may/10/smartphone-is-now-the-place-where-we-live-anthropologists-say. Accessed 18 October 2021.
  3. Sian Norris and Heidi Siegmund Cuda, ‘Fantasy of War – far right and the military’, Bylinetimes, 10 November 2021. Available at: https://bylinetimes.com/2021/11/10/the-fantasy-of-war-the-far-right-and-the-military/. Accessed 11 November 2021.
  4. Haroon Siddique, ‘Cabinet Policy obliges ministers to delete instant messages’, The Guardian, 12 October 2021. Available at: https://www.theguardian.com/politics/2021/oct/12/cabinet-policy-ministers-delete-whatsapp-messages. Accessed 9 November 2021.
  5. James Verini, ‘How the battle of Mosul was waged on WhatsApp’, The Guardian, 28 September 2019. Available at: https://www.theguardian.com/world/2019/sep/28/battle-of-mosul-waged-on-whatsapp-james-verini. Accessed 23 October 2021.
  6. Shawn Snow, Kyle Rempfer and Meghann Myers, ‘Deployed 82nd Airborne unit told to use these encrypted messaging apps on government cell phones’, Military Times, 23 January 2020. Available at: https://www.militarytimes.com/flashpoints/2020/01/23/deployed-82nd-airborne-unit-told-to-use-these-encrypted-messaging-apps-on-government-cellphones/. Blake Moore and Jan E. Tighe, ‘Insecure communications like WhatsApp are putting U.S. National Security at risk’, Nextgov, 8 December 2020. Available at: https://www.nextgov.com/ideas/2020/12/insecure-communications-whatsapp-are-putting-us-national-security-risk/170577/. Both articles accessed 23 October 2021.
  7. Stephanie Kirchgaessner, ‘How NSO became the company whose software can spy on the world’, The Guardian, 23 July 2021. Available at: https://www.theguardian.com/news/2021/jul/23/how-nso-became-the-company-whose-software-can-spy-on-the-world. Accessed 23 October 2021.
  8. Stephen Losey, ‘Woman shot and killed at Capitol was security forces airman, QAnon adherent’, Air Force Times, 7 January 2021. Available at: https://www.airforcetimes.com/news/your-air-force/2021/01/07/woman-shot-and-killed-at-capitol-was-security-forces-airman-qanon-adherent/. Accessed 30 October 2021.
  9. Charlie Winter, ‘Media Jihad: the Islamic State’s doctrine for information warfare’, The International Centre for the Study of Radicalisation and Political Violence, King’s College London, 2017. Report available at: https://icsr.info/2017/02/13/icsr-report-media-jihad-islamic-states-doctrine-information-warfare/. Accessed 17 August 2020.
  10. Stuart Scheller, ‘To the American leadership. Very respectfully, US’. 26 August 2021. Video on Facebook at: https://www.facebook.com/stuart.scheller/videos/561114034931173/?t=238. Accessed 30 October 2021.
  11. Stuart Scheller, ‘Your move’. 29 August 2021. Video on YouTube at: https://www.youtube.com/watch?v=lR7jBsR0D10&t=495s. Accessed 30 October 2021.
  12. ‘“We Never Got It. Not Even Close”: Afghanistan Veterans Reflect on 20 Years of War’. Politico Magazine, 10 September 2021. Available at: https://www.politico.com/news/magazine/2021/09/10/politico-mag-afghan-vets-roundtable-506989. Accessed 30 October 2021.
  13. Jeff Schogol, ‘Leaked documents reveal just how concerned the Marine Corps was about Lt. Col. Stuart Scheller’s call for “revolution”’, Task and Purpose, 17 October 2021. Available at: https://taskandpurpose.com/news/marine-corps-lt-col-stuart-scheller-court-martial/. Accessed 30 October 2021.
  14. Mike Wendling, ‘QAnon: What is it and where did it come from?’, BBC News, 6 January 2021. Available at: https://www.bbc.co.uk/news/53498434. Accessed 30 October 2021.
  15. The video was posted on Twitter by Katherine Denkinson at 20:23 on 9 August 2021. Available at: https://twitter.com/KDenkWrites/status/1424813677849415685?s=20. Accessed 30 October 2021.
  16. Ibid.
  17. ‘Frances Haugen says Facebook is “making hate worse”’, BBC News, 26 October 2021. Available at: https://www.bbc.co.uk/news/technology-59038506. Accessed 1 November 2021.


The circulation of power and ethics in AI, robotics, and autonomy research in Australia

Nov 1, 2022 by Sian Troath

Autonomous vehicles, drones, swarming and collaborative robotics have together recently been announced as not only a ‘critical technology’, but a ‘critical technology of initial focus’ by the Australian government – one of a shortlist of nine priority technologies identified as essential to Australia’s economic and national security.1 Australia is seen as a leader when it comes to trusted autonomous systems, with autonomy research in the defence space escalating in recent years. Robotics and autonomous systems, or RAS, have been identified as both a threat (when in the hands of adversaries) and an opportunity for Defence (when in the hands of Defence, and allies and partners).

The opportunities identified include the following: enhanced combat capability, improved efficiency, increased mass, decision superiority, reduced risk to personnel, reduced physical and cognitive loads of soldiers, improved decision making, agility, resilience and enhanced lethality.2 Key to unlocking these opportunities is solving or mitigating both the ethical and practical challenges associated with using such systems.

In terms of practical challenges, the aim is to support collaboration between Defence, industry and academia to advance technology to a point where RAS are cheap, small and many.3 In terms of ethical challenges, five facets of ethical AI for Defence have been identified: responsibility (who is responsible), governance (how AI is controlled), trust (how AI can be trusted), law (how AI can be used lawfully) and traceability (how the actions of AI are recorded).4

This work is largely being led by the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC). As the first Defence Cooperative Research Centre, launched in 2018, it aims to bring together Defence, academia and industry to tackle challenges relating to trusted autonomous systems.5 A key focus of the centre is ethics.

Two recent articles have drawn attention to the circulation of ethics, and to questions of capture, in the relationship between big technology corporations and AI research. Phan, Goldenfein, Mann and Kuch trace how particular approaches to ethics circulate across Big Tech, universities and other industries.6 They argue that ‘Big Tech has transformed ethics into a form of capital – a transactional object external to the organisation, one of the many “things” contemporary capitalists must tame and procure’.7 Whittaker explores the influence of the tech industry on AI research, arguing that reliance on large data sets, computational processing power and data storage concentrates power in the hands of the small number of large tech companies that hold such resources.8

Both of these pieces provide a useful springboard for thinking about defence AI research in Australia. It is the push and pull between profit-based industry incentives, Defence’s desire for military advantage, and academia’s growing reliance on external funding that is shaping the development of robotics and autonomous systems – not only the technological systems themselves, but also ideas about the ethics and practicality of their use. Power, ethics and narratives all circulate between these spaces. These interconnections, exacerbated by dual-use opportunities in which technologies can be used or adapted for both civilian and military purposes, require examination.

AI, robotics and autonomous systems research for Defence purposes does not take place in a vacuum – it is part of a broader ecosystem of power and dependency. Kate Crawford highlights these very power dynamics, in the American context, in her book Atlas of AI. As she outlines, in seeking to enact the Third Offset strategy and utilise AI and autonomy for military advantage, ‘the Department of Defense would need gigantic extractive infrastructures’ – and the only place where both the required human and technological resources can be accessed is the tech industry.9

The power flows in both directions, however, with perceptions of strategic competition driving a desire for technological superiority and creating a complex relationship between the state and industry. Indeed, Crawford argues, the dual-use nature of AI technologies has led the US to adopt civilian-military collaboration as ‘an explicit strategy: to seek national control and international dominance of AI in order to secure military and corporate advantage’.10

The push for this kind of dual-use approach is also evident in Australia. The TASDCRC was set up with investment from the Queensland state government, which has made clear that it expects the benefits of defence-focused research on trusted autonomous systems to bolster civilian industries such as agriculture, mining and environmental management.11 In the reverse direction, Kierin Joyce, Chief Engineer for the Royal Australian Air Force Remotely Piloted Aircraft Systems/Unmanned Aerial Systems, argues that Australia’s world-leading autonomy research in the mining and resources sector can be adapted to a defence context.12 This all takes place in the context of an ongoing shift in approach to the defence industry, with the government aiming to enhance connections between Defence, industry and academia.13

The growing push for strengthening collaboration between Defence, industry and academia takes us into a second thread: how the dynamics of power, dependency and the circulation of ethics emanate from Defence itself. Whittaker lays out the cost of the capture of AI research by industry by providing the historical context of the Cold War dominance of the US military over scientific research.14 While she is right to focus on tech companies as the dominant source of this dynamic in the present day, it is important not to discount the ongoing influence of Defence and the messy relationship between civilian and military research – particularly when it comes to AI, autonomy and robotics.

In Australia, both industry and academia vie for defence research funding, with academia also focused on attracting industry funding. On the university side, COVID-19 has exacerbated pre-existing crises in the neoliberal university – leaving research and jobs increasingly reliant on accessing external funding.15 These dynamics of power and dependence influence how research develops. In Whittaker’s words,

This doesn’t mean that researchers within these domains are compromised. Neither does it mean that there aren’t research directions that can elude such dependencies. It does mean, however, that the questions and incentives that animate the field are not always individual researchers’ to decide. And that the terms of the field—including which questions are deemed worth answering, and which answers will result in grants, awards, and tenure—are inordinately shaped by the corporate turn to resource-intensive AI, and the tech-industry incentives propelling it.16

Phan et al. also highlight these difficulties regarding both government and industry funding, pointing out that even the mere selection of what will get funded, influenced by various interests, creates ‘a set of dilemmas and paradoxes’ for researchers.17

It is interesting to note, then, the funding choices that direct money to ethics in AI for Defence and to trusted autonomous systems, taking us back to the circulation of ethics in the messy civilian-military relationship. Phan et al. describe the corporate logic at play, which views ethics as a problem to solve, as something to acquire to bolster legitimacy. There is a similar logic at play in the Defence arena: ethics are seen as something to acquire, a problem to solve, in order to succeed in the quest for military advantage. The TASDCRC commenced a $9 million, six-year Programme on the Ethics and Law of Trusted Autonomous Systems in 2019.18 Further to this is the establishment of the TASDCRC Ethics Uplift Program, which aims, among other things, to ‘build enduring ethical capacity in Australian industry and universities to service Australian RAS-AI’ and to ‘educate in how to build ethical and legal autonomous systems’.19 TASDCRC CEO Jason Scholz has said that ‘ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions’.20 These efforts all rely on the industry-academia-military relationship at their core.

Meanwhile, Defence has itself emphasised that ‘the ethics of AI and autonomous systems is an ongoing priority’.21 Air Vice-Marshal Cath Roberts has spoken of the need to ‘ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed’, given the vital role AI and autonomy will play for air power.22 A group of researchers from or associated with the TASDCRC go further still, arguing that autonomous weapons will be ‘ethical weapons’, able to make war ‘safer’.23 Again, ethics are viewed as a problem to be solved. In the corporate space, ethics are to be solved for profit. In the defence space, ethics are to be solved for military advantage and, relatedly, to facilitate acceptance and trust of such technologies by both personnel and the public.

The approach to ‘trust’ is the same: a problem to be solved. If an autonomous system can be trusted, then it can be used – ethically as well as legally – in the pursuit of military advantage. Autonomous systems need to be trusted by the personnel who will operate them so that they are willing to use them,24 and they need to be trusted by the public so that Defence retains its licence to operate. It is an interesting Australian quirk that such systems are not lethal autonomous systems or even merely autonomous systems, but rather trusted autonomous systems.

None of this is to say that research on ethics or on trust in technology is inherently suspect. Rather, it is to point out that the terms ‘ethical’ and ‘trusted’ in AI, robotics and autonomy research are being used for political purposes – sometimes intentionally, sometimes unintentionally. As someone who has conducted research on trust in technology under a Defence contract, I was once in a room where someone asked, ‘Can’t we use trust as a weapon?’ Militarism is quite the drug.

These are threads which still need further unravelling. The circulation of ethics and power at the intersection of academia, industry and defence when it comes to AI, robotics and autonomy research in Australia demands further exploration. It is the intertwining of corporate interests seeking profit, Defence motivations for military superiority, and academia’s desperation for external funding that is shaping AI, robotics, and autonomous systems research in Australia. The narratives being deployed to achieve these aims require careful consideration.

Notes:

  1. Critical Technologies Policy Coordination Office, Australian Government, ‘The Action Plan for Critical Technologies’, 2021. See also Critical Technologies Policy Coordination Office, Australian Government, ‘Blueprint for Critical Technologies’, 2021.
  2. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 8; Australian Army, ‘Robotic & Autonomous Systems Strategy’, 2018, p. 6; Royal Australian Navy, ‘RAS-AI Strategy 2040: Warfare Innovation Navy’, 2020, p. 14.
  3. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 20.
  4. Kate Devitt, Michael Gan, Jason Scholz, and Robert Bolia, ‘A Method for Ethical AI in Defence’, Australian Government Department of Defence, 2020, p. ii.
  5. Trusted Autonomous Systems, ‘About Us’, n.d., https://tasdcrc.com.au/about-us/.
  6. Thao Phan, Jake Goldenfein, Monique Mann, and Declan Kuch, ‘Economies of Virtue: The Circulation of “Ethics” in Big Tech’, Science as Culture, 2021, pp. 1-15.
  7. Phan et al., p. 1.
  8. Meredith Whittaker, ‘The Steep Cost of Capture’, Interactions XXVII.6 (November-December), 2021, pp. 51-55.
  9. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press: New Haven, 2021, p. 188.
  10. Crawford, p. 187.
  11. Queensland Government, ‘Queensland Drones Strategy’, 2019, p. 29.
  12. Robbin Laird, ‘The Quest for Next Generation Autonomous Systems: Impact on Reshaping Australian Defence Forces’, 25 May 2021, https://defense.info/re-shaping-defense-security/2021/05/the-quest-for-next-generation-autonomous-systems-impact-on-reshaping-australian-defence-forces/. 
  13. Australian Department of Defence, ‘2016 Defence Industry Policy Statement’, 2016; Melissa Price, ‘Op-Ed: Five Pillars Approach to Support Defence Industry’, 24 September 2020, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/op-ed-five-pillars-approach-support-defence-industry; Melissa Price, ‘Defence Innovation System Goes Under Microscope’, 3 September 2021, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/defence-innovation-system-goes-under-microscope.
  14. Whittaker, p. 52.
  15. Phan et al., p. 8.
  16. Whittaker, p. 52.
  17. Phan et al., p. 2.
  18. Trusted Autonomous Systems, ‘TASDCRC Activity on Ethics and Law of Trusted Autonomous Systems’, 12 February 2021, https://tasdcrc.com.au/tasdcrc-activity-on-ethics-and-law-of-trusted-autonomous-systems/. 
  19. Ibid.
  20. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/. 
  21. Australian Government Department of Defence, ‘Defence Releases Report on Ethical Use of AI’, 16 February 2021, https://news.defence.gov.au/media/media-releases/defence-releases-report-ethical-use-ai. 
  22. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/.
  23. Jason B. Scholz, Dale A. Lambert, Robert S. Bolia, and Jai Galliott, ‘Ethical Weapons: A Case for AI in Weapons’, in Steven C. Roach and Amy E. Eckert (eds.), Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems, State University of New York Press: Albany, 2020, pp. 181-214.
  24. Jai Galliott and Austin Wyatt, ‘Risks and Benefits of Autonomous Weapon Systems: Perceptions Among Future Australian Defence Force Officers’, Journal of Indo-Pacific Affairs, Winter 2020, pp. 17-34; Jai Galliott and Austin Wyatt, ‘Considering the Importance of Autonomous Weapon System Design Factors to Future Military Leaders’, Australian Journal of International Affairs, 2021, pp. 1-26.
