Circus Bazaar Magazine

Penned from the crooked timber of humanity

On Data, Devices and Daemons

Nov 1, 2022 by Lesley Seebeck

The world changed in 2007. It’s hard to imagine now, but there was a time before smartphones. The breakthrough device, of course, was the iPhone. We’d had constant availability before then, whether through Blackberries, HTC devices or Nokias. But the iPhone brought with it a platform that permitted the development of a range of other tools—apps—along with new means of access. Others, naturally, copied it and built new business models that have changed our societies. The ‘dark side’ of such availability, and of apps tailored to our immediate needs, was the quantification of the human. Through our use of apps into which we add information about ourselves, on devices with geolocation, we generate data that is in turn collected by the companies that build the platforms and the hardware. Consequently, each of us is reduced to data that exposes our physical presence, our social connections, the state of our health, our patterns of behaviour, and our very thoughts and aspirations to a range of commercial interests—some apparent, many less so—and beyond them, to a range of other organisations, including criminal groups and governments.

Our devices devour our attention, too.  We are constantly distracted by the dopamine hits from checking email, checking threads of discussions on messaging apps, or the latest photos from our friends or ‘influencers’.  It’s not unlike the constant attention one must pay a particularly demanding toddler, but it also shapes our behaviour.  As such, and precisely because it is as untameable as a recalcitrant toddler, it can prove life-threatening rather than life-enhancing.1  

The idea that the human can be separated and divorced entirely from technology is unrealistic.  Technology is fundamentally a human artefact; its use reflects human values, human motives and human priorities.  So, in the modern world, rather than seeking to treat technology, data and devices as something separate—and easily separable—from the human, perhaps a better way is to think of them as intrinsically tied to our concept of the individual, of the self.  

Doing so would allow for a more sophisticated discussion about personal rights and freedoms, the right to sanctuary, and the ability to express oneself and contest ideas without fear or favour—a discussion no longer based on now-outdated concepts of property, artefacts and carriage.  It may also help us build technology better suited to the limits of human cognition. Further, it’s evident that in a digital democracy we need better ways to consider what a citizen is, with regard to their rights, roles and responsibilities, including their relationship with governments and commercial entities. There are few appropriate concepts ready to hand.  But fiction offers some help.  For example, we could think of our relationship with our data and our devices by drawing on Philip Pullman’s His Dark Materials trilogy.2

In those books—the first dramatised as a movie,3 and subsequently launched as an HBO/BBC1 series4 —each human has their own daemon.  These daemons are the external representation of their person’s inner self.  Daemons may manifest in different physical forms until a person reaches their maturity, whereupon they settle on one.  To touch another person’s daemon is taboo: even in battle soldiers will avoid touching the daemons of others.  But daemons themselves can interact with, or attack, another daemon.  

Separation from a daemon causes discomfort over short distances, of a few metres or so, increasing to real physical pain over longer distances.  Excision, the complete cutting of the link between a person and their daemon, is one of the most evil things that can be done.  After excision, the person is lesser than before, often left without personality, while the daemon is left a ghost, constantly seeking but unable to receive comfort.

There are some parallels with our modern world.  Most of us have a digital shadow—not quite a twin, but part of who we are—that is located in the internet and on our devices.  These shadows are a combination of data, information, our virtual expressions and online behaviours, and increasingly our own peculiar twists on algorithms.  Our means of accessing, transforming and interacting with that digital shadow may, like daemons before their human partners reach maturity, change shape, from iPhones to Android devices.  But the essential character of our digital shadow, like daemons, remains the same.  

In normal human company, it’s polite to ask before touching another’s device—a rare request—which reflects norms we have developed over time and echoes Pullman’s taboo about touching another’s daemon.  But our current reality is that in cyberspace, we are forced to sign inscrutable, choice-free terms and conditions, assigning our personal data—representations of ourselves—and our rights to companies.  Governments, too, encroach into the same space—exerting control over personal data and digital rights, even our identity—in the name of safety and security.  

And we have little choice: in a digital society and economy we cannot live effectively without that digital presence and persona.  It’s how we access banking, government services, health information, friends and family, emergency warnings, and so on.  COVID-19 has exacerbated that dependence.  Lockdowns and distancing meant working from home and using internet services, primarily for white-collar workers.  This has led to the further intrusion of work time, school time, and company systems into personal and family lives.  As we move out of lockdowns, we are now required to use our devices to sign in, to gain access, to justify our presence, wherever we go, leaving digital trails for governments, and others, to follow.

Further, separation from devices can cause anxiety, distress and impaired cognition;5 the loss of a device can mean losing aspects of ourselves—whatever we have entrusted to or recorded on that particular device.  This is also part of the concern around cybersecurity: the loss of operational devices and the corruption of personal and organisational data, at scale, may well cause disruption at the societal level.

We have yet to build the norms that protect our identity, our data, our aspirations, and even our own algorithms in the digital world.  These live, like daemons, in both the real world and a netherspace—an equivalent to the supranational cloud—that’s outside the normal experience of people or institutions.  Nonetheless, we need to find a way to recognise our digital shadows, and enable those parts of our being—our personal data, applied algorithms, our thoughts, our social relationships and interactions, and the technology housing them—to be recognised as our own and protected irrevocably.    

Sure, evoking daemons is an artifice.  But doing so may help coalesce some of the debate around data, devices, and the individual’s rights and freedoms on the internet, giving us a means through which we can coherently manage our relationships with the tech platforms, data collectors, algorithm developers and governments, as fully enfranchised citizens.  It may also help us define and seek personal sanctuary—the right to privacy, exclusion from surveillance, a sphere of personal safety in the digital world, and our fundamental right to self-definition and our own identity, whether in the physical or the digital world.

  1. John Spencer, ‘The Perils of Distracted Fighting,’ Wired, October 9, 2019, https://www.wired.com/story/the-dangers-of-distracted-fighting/.
  2. See https://www.philip-pullman.com/hdm
  3. https://www.imdb.com/title/tt0385752/
  4. https://www.imdb.com/title/tt5607976/
  5. Amit Chowdhry, ‘iPhone Separation Anxiety Hinders Cognitive Abilities, Says Study,’ Forbes, January 13, 2015, https://www.forbes.com/sites/amitchowdhry/2015/01/13/iphone-separation-anxiety/.

Witnessing Algorithms and the Paradox of Synthetic Media

Nov 1, 2022 by Michael Richardson

Synthetic media are everywhere. Digital images and objects that appear to index something in the world but do nothing of the sort have their roots in video games and online worlds like Second Life. However, with the growing appetite for niche machine learning training sets and artificial environments for testing autonomous machines, synthetic media are increasingly central to the development of algorithmic systems that make meaningful decisions or undertake actions in physical environments. Microsoft AirSim is a prime example of the latter, an environment created in Epic’s Unreal Engine that can be used to test autonomous vehicles, drones and other devices that depend on computer vision for navigation. Artificial environments are useful testing grounds because they are so precisely manipulable: trees can be bent to a specific wind factor, light adjusted, surface resistance altered. They are also faster and cheaper places to test and refine navigation software prior to expensive material prototyping and real-world testing. In machine learning, building synthetic training sets is an established practice. Synthetic media are particularly valuable in contexts such as armed conflict, where images might be too few in number to produce a large enough corpus and too classified to be released to either digital piece workers for tagging or private sector developers to train algorithms.

But what happens when synthetic media are marshalled to do the activist work of witnessing state and corporate violence? What are we to make of the proposition that truths about the world might be produced via algorithms trained almost exclusively with synthetic data? This essay sketches answers to these questions through an engagement with Triple Chaser, an investigative and aesthetic project from the UK-based research agency Forensic Architecture. Founded in 2010 by architect and academic Eyal Weizman and located at Goldsmiths, Forensic Architecture pioneers investigative techniques using spatial, architectural, and situated methods. Using aesthetic practice to produce actionable forensic evidence, their work appears in galleries, court rooms, and communities. In recent years, they have begun to use machine learning and synthetic media to overcome limited publicly available data and to multiply by several orders of magnitude the effectiveness of images collected by activists. My contention in this essay is that these techniques show how algorithms can do the work of witnessing: registering meaningful events to produce knowledge founded on claims of truth and significance.

Presented at the 2019 Whitney Biennial in New York, Triple Chaser combines photographic images and video with synthetic media to develop a dataset for a deep learning neural network able to recognise tear gas canisters used against civilians around the world. It responds to the controversy that engulfed the Biennial following revelations that tear gas manufactured by Safariland, a company owned by Whitney trustee Warren B. Kanders, was used against protestors at the US-Mexican border. Public demonstrations and artist protests erupted, leading to significant negative press coverage across 2018 and 2019. Rather than withdraw, Forensic Architecture submitted an investigative piece that sought to demonstrate the potential for machine learning to function as an activist tool. 

Produced in concert with Praxis Films, run by the artist and filmmaker Laura Poitras, Triple Chaser was presented as an 11-minute video installation.  Framed by a placard explaining the controversy and Forensic Architecture’s decision to remain in the exhibition, the installation drew viewers into a severe, dark room to watch a tightly focused account of Safariland, the problem of identifying tear gas manufacturers, the technical processes employed by the research agency, and its further applications.  Despite initial intransigence, the withdrawal of eight artists in July 2019 pushed Kanders to resign as vice chairman of the Museum and, later, announce that Safariland would sell off the chemicals division that produced tear gas and other anti-dissent weapons.  Meanwhile, Forensic Architecture began to make its code and image sets available for open-source download while applying the same techniques to other cases, uploading its Mtriage tool and Model Zoo synthetic media database to the code repository GitHub.  A truth-seeking tool trained on synthetic data, Triple Chaser reveals how witnessing can occur in and through nonhuman agencies, alongside and even in place of humans.

In keeping with the established ethos of Forensic Architecture, Triple Chaser demonstrates how forensics – a practice heavily associated with policing – can be turned against the very state agencies that typically deploy its gaze. As the cultural studies scholar Joseph Pugliese points out, ‘[E]mbedded in the concept of forensic is a combination of rhetorical, performative, and narratological techniques’1  that can be deployed outside courts of law. For Weizman, the forum of forensics is critical: it brings evidence into the domain of contestation in which politics happens. In his agency’s counter-forensic investigation into Safariland, tear gas deployed by police and security agencies becomes the subject of interrogation and re-presentation to the public. In this making public, distinctions and overlaps can be traced between different modes of knowledge making and address: the production of evidence, the speaking of testimony, the witnessing of the audience. But how might we understand the role of the machine learning algorithm itself? And what are we to make of this synthetic evidence?

Weizman describes the practice of forensic architecture as composing ‘evidence assemblages’ from ‘different structures, infrastructures, objects, environments, actors and incidents’.2  There is an inherent tension between testimony and evidence that forensics as a resistant and activist practice seeks to harness by making the material speak in its own terms. As a methodology, forensic architecture seeks a kind of ‘synthesis between testimony and evidence’ that takes up the lessons of the forensic turn in human rights investigation to perceive testimony itself as a material practice as well as a linguistic one. Barely detectable traces of violence can be marshalled through the forensic process to become material witnesses, evidentiary entities. But evidence cannot speak for itself: it depends on the human witness. Evidence and testimony are closely linked notions, not least because both demarcate an object: speech spoken, matter marked. Testimony can, of course, enter into evidence. But I think something more fundamental is at work in Triple Chaser. It doesn’t simply register or represent: it is operational, generative of relations between objects in the world and the parameters of its data. Its technical assemblage precedes both evidence and testimony. It engages in a witnessing that is, I think, nonhuman. Triple Chaser brings the registering of violations of human rights into an agential domain in which the work of witnessing is necessarily inseparable from the nonhuman, whether in the form of code, data, or computation.

As development commenced, Triple Chaser faced a challenge:  Forensic Architecture was only able to source a small percentage of the thousands of images needed to train a machine learning algorithm to recognise the tear gas canister. They were, however, able to source detailed video footage of depleted canisters from activists, and even obtained some material fragments. Borrowing from strategies used by Microsoft, Nvidia and others, this video data could be modelled in environments built in the Unreal gaming engine, and then scripted to output thousands of canister images against backgrounds ranging from abstract patterns to simulated real-world contexts. Tagging of these natively digital objects also sidestepped the labour and error of manual tagging, allowing a training set to be swiftly built from images created with their metadata attached. Using a number of different machine learning techniques, investigators were able to train a neural network to identify Safariland tear gas canisters from a partial image, with a high degree of accuracy and with weighted probabilities. These synthetic evidence assemblages then taught the algorithm to witness.
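
To make the mechanics concrete, here is a minimal, purely illustrative sketch of synthetic training data with automatically generated labels. It is a toy stand-in, not Forensic Architecture’s Unreal Engine pipeline: the real workflow renders photorealistic scenes, but the principle is the same, in that the script places the object itself, so every image arrives with its class and bounding-box annotation already attached. All names and values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def render_synthetic_example(size: int = 64):
    """Composite a crude stand-in 'canister' onto a random background and
    return the image together with its automatically generated label."""
    image = rng.integers(0, 256, (size, size, 3), dtype=np.uint8)  # noisy synthetic background
    h = int(rng.integers(12, 24))                                  # random object height
    w = int(rng.integers(6, 12))                                   # random object width
    y = int(rng.integers(0, size - h))
    x = int(rng.integers(0, size - w))
    image[y:y + h, x:x + w] = (170, 170, 60)                       # flat-coloured rectangle as the 'canister'
    # The label costs nothing to produce: the script knows where it put the object.
    label = {"class": "canister", "bbox": (x, y, w, h)}
    return image, label

# A thousand tagged examples generated in seconds rather than hand-annotated.
dataset = [render_synthetic_example() for _ in range(1000)]
```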

Like most image recognition systems, Triple Chaser deploys a convolutional neural network, or CNN, which learns how to spatially analyse the pixels of an image. Trained on tagged data sets, CNNs slide – convolve, rather – a series of filters across the surface of an image to produce activation maps that allow the algorithm to iteratively learn about the spatial arrangements of large sets of images. These activation maps are passed from one convolution layer to the next, with various techniques applied to increase accuracy and prevent the spatial scale of the system from growing out of control. Exactly what happens within each convolutional layer remains in the algorithmic unknown: it cannot be distilled into representational form but rather eludes cognition. 
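
For readers unfamiliar with the architecture, the sketch below shows the basic shape of a convolutional classifier in PyTorch: stacked convolution layers slide learned filters across the image to produce activation maps, pooling keeps the spatial scale under control, and a final layer emits weighted probabilities over classes. It is a minimal illustration under assumed input sizes and class counts, not the network Forensic Architecture actually trained.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A deliberately small convolutional neural network for illustration."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Each Conv2d slides (convolves) learned filters across the image,
            # producing activation maps that encode local spatial patterns.
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # downsampling stops the spatial scale growing out of control
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # assumes 64x64 RGB inputs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyCNN()
dummy_batch = torch.randn(4, 3, 64, 64)             # four synthetic 64x64 RGB images
probabilities = model(dummy_batch).softmax(dim=1)   # weighted probabilities per class
print(probabilities)
```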

Machine learning processes thus exhibit a kind of autonomic, affective capacity to form relations between objects and build schemas for action from the modulation and mapping of those relations. Relations between elements vary in intensity, with the process of learning both producing and identifying intensities that are autonomous from the elements themselves. Intensive relations assemble elements into new aggregations; bodies affect and are affected by other bodies. Geographer of algorithmic systems Louise Amoore writes that algorithms must be understood as ‘entities whose particular form of experimental and adventurous rationality incorporates unreason in an intractable and productive knot’.3  There is an autonomic quality to such algorithmic knowledge making, more affective than cognitive. In the context of image analysis, Anna Munster and Adrian MacKenzie call this platform seeing,4 a mode of perception that is precisely not visual because it works only via the spatial arrangement of pixels in an image, with no regard for its content or meaning. This machinic registering of relations accumulates to make legible otherwise unknown connections between sensory data, and it does so with the potential (if not intention) to make political claims: to function as a kind of witnessing of what might otherwise go undetected. 

Underpinning the project is the proposition that social media and other image platforms contain within them markers of violence that can and should be revealed. For the machine learning algorithm of Triple Chaser, the events to which it becomes responsible are themselves computational: machinic encounters with the imaged mediation of tear gas canisters launched at protesters, refugees, migrants. But their computational nature does not exclude them from witnessing. With so much of the world now either emergent within or subject to computational systems, the reverse holds true: the domain of computation and the events that compose it must be brought within the frame of witnessing. While the standing of such counter-forensic algorithms in the courtroom might – for now – demand an expert human witness to vouch for their accuracy and explain their processes, witnessing itself has already taken place long before testimony occurs in front of the law. Comparisons can be drawn to the analogue photograph, which gradually became a vital mode of witnessing and testimony, not least in contexts of war and violence. Yet despite its solidity, the photograph is an imperfect witness. Much that matters resides in what it obscures, or in what fails to enter the frame. With the photograph giving way to the digital image and the digital image to the computational algorithm, the ambit of witnessing must expand.  As power is increasingly exercised through and even produced by algorithmic systems, modes of knowledge making and contestation predicated on an ocular era must be updated. 

As Triple Chaser demonstrates, algorithmic witnessing troubles relations both between witness and evidence and between witnessing and event. This machine learning system, trained to witness via synthetic data sets, suggests that the linear temporal relation in which evidence – the photograph, the fragment of tear gas canister – is interpreted by the human witness cannot or need not hold. Through their capacities for recognition and discrimination, nonhuman agencies of the machinic system enact the witnessing that turns the trace of events into evidence. Witnessing is, in this sense, a relational diagram that makes possible the composition of relations that in turn assemble into meaningful, even aesthetic objects. If witnessing precedes both evidence and witness, then witnessing forges the witness rather than the figure of the witness granting witnessing its legitimacy and standing. 

While this processual refiguring of witnessing has ramifications for nonhuman agencies and contexts beyond the algorithmic, Forensic Architecture’s movement into this space suggests the strategic potential of machine learning systems as the anchor for an alternative politics of machine learning. While I firmly believe that scepticism towards the emancipatory and resistant potential for machine learning – and algorithmic systems more generally – is deeply warranted, there is also a strategic imperative to do more to ask how such systems can work for people rather than against them. With its tool and synthetic media database both made open source, Forensic Architecture aims to democratise the production of evidence through the proliferation of algorithmic witnessing that works on behalf of NGOs, activists and oppressed peoples, and against the techno-political state.

Notes:

  1. Pugliese, Joseph. Biopolitics of the More-Than-Human: Forensic Ecologies of Violence. Durham, NC: Duke University Press, 2020.
  2. Weizman, Eyal. Forensic Architecture: Violence at the Threshold of Detectability. New York: Zone Books, 2017.
  3. Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020.
  4. MacKenzie, Adrian and Anna Munster. ‘Platform Seeing: Image Ensembles and Their Invisualities.’ Theory, Culture & Society 36, no. 5 (2019): 3-22. doi:10.1177/0263276419847508.

Online and imploding on their smartphones

Nov 1, 2022 by Matthew Ford

Today, the smartphone has become ‘the place where we live’.2 It is an integral part of our everyday existence. Launched in 2007, this one device now makes it possible to record events, find work, manage teams, locate ourselves on the planet, upload our experiences to social media, get a mortgage, read the newspaper, order a taxi, rent a holiday home, buy almost anything and get it delivered to our front door. The smartphone and the platforms, services and applications that form part of the mobile, connected ecosystem have redefined how we experience the world. These changes have not just affected how we think about day-to-day living. They also affect how we experience, prosecute and come to understand war.

‘I have been fighting for 17 years. I am willing to throw it all away to say to my senior leaders, I demand accountability.’
A reckoning will come for this catastrophe; military and political. For those of us who fought, it’s too much.1

Johnny Mercer, former Captain, veteran of Afghanistan and now the Member of Parliament for Plymouth Moor View

All of this has affected the Anglosphere’s armed forces in a range of unexpected and sometimes radicalising ways. Bringing local, national and transnational narratives into new conflict, the smartphone’s information ecosystem helps disaffected constituencies find each other and band together. Military, veteran and activist identities get reframed in these new spaces, creating an important location for sharing frustration and discontent. This has led to several incidents involving serving military personnel being investigated for their connections to extremist and right-wing political groups.3 

In this new ecology of war, connected technologies allow everyone the opportunity to participate, whether they are keyboard warriors or broadcasting live from the frontlines. The smartphone, for example, enables us to produce, publish and consume media from the palm of our hands, wherever we can get online. This has accelerated discussions and flattened our experiences. People draw connections between events in ways that only smart devices make possible. It has given us the opportunity to amplify our emotions and created asynchronous engagements with war and violence. Different communities record, reuse and recycle content at different times, locations and speeds.

WhatsApp, for example, is an end-to-end encrypted messenger service owned by Facebook. Instant messaging services like this are used by government ministers looking to avoid public scrutiny4  and targeteers circulating kill lists. Free to download to your smartphone, WhatsApp connects users to war and violence wherever they are in the world. Overseas, WhatsApp was in use among armed forces coordinating Reaper drone attacks in Mosul.5  American forces have been advised to download the app for operational use on their phones,6  and it has also been hacked by Israeli ‘cyber-arms dealer’, NSO Group.7 

The technologies that the military uses in targeting operations overseas are the same technologies civilians use to stage, broadcast and record political demonstrations at home. Thus, WhatsApp, Instagram and social media sites like Parler and Gab were used by supporters of President Trump to organise an insurrection and storm the Capitol Building on 6 January 2021. The crowd included veterans from the wars in Iraq and Afghanistan – one of whom, Ashli Babbitt, was shot dead by Capitol Police8  – and its goal was to stop Congress from certifying President Biden’s election victory. By recording and broadcasting events from their smartphones, the protagonists produced data that made it easy for the FBI to identify and subsequently arrest them.

Just as members of the Islamic State now maintain the memory of the State by circulating key propaganda online,9  the events in the Capitol created a digital archive for Trump supporters to look back on and invoke in their ongoing efforts to re-elect the 45th President. Like the proverbial music gig, they’d bought the T-shirt and had the smartphone photos. They had been there on that memorable day. The smartphone and the digital ecosystem it fostered have created any number of entirely new media for war and violence to occupy. People now experience a constantly churning spectacle of opinions and perceptions that spill out and feed back into each other, irrespective of whether they are expressed overseas or at home.

As British Tory politician Johnny Mercer demonstrates, this has given us a window into the emotional tensions prompted by military defeat in Afghanistan. Calling for a military and political reckoning, Mercer cited U.S. Marine Corps Lieutenant Colonel Stuart Scheller who, in August 2021, had taken to Facebook to demand that the military and political chain of command be held to account for the decisions they had taken in relation to Afghanistan.10  Knowing that his videos would certainly damage his career, Scheller subsequently recorded a video for YouTube and declared, ‘Follow me and we will bring the whole fucking system down’.11

Defeat left Mercer and Scheller, like many veterans of the Global War on Terror, wondering what the GWOT was all about.12  In the context of the January 6, 2021 insurrection at the Capitol Building, however, Scheller’s invocation to his audience not only reflected his emotional response to events in Kabul but also implied a call for action. Although Scheller subsequently denied it, senior officers feared that the Marine Corps lieutenant colonel wanted to see an insurrection in Washington D.C. and a restoration of Donald Trump to the presidency.13  The reckoning that Mercer called for was reflected in the language used by Scheller. The political and military establishment had stabbed ordinary servicemen in the back. Something had to be done.

In these circumstances, it was inevitable that Scheller’s video would have a political effect in Washington D.C. It would also bring conspiracy theory directly into the heart of Anglo-American politics. Republican Congressman Louie Gohmert and Republican Congresswoman Marjorie Taylor Greene, for example, both spoke in support of Scheller. Both legislators are pro-Trump. Both have links to QAnon, the conspiracy theory that posits that ‘Donald Trump is waging a secret war against elite Satan-worshipping paedophiles in government, business and the media’.14  QAnon supporters were not just outside the Capitol Building. Conspiracy theory had in effect gone mainstream, brought into the heart of politics by those Congressmen and women who looked on defeat in Iraq and Afghanistan as an example of establishment politics gone wrong.

Mercer might not take QAnon seriously but, just as in the States, conspiracy theory is now a feature of British politics. In Britain’s case, former members of the Parachute Regiment and veterans from Iraq and Afghanistan have been involved in COVID-19 anti-vaccine protests. Having sought to gain entry to the old BBC studios in protest against the mainstream media propagating what they consider to be pro-vaccine propaganda, one ex-soldier declared, ‘Basically the men of our unit in our service, believe that we’re pointing the weapons in the wrong direction’.15  Here too the language of the GWOT is spun back at the politicians that directed the military to go to Iraq and Afghanistan. Recorded on a smartphone by an apparent member of the anti-vax political party Freedom Alliance, the soldier went on to say,

‘This time now the tyranny is against our people and we can’t see it ’cos it’s on our home soil where it’s never been before. Because [it’s] psychological warfare not bombs, we can’t see it, because [it’s] invisible. We’ve had this experience and used these tactics in other countries to manipulate, divide and conquer and now we’re watching our own government and our own military use it against us. But the only men and women in this country that can resist against that are the ones that have the experience and the training that we use to help us [sic].’16

The smartphone has done a great deal to create the media ecosystems where people who share counter-cultural views can meet and organise. Presented as an affirmation of free speech, conspiracy theory has become the reality, not the exception. In many respects, the effects on political action are not always easy to see. There is every possibility that online echo chambers will lead to a further radicalisation of politics, where the tools and techniques applied overseas become the means by which social division is instrumentalised for political effect at home.

All of this has been amplified online through the connected technologies that both the military and the public use to organise their everyday lives The smartphone’s digital ecosystem has imploded conventional civil-military relations, enabled disaffected veteran soldiers and officers to find each other, and facilitated access to a like-minded audience. Among friends, they now feel comfortable attacking the state in the hope of defending it. In some cases, such rhetoric has bled into and drawn upon conspiracy theory. This has animated the frustration and dysphoria experienced by many veterans now wondering why they bothered to sacrifice themselves in Iraq and Afghanistan. Whatever happens in the future, blowback from the wars in Iraq and Afghanistan has spiralled out of the information prisms of the new war ecology in unanticipated ways. As Facebook whistleblower Frances Haugen observes, the algorithms built into social media are designed to push people towards ‘extreme content’.17  This is ripe territory for political exploitation. Something politicians should weigh carefully as they call for their reckoning.

Notes:

  1. Johnny Mercer, 09:23, 27 August 2021, posted on Twitter @JohnnyMercerUK, at: https://twitter.com/JohnnyMercerUK/status/1431351799303348235?s=20. Accessed 8 November 2021.
  2. Alex Hern, ‘Smartphone is now “the place where we live”, anthropologists say’, The Guardian, 10 May 2021. Available at: https://www.theguardian.com/technology/2021/may/10/smartphone-is-now-the-place-where-we-live-anthropologists-say. Accessed 18 October 2021.
  3. Sian Norris and Heidi Siegmund Cuda, ‘Fantasy of War – far right and the military’, Bylinetimes, 10 November 2021. Available at: https://bylinetimes.com/2021/11/10/the-fantasy-of-war-the-far-right-and-the-military/. Accessed 11 November 2021.
  4. Haroon Siddique, ‘Cabinet Policy obliges ministers to delete instant messages’, The Guardian, 12 October 2021. Available at: https://www.theguardian.com/politics/2021/oct/12/cabinet-policy-ministers-delete-whatsapp-messages. Accessed 9 November 2021.
  5. James Verini, ‘How the battle of Mosul was waged on WhatsApp’, The Guardian, 28 September 2019. Available at: https://www.theguardian.com/world/2019/sep/28/battle-of-mosul-waged-on-whatsapp-james-verini. Accessed 23 October 2021.
  6. Shawn Snow, Kyle Rempfer and Meghann Myers, ‘Deployed 82nd Airborne unit told to use these encrypted messaging apps on government cell phones’, Military Times, 23 January 2020. Available at: https://www.militarytimes.com/flashpoints/2020/01/23/deployed-82nd-airborne-unit-told-to-use-these-encrypted-messaging-apps-on-government-cellphones/. Blake Moore and Jan E. Tighe, ‘Insecure communications like WhatsApp are putting U.S. National Security at risk’, 8 December 2020. Available at: https://www.nextgov.com/ideas/2020/12/insecure-communications-whatsapp-are-putting-us-national-security-risk/170577/. Both articles accessed 23 October 2021.
  7. Stephanie Kirchgaessner, ‘How NSO became the company whose software can spy on the world’, The Guardian, 23 July 2021. Available at: https://www.theguardian.com/news/2021/jul/23/how-nso-became-the-company-whose-software-can-spy-on-the-world. Accessed 23 October 2021.
  8. Stephen Losey, ‘Woman shot and killed at Capitol was security forces airman, QAnon adherent’, Air Force Times, 7 January 2021. Available at: https://www.airforcetimes.com/news/your-air-force/2021/01/07/woman-shot-and-killed-at-capitol-was-security-forces-airman-qanon-adherent/. Accessed 30 October 2021.
  9. Charlie Winter, ‘Media Jihad: the Islamic State’s doctrine for information warfare’, The International Centre for the Study of Radicalisation and Political Violence, King’s College London, 2017. Report available at: https://icsr.info/2017/02/13/icsr-report-media-jihad-islamic-states-doctrine-information-warfare/. Accessed 17 August 2020.
  10. Stuart Scheller, ‘To the American leadership. Very respectfully, US’. 26 August 2021. Video on Facebook at: https://www.facebook.com/stuart.scheller/videos/561114034931173/?t=238. Accessed 30 October 2021.
  11. Stuart Scheller, ‘Your move’. 29 August 2021. Video on YouTube at: https://www.youtube.com/watch?v=lR7jBsR0D10&t=495s. Accessed 30 October 2021.
  12. ‘“We Never Got It. Not Even Close”: Afghanistan Veterans Reflect on 20 Years of War’. Politico Magazine, 10 September 2021. Available at: https://www.politico.com/news/magazine/2021/09/10/politico-mag-afghan-vets-roundtable-506989. Accessed 30 October 2021.
  13. Jeff Schogol, ‘Leaked documents reveal just how concerned the Marine Corps was about Lt. Col. Stuart Scheller’s call for “revolution”’, Task and Purpose, 17 October 2021. Available at: https://taskandpurpose.com/news/marine-corps-lt-col-stuart-scheller-court-martial/. Accessed 30 October 2021.
  14. Mike Wendling, ‘QAnon: What is it and where did it come from?’, BBC News, 6 January 2021. Available at: https://www.bbc.co.uk/news/53498434. Accessed 30 October 2021.
  15. The video was posted on Twitter by Katherine Denkinson at 20:23 on 9 August 2021. Available at: https://twitter.com/KDenkWrites/status/1424813677849415685?s=20. Accessed 30 October 2021.
  16. Ibid.
  17. ‘Frances Haugen says Facebook is “making hate worse”’, BBC News, 26 October 2021. Available at: https://www.bbc.co.uk/news/technology-59038506. Accessed 1 November 2021.

The circulation of power and ethics in AI, robotics, and autonomy research in Australia

Nov 1, 2022 by Sian Troath

Autonomous vehicles, drones, swarming and collaborative robotics have together recently been announced as not only a ‘critical technology’, but a ‘critical technology of initial focus’ by the Australian government – one of a shortlist of nine priority technologies identified as essential to Australia’s economic and national security.1 Australia is seen as a leader when it comes to trusted autonomous systems, with autonomy research in the defence space escalating in recent years. Robotics and autonomous systems, or RAS, have been identified as both a threat (when in the hands of adversaries) and an opportunity for Defence (when in the hands of Defence, and allies and partners).

The opportunities identified include the following: enhanced combat capability, improved efficiency, increased mass, decision superiority, reduced risk to personnel, reduced physical and cognitive loads on soldiers, improved decision making, agility, resilience and enhanced lethality.2 Key to unlocking these opportunities is solving or mitigating both the ethical and practical challenges associated with using such systems.

In terms of practical challenges, the aim is to support collaboration between Defence, industry and academia to advance technology to a point where RAS are cheap, small and many.3 In terms of ethical challenges, five facets of ethical AI for Defence have been identified: responsibility (who is responsible), governance (how AI is controlled), trust (how AI can be trusted), law (how AI can be used lawfully) and traceability (how the actions of AI are recorded).4

This work is largely being led by the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC). As the first Defence Cooperative Research Centre, launched in 2018, it aims to bring together Defence, academia and industry to tackle challenges relating to trusted autonomous systems.5 A key focus of the centre is ethics.

Two recent articles have drawn attention to the circulation of ethics and capture regarding big technology corporations and AI research. Phan, Goldenfein, Mann and Kuch trace how particular approaches to ethics circulate across Big Tech, universities and other industries.6 They argue that ‘Big Tech has transformed ethics into a form of capital – a transactional object external to the organisation, one of the many “things” contemporary capitalists must tame and procure’.7 Whittaker explores the influence of the tech industry on AI research, arguing that reliance on large data sets, computational processing power and data storage concentrates power in the hands of a small number of large tech companies who hold such resources.8

Both of these pieces provide an interesting springboard for thinking about defence AI research in Australia. It is the push and pull between profit-based industry incentives, Defence’s desire for military advantage, and academia’s growing reliance on external funding that is shaping the development of robotics and autonomous systems – not only the technological systems themselves, but also ideas about the ethics and practicality of their use. Power, ethics and narratives all circulate between these spaces. These interconnections, exacerbated by dual-use opportunities which see technologies able to be used or adapted between civilian and military purposes, require examination.

AI and robotics and autonomous systems research for Defence purposes does not take place in a vacuum – it is part of a broader ecosystem of power and dependency. Kate Crawford highlights these very power dynamics, in the American context, in her book Atlas of AI. As she outlines, in seeking to enact the Third Offset strategy and utilise AI and autonomy for military advantage, ‘the Department of Defense would need gigantic extractive infrastructures’ – and the only place where both the required human and technological resources can be accessed is the tech industry.9

The power flows in both directions, however, with perceptions of strategic competition driving a desire for technological superiority and creating a complex relationship between the state and industry. Indeed, Crawford argues, the dual-use nature of AI technologies has led the US to adopt civilian-military collaboration as ‘an explicit strategy: to seek national control and international dominance of AI in order to secure military and corporate advantage’.10

The push for this kind of dual-use approach is also evident in Australia. The TASDCRC was set up with investment from the Queensland state government, which makes clear that it expects the benefits of defence-focused research on trusted autonomous systems to bolster civilian industries such as agriculture, mining and environmental management.11 In the reverse direction, Kierin Joyce, Chief Engineer for the Royal Australian Air Force’s Remotely Piloted Aircraft Systems/Unmanned Aerial Systems, argues that Australia’s world-leading autonomy research in the mining sector can be adapted to a defence context.12 This all takes place in the context of an ongoing shift in approach to the defence industry, with the government aiming to enhance connections between Defence, industry, and academia.13

The growing push for strengthening collaboration between Defence, industry and academia takes us into a second thread: how the dynamics of power, dependency and the circulation of ethics emanate from Defence. Whittaker lays out the cost of industry’s capture of AI research by providing the historical context of the Cold War dominance of the US military over scientific research.14 While she is right to focus on tech companies as the dominant source of this dynamic in the present day, it is important not to discount the ongoing influence of Defence and the messy relationship between civilian and military research – particularly when it comes to AI, autonomy and robotics.

In Australia, both industry and academia vie for defence research funding, with academia also focused on attracting industry funding. On the university side, COVID-19 has exacerbated pre-existing crises in the neoliberal university – leaving research and jobs increasingly reliant on accessing external funding.15 These dynamics of power and dependence influence how research develops. In Whittaker’s words,

This doesn’t mean that researchers within these domains are compromised. Neither does it mean that there aren’t research directions that can elude such dependencies. It does mean, however, that the questions and incentives that animate the field are not always individual researchers’ to decide. And that the terms of the field—including which questions are deemed worth answering, and which answers will result in grants, awards, and tenure—are inordinately shaped by the corporate turn to resource-intensive AI, and the tech-industry incentives propelling it.16

Phan et al. also highlight these difficulties regarding both government and industry funding, pointing out that even the mere selection of what will get funded, influenced by various interests, creates ‘a set of dilemmas and paradoxes’ for researchers.17

It is interesting to note, then, the funding choices that lead money to be directed to ethics in AI for Defence, and to trusted autonomous systems, taking us back to the circulation of ethics in the messy civilian-military relationship. Phan et al. describe the corporate logic at play, which views ethics as a problem to solve, as something to acquire to bolster legitimacy. There is a similar logic at play in the Defence arena: ethics are seen as something to acquire, a problem to solve, in order to succeed in the quest for military advantage. The TASDCRC commenced a $9 million, six-year Programme on the Ethics and Law of Trusted Autonomous Systems in 2019.18 Further to this is the establishment of the TASDCRC Ethics Uplift Program, which aims, among other things, to ‘build enduring ethical capacity in Australian industry and universities to service Australian RAS-AI’ and ‘educate in how to build ethical and legal autonomous systems’.19  TASDCRC CEO Jason Scholz has said that ‘ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions’.20 These efforts all rely on the industry-academia-military relationship at their core.

Meanwhile, Defence has itself emphasised that ‘the ethics of AI and autonomous systems is an ongoing priority’.21  Air Vice-Marshal Cath Roberts has spoken of the need to ‘ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed’, given the vital role AI and autonomy will play for air power.22 A group of researchers from or associated with the TASDCRC go further still, arguing that autonomous weapons will be ‘ethical weapons’, able to make war ‘safer’.23  Again, ethics are viewed as a problem to be solved. In the corporate space, ethics are to be solved for profit. In the defence space, ethics are to be solved for military advantage and, relatedly, to facilitate acceptance and trust of such technologies by both personnel and the public. 

The approach to ‘trust’ is the same: a problem to be solved. If an autonomous system can be trusted, then it can be used – ethically as well as legally – in the pursuit of military advantage. Autonomous systems need to be trusted by the personnel who will use them so that they are willing to use them,24  and they need to be trusted by the public so that Defence retains its licence to operate. It is an interesting Australian quirk that such systems are not lethal autonomous systems, or even merely autonomous systems, but rather trusted autonomous systems.

None of this is to say that either research on ethics or trust in technology is inherently suspect. Rather, it is to point out that the terms ‘ethical’ and ‘trusted’ in AI, robotics and autonomy research are being used for political purposes – sometimes intentionally, sometimes unintentionally. As someone who has conducted research on trust in technology under a Defence contract, I was once in a room where someone asked, ‘Can’t we use trust as a weapon?’ Militarism is quite the drug. 

These are threads which still need further unravelling. The circulation of ethics and power at the intersection of academia, industry and defence when it comes to AI, robotics and autonomy research in Australia demands further exploration. It is the intertwining of corporate interests seeking profit, Defence motivations for military superiority, and academia’s desperation for external funding that is shaping AI, robotics, and autonomous systems research in Australia. The narratives being deployed to achieve these aims require careful consideration.

Notes:

  1. Critical Technologies Policy Coordination Office, Australian Government, ‘The Action Plan for Critical Technologies’, 2021. See also Critical Technologies Policy Coordination Office, Australian Government, ‘Blueprint for Critical Technologies’, 2021.
  2. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 8; Australian Army, ‘Robotic & Autonomous Systems Strategy’, 2018, p. 6; Royal Australian Navy, ‘RAS-AI Strategy 2040: Warfare Innovation Navy’, 2020, p. 14.
  3. Australian Defence Force, ‘Concept for Robotic and Autonomous Systems’, 2020, p. 20.
  4. Kate Devitt, Michael Gan, Jason Scholz, and Robert Bolia, ‘A Method for Ethical AI in Defence’, Australian Government Department of Defence, 2020, p. ii.
  5. Trusted Autonomous Systems, ‘About Us’, nd., https://tasdcrc.com.au/about-us/. 
  6. Thao Phan, Jake Goldenfein, Monique Mann, and Declan Kuch, ‘Economies of Virtue: The Circulation of “Ethics” in Big Tech’, Science as Culture, 2021, pp. 1-15.
  7. Phan et al., p. 1.
  8. Meredith Whittaker, ‘The Steep Cost of Capture’, Interactions XXVII.6 (November-December), 2021, pp. 51-55.
  9. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press: New Haven, 2021, p. 188.
  10. Crawford, p. 187.
  11. Queensland Government, ‘Queensland Drones Strategy’, 2019, p. 29.
  12. Robbin Laird, ‘The Quest for Next Generation Autonomous Systems: Impact on Reshaping Australian Defence Forces’, 25 May 2021, https://defense.info/re-shaping-defense-security/2021/05/the-quest-for-next-generation-autonomous-systems-impact-on-reshaping-australian-defence-forces/. 
  13. Australia Department of Defence, ‘2016 Defence Industry Policy Statement’, 2016; Melissa Price, ‘Op-Ed: Give Pillars Approach to Support Defence Industry’, 24 September 2020, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/op-ed-five-pillars-approach-support-defence-industry; Melissa Price, ‘Defence Innovation System Goes Under Microscope’, 3 September 2021, https://www.minister.defence.gov.au/minister/melissa-price/media-releases/defence-innovation-system-goes-under-microscope. 
  14. Whittaker, p. 52.
  15. Phan et al., p. 8.
  16. Whittaker, p. 52.
  17. Phan et al., p. 2.
  18. Trusted Autonomous Systems, ‘TASDCRC Activity on Ethics and Law of Trusted Autonomous Systems’, 12 February 2021, https://tasdcrc.com.au/tasdcrc-activity-on-ethics-and-law-of-trusted-autonomous-systems/. 
  19. Ibid.
  20. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/. 
  21. Australian Government Department of Defence, ‘Defence Releases Report on Ethical Use of AI’, 16 February 2021, https://news.defence.gov.au/media/media-releases/defence-releases-report-ethical-use-ai. 
  22. Trusted Autonomous Systems, ‘A Method for Ethical AI in Defence’, 16 February 2021, https://tasdcrc.com.au/a-method-for-ethical-ai-in-defence/.
  23. Jason B. Scholz, Dale A. Lambert, Robert S. Bolia, and Jai Galliott, ‘Ethical Weapons: A Case for AI in Weapons’, in Steven C. Roach and Amy E. Eckert (eds.), Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems, State University of New York Press: Albany, 2020, pp. 181-214.
  24. Jai Galliott and Austin Wyatt, ‘Risks and Benefits of Autonomous Weapon Systems: Perceptions Among Future Australian Defence Force Officers’, Journal of Indo-Pacific Affairs, Winter 2020, pp. 17-34; Jai Galliott and Austin Wyatt, ‘Considering the Importance of Autonomous Weapon System Design Factors to Future Military Leaders’, Australian Journal of International Affairs, 2021, pp. 1-26.

The Social Media Mirage

Nov 1, 2022 by Mark Andrejevic

During the Delta wave of the COVID-19 pandemic in the United States, an octogenarian emeritus professor at the University of Georgia abruptly quit in the middle of a class he was teaching because a student refused to don a hygienic mask.1  The teacher had legitimate reason for concern: as someone in his late 80s with diabetes and other underlying health conditions, he was at high risk for an adverse outcome should he become infected with the virus. The student, by contrast, had not come to class equipped with a mask and, after being provided one, refused to wear it correctly because she said it made it difficult for her to breathe. What was disturbingly familiar about this scenario – one being repeated elsewhere in the US – is the antisocial and absolutist version of personal freedom embraced by the student. This is not an example of the classic liberal version of freedom that informs the libertarian strains of the contemporary right – the version that stipulates, ‘your freedom to swing your arm ends at the point where another person’s nose begins’. Rather, it interprets the freedom to swing one’s arm – at least figuratively – so broadly as to render any consideration of someone else’s nose, or even their life, irrelevant.

This version of personal freedom has become a familiar staple of the right-wing response to the virus and, not coincidentally, of its critique of so-called ‘cancel culture’. Consider what Ted Cruz really meant when he demanded of Twitter CEO Jack Dorsey during a Congressional hearing, ‘Who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear?’ It was an interesting rejoinder to a commercial media company, considering that the business of such companies has, from their inception, been to curate what information gets publicised and circulated. For Cruz, apparently, anyone should have the right to say whatever they want on Jack Dorsey’s platform. Forget for a second the question of private ownership and control (as Cruz apparently did). The model of free speech implicit in Cruz’s formulation – one that recurs in the furore over ‘cancel culture’ and ‘political correctness’ – is that anyone should be able to say whatever they want, whenever they like, without any social consequences.

This last qualification lies at the heart of emergent free speech absolutism2  and links it directly to the unqualified version of personal freedom that leads a young, healthy student to completely disregard the wellbeing of those around her. We have been confronting the symptoms of this version of absolutist individualism for some time now – so it is perhaps time to consider the media conditions that enable it. An asocial individualism – one that conveniently forgets the social conditions that enable this understanding of the individual to emerge in the first place – is a characteristic symptom of an economic model based on targeting and customisation. This model, as we are becoming aware in increasingly pointed ways, relies upon the collection of detailed personal information to foreground a hypertrophied individualism while simultaneously relegating to the background our irreducible interdependence as social beings living in a shared society.

It is one thing to insist that people should be free to say what they want, whenever they like – but it is something altogether different to assert they should be exempt from facing the social consequences of doing so. That is asking a lot – much more than any existing society or community has ever tolerated. The principle of freedom of expression may, in the abstract, be considered a rigorous one (with exceptions for cases of direct harm – such as the proverbial shout of ‘Fire!’ in a crowded theatre). But it has always existed within concrete social and historical conditions. It has been used, in some contexts, to prevent so-called ‘prior restraint’ (in the form of a government ban on publishing information), but it has never meant blanket insulation from the social consequences of publication. No society has ever dispensed with all social limits on what might be uttered publicly without consequence, nor would it be desirable to create one. The right-wing Republicans who lay claim to such a version of free speech repeatedly fail to honour it in practice. We have seen what happens to Republicans who break with Trump’s lies about the 2020 election: Liz Cheney was literally de-platformed when she lost her leadership position in the House of Representatives (although she retained her media access), and other Republicans who joined her were likely to be targeted in the primaries by their own party. Those who most aggressively denigrate ‘cancel culture’ from one side of their mouths have been issuing direct calls from the other to, for example, gag anyone teaching about the history of racism in the US.3

The tension between an abstract commitment to ideals of free speech and social reality comes to a head with the emergence of technologies that make widespread, socially unaccountable speech possible. Prior to the rise of the Internet, it was certainly possible to circulate all kinds of speech anonymously or otherwise, but there were significant barriers to distributing it speedily and widely while bypassing established media gatekeepers. Public distribution based on pamphlets, bootleg audio and video recordings, and self-published manuscripts depended primarily on ‘pull’ forms of circulation – that is, demand on the part of readers. 

The Internet, coupled with the rise of social media platforms, significantly reconfigured the circumstances for the circulation of anonymous content: not only do the barriers to speedy, widespread distribution decrease significantly, but commercial algorithms also ‘push’ content to viewers, often prioritising the most controversial and extreme forms of content to boost engagement4 – regardless of whether that engagement is positive or negative. Once upon a time, few people would have devoted much of their time and resources to seeking out content simply because it was outrageous and offensive; social media now does this work automatically, supplying users with the dopamine hits of doomscrolling and hate-posting according to rhythms of intermittent positive reinforcement engineered to hook them.
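For readers who want a concrete picture of what ‘pushing’ content on engagement means in practice, here is a minimal, purely illustrative sketch in Python. The post fields and scoring weights are invented assumptions, not any platform’s actual recommendation algorithm; the point is only that a ranking rule rewarding raw engagement surfaces outrage as readily as approval.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    angry_reactions: int

def engagement_score(post: Post) -> float:
    # Illustrative weights: every reaction adds to the score,
    # regardless of whether it signals approval or outrage.
    return (post.likes
            + 2 * post.comments
            + 3 * post.shares
            + 3 * post.angry_reactions)

def rank_feed(posts: list[Post]) -> list[Post]:
    # 'Push' the most reacted-to items to the top of the feed,
    # whatever it was that provoked the reaction.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm local news item", likes=40, comments=5, shares=2, angry_reactions=0),
    Post("Inflammatory rumour", likes=10, comments=60, shares=30, angry_reactions=80),
])
print(feed[0].text)  # the inflammatory rumour ranks first
```

Nothing in such a rule needs to ‘intend’ to favour extreme material; the bias follows directly from optimising for engagement alone.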

Online, it becomes easier to imagine an almost purely abstract version of free speech: the possibility of saying whatever one wants, to as large an audience as possible, consequence free. The Internet not only allows instantaneous mass circulation of anonymous speech, it does so at a distance from the audience, removed from a sense of the social context (despite the attempt of commercial media platforms to brand themselves as ‘social’). Drawing on the work of the British sociologist Anthony Giddens, we might describe this version of abstract freedom as the result of the ‘disembedding’ of communication practices. For Giddens, this process refers to the abstraction or ‘lifting out’ of social relations (or, in this case, interactions) from their social contexts and ‘their restructuring across indefinite spans of time-space’.

When speech is embedded in social contexts, the practical limits imposed upon it are evident. One wouldn’t walk into a room full of people and deliberately insult them to their faces or lie about them without expecting consequences. Similarly, media outlets and advertisers understand that even if they are free, in principle, from prior restraint, there are social (and sometimes legal) consequences for airing material that transgresses social norms. The real social struggle, of course, comes in assessing, defining and redefining these norms rather than attempting to dispense with them altogether (which would mean dispensing with society itself). Establishing such norms and their tolerance for violation is an inherently social process. We cannot invent our own social norms any more than we can invent our own language. This is what it means to exist in relation with others: to be social beings. 

The Internet makes it possible, in other words, to bypass the gatekeepers that enforce consensus norms while simultaneously relying on algorithmic (and human) amplification to ‘push’ content to a broad audience. There are certainly avenues for response and, on occasion, violent forms of pushback – often targeted toward women or minorities (rather than toward those most likely to complain about ‘cancel culture’) – that move from the online context to the offline, in the form of stalking, intimidation and physical assault. However, the Internet and, more recently, social media, allow for the first time in human history the materialised fantasy of a space in which one can imagine the prospect of an absolutist version of free speech – one that is not only free from prior restraint, but from social consequences. That is, they construct the fantasy of a kind of post-social model of communication. 

The attack on ‘cancel culture’ launched by characters like Ted Cruz positions social norms themselves as inappropriate and illegitimate, something that can be sloughed off as we move toward a world where those in positions of privilege and power can malign whomever they like consequence-free (Donald Trump was the avatar of this version of privilege), whereas those who respond in kind from less privileged positions are accused of hatred and intolerance. But the analysis needs to push further; it is not enough to note that social media algorithms elevate the most controversial, noxious and obnoxious forms of communication.

The broader point to be made with respect to both mask refusal and free speech absolutism is that they spring from the same soil: the infrastructure of an asocial, abstracted individualism that drives the economic model of the online economy. The irony of commercial social media is precisely that it offloads distinctly and irreducibly social processes onto opaque technological systems where their very existence can be misrecognised and suppressed. The decision of how to curate content online is an irreducibly social and irreducibly political one. The false promise of automated systems is, by contrast, of a zero level of either the social or the political: that machines are somehow exempt from the social relations that have long provided the contours of our information environment – that they are somehow apolitical. The result is a transposition of market logic into the register of the machine (as if the market itself were neutral). 

This asocial imperative is reinforced by the operation of data-driven customisation and targeting, which envision and construct the image of a hermetic, self-contained individual. Everyone gets their own content, their own information, and their own entertainment, custom tailored for them. Whereas the mass media can be blamed for suppressing individual freedom and diversity of choice, the ideology of mass-customised media stifles recognition of sociality and the forms of interdependence that underwrite it. We know how this plays out in the realm of news and information: the grand dismantling of the shared protocols we once relied on to adjudicate between rival accounts of the world, and the consequent cacophony of accusations of ‘fake news’. Truth collapses into consumer preference, as when right-wing viewers migrated to NewsMax5  and the One America News Network after Fox News called the 2020 election for Joe Biden. The line between fact and fiction was relegated to the realm of personal taste – what other criterion could there be when news becomes simply another personalised commodity? 

Herein, perhaps, lies the answer to the question of why it might be so easy for someone immersed in a social media environment to view any request to take into consideration the wellbeing of another as an assault on personal freedom and individual autonomy. The social media bargain is not simply the offer of access in exchange for willing submission to comprehensive surveillance, it is also the promise of individualism ‘perfected’ in exchange for misrecognition of its conditions of possibility. Social media is a misnomer in the sense that it implies a heightened recognition of social interdependence on the part of the user; it is, however, accurate to the extent that it invokes the offloading of this interdependence on to automated systems, where it can be misrecognised as an unwelcome and surpassed vulnerability. This is the heart of the pathology of commercial social media – not simply that they amplify false information, not just that they privilege ‘engagement’ over accuracy, but that they embrace an incoherent fantasy of individuals ‘freed’ from their constitutive interdependence (for which machinic operations become an opaque, unrecognised substitute).

Notes:

  1. Yelena Dzhanova, ‘An 88-Year-Old Professor in Georgia Resigned in the Middle of Class Because a Student Refused to Wear a Mask over Her Nose: “That’s It. I’m Retired.”’, Business Insider Australia (blog), August 29, 2021, https://www.businessinsider.com.au/88-year-old-professor-resigns-mid-class-student-refuses-mask-2021-8.
  2. Kali Holloway, ‘The Great Hypocrisy of Right-Wingers Claiming “Cancel Culture”’, March 19, 2021, https://www.thenation.com/article/society/republicans-cancel-culture-kaepernick/.
  3. Nathan Hart, ‘Texas Senator Ted Cruz Hits Twitter, TV to Target Critical Race Theory’, McClatchy Washington Bureau, August 4, 2021, https://www.mcclatchydc.com/news/politics-government/article253116493.html.
  4. Paul Lewis and Erin McCormick, ‘How an Ex-YouTube Insider Investigated Its Secret Algorithm’, The Guardian, February 2, 2018, sec. Technology, https://www.theguardian.com/technology/2018/feb/02/youtube-algorithm-election-clinton-trump-guillaume-chaslot.
  5. Brian Stelter, ‘Newsmax TV Scores a Ratings Win over Fox News for the First Time Ever’, CNN, December 8, 2020, https://www.cnn.com/2020/12/08/media/newsmax-fox-news-ratings/index.html.

Filed Under: Political science Tagged With: Civil and political rights

AI for better or for worse, or AI at all?

Nov 1, 2022 by Kobi Leins

When I was a little girl, I was taught a song about a ball of white string, in which the white string could fix everything — tie a bow on a gift, fly a kite, mend things. The second verse of the song was about all the things that string cannot fix — broken hearts, damaged friendships — the list goes on. In all of the research I have been doing about Artificial Intelligence (AI), its governance and what it can do, this song has frequently come to mind. Many authors and researchers are doing the equivalent of repeatedly singing the first verse of the song, about all the things that AI can do, without contemplating where AI cannot effectively or, more importantly, should not be used. Probably now nearing the height of the Gartner hype cycle, AI is often misleadingly touted as being able to fix practically everything. Although it is true that AI will expedite many business processes and engender new ways of acquiring and creating wealth through more sophisticated use of data, for the everyday citizen those benefits are not always apparent.

The reality of what AI can do is very different from what is conveyed, particularly by industry. In some instances, in fact, AI is breaking things in a way and at a speed that is unprecedented. It is impossible for businesses and governments to simultaneously maximise public benefits, service levels, market competition and profitability. Profitability is almost inevitably being prioritised in a neoliberal context, at the expense of democracy, individual freedoms and the voices of civil society and citizens. Many are voicing these concerns, but they are yet to be actively addressed at all stages of contemplation of the use of AI.

AI includes a series of component parts, both software and hardware. AI may include the following: data-based or model-based algorithms; the data, both structured and unstructured; machine learning, both supervised and unsupervised; the sensors that provide input and the actuators that effect output. AI is complicated and encompasses many things. For ease of reference in this chapter, the term ‘AI’ includes all of these components, each of which may require different considerations and limitations. The individual consideration of each component is beyond the scope of this brief chapter, but this complexity is important to hold in mind when considering applications of AI.
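To make that decomposition concrete, the list above can be sketched as a simple data structure. This is a minimal illustration of the chapter’s own taxonomy, not a standard or complete model of an AI system, and the field names are assumptions chosen for readability.

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class AISystemComponents:
    # The component parts named in the text, each of which may
    # demand its own governance considerations and limitations.
    algorithm: Literal["data-based", "model-based"]
    structured_data: list[str] = field(default_factory=list)
    unstructured_data: list[str] = field(default_factory=list)
    learning: Literal["supervised", "unsupervised", "none"] = "none"
    sensors: list[str] = field(default_factory=list)    # inputs from the world
    actuators: list[str] = field(default_factory=list)  # effects on the world

# A hypothetical example instance, purely for illustration.
facial_recognition = AISystemComponents(
    algorithm="model-based",
    unstructured_data=["CCTV frames"],
    learning="supervised",
    sensors=["camera"],
    actuators=["alert to operator"],
)
```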

This chapter will contemplate the current popular dichotomy between techno-utopians (those who think that technology — including AI — will save the world) and techno-dystopians (who think technology will destroy it). I conclude that there needs to be a greater space in the discourse for questions, challenge and dissent regarding the use of technology without being dismissed as a techno-dystopian. In fact, these voices are required to ensure safe and beneficial uses of AI, particularly as this technology is increasingly embedded in physical systems and affects not only our virtual but also our physical worlds.

AI is not inherently good or bad — but it does have a past and a context

The overly simplistic and popular dichotomy often posed is between techno-utopians and techno-dystopians. The reality is far more complex, and creating a notion of ‘friends’ or ‘enemies’ of technology does not foster helpful dialogue about the risks and dangers of developing and using certain applications of AI. Differing interests and profoundly powerful market forces are shaping the conversation about AI and its capabilities. What Zuboff has coined ‘surveillance capitalism’ is far too profitable, and too lightly regulated, to allow genuine consideration of human rights, civil liberties or public benefit.1

Every technology has a history and a context.2 A prominent example from Winner’s book, The Whale and the Reactor, involves traffic overpasses in and around New York, designed by Robert Moses. Many of the overpasses were built low, which prevented access by public buses. This, in turn, excluded low-income people, disproportionately racial minorities, who depended entirely on public transportation. Winner argues that politics is built into everything we make, and that the moral questions asked throughout history — including by Plato and Hannah Arendt — are questions relevant to technology: our experience of being free or unfree, the social arrangements that either foster equality or inequality, the kinds of institutions that hold and use power and authority. The capabilities of AI, and the way that it is being used by corporations and by governments, continue to raise these questions today. Current systems using facial recognition or policing tools that reinforce prejudice are examples of technology with politics built in. The difference is that, in non-physical systems, the politics are not as easy to identify as in a tangible object like an overpass, although they may be just as hard to rectify once built, and they equally create outcomes that confer power and control over certain constituents.

The beginning of AI as a discipline

The 1956 Dartmouth conference, which ran over eight weeks, is often cited as the birth of AI as an academic discipline; it was very much a product of its time.

The conference participants, mainly white, wealthy and educated men, proposed to:

proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.3

Following this conference, developments in AI moved ahead in fits and starts. The quiet periods are now retrospectively referred to as “AI winters”.4 More recently, successes in game playing and facial recognition, among other advances, have attracted considerable attention, both positive and negative. The most rapid advances, however, have come through the growth of companies such as Google and Facebook, which use AI to configure and exploit the data they collect, both to track their users and to tailor the features offered to them.

Corporate use of AI

Many companies use AI to deliberately pursue anti-regulatory approaches to the existing legal structures governing business.5 These structures exist to ensure that society receives the tax revenue needed to improve the physical world for its citizens, that safety and privacy are upheld, and that the protections and norms (ideally) created by democratically elected representatives are given expression. Some companies using ‘disruptive’ AI technologies are deliberately avoiding these regulations.

No ethical limitations will ever dissuade these companies from a business model built on deliberately pursuing anti-regulatory approaches. The German word Vorsprung,6 literally a ‘jump ahead’, captures the posture well: in effect, these companies are jumping ahead of, and over, existing regulatory frameworks to pursue market dominance. In some jurisdictions, legal action is being taken to try to mitigate these approaches. In Australia, in May 2019, over 6000 taxi drivers filed a class action lawsuit for lost income against Uber for its deliberate attempt to ‘jump over’ existing laws regulating taxi and limousine licensing. ‘It is not acceptable for a business to place itself above the law and operate illegally to the disadvantage of others,’ said Andrew Watson, a lawyer with the claimants’ firm Maurice Blackburn.7

Facebook, which collects, stores and uses people’s private data, was recently fined US$5 billion for breaches related to the Cambridge Analytica scandal.8 At the time of writing, Facebook’s market capitalisation was approximately US$584 billion.9 Facebook’s share price rose after the fine was announced, by more than enough to cover it, most probably out of investor relief that no further regulatory responses were imminent. The fine, which represents about three months of Facebook’s revenue, also shows that regulators are toothless, unserious or, even worse, both.10
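To put the two figures quoted above in proportion, a back-of-the-envelope calculation (using only the numbers in this paragraph) shows how small the penalty was relative to Facebook’s valuation:

```python
fine = 5e9          # the US$5 billion FTC penalty
market_cap = 584e9  # Facebook's approximate market capitalisation, as quoted above

print(f"Fine as a share of market capitalisation: {fine / market_cap:.2%}")
# roughly 0.86% -- a share-price rise of less than one per cent covers it
```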

Use of AI to avoid public governance

It has already been suggested that the use of AI in particular circumstances is not accidental, but rather is often a deliberate means of avoiding scrutiny and traditional governance through inexplicable decision-making processes.

‘governance-by-design’ — the purposeful effort to use technology to embed values — is becoming a central mode of policymaking, and … our existing regulatory system is fundamentally ill-equipped to prevent that phenomenon from subverting public governance.11

Mulligan and Bamberger raise four main points. First, governance-by-design overreaches by using overbroad technological fixes that lack the flexibility to balance equities and adapt to changing circumstances. Errors and unintended consequences result. Second, governance-by-design often privileges one or a few values while excluding other important ones, particularly broad human rights. Third, regulators lack the proper tools for governance-by-design. Administrative agencies, legislatures and courts often lack technical expertise and have traditional structures and accountability mechanisms that poorly fit the job of regulating technology. Fourth, governance-by-design decisions that broadly affect the public are often made in private venues or in processes that make technological choices appear inevitable and apolitical.

Each of these points remains valid. Use of AI by governments and corporates alike often masks the underlying political agenda that the technology enables. In the case of what has been coined ‘Robodebt’ in Australia, the Federal Government’s welfare agency, Centrelink, used an algorithm that averaged a person’s annual income, gathered from tax office data, across 26 fortnights, instead of using their actual income in each fortnightly period, to calculate whether they had been overpaid welfare benefits. Recipients identified as having been overpaid were automatically sent letters demanding an explanation, followed by the swift issuance of debt notices to recover the amount.
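The arithmetic at the heart of the scheme is simple enough to sketch. The figures and the fortnightly income cut-off below are invented purely for illustration (they are not Centrelink’s actual parameters), but they show how averaging a lumpy annual income across 26 fortnights manufactures apparent overpayments in fortnights when a person earned nothing at all.

```python
FORTNIGHTS_PER_YEAR = 26

# Hypothetical recipient: paid work for half the year, no income while on benefits.
actual_fortnightly_income = [2000.0] * 13 + [0.0] * 13
annual_income = sum(actual_fortnightly_income)        # what the tax office sees: 26,000

# Invented cut-off above which a fortnight's benefit is treated as an overpayment.
income_cutoff = 500.0

# The 'Robodebt' shortcut: spread the annual figure evenly across every fortnight.
averaged_income = annual_income / FORTNIGHTS_PER_YEAR  # 1,000 per fortnight

flagged_by_averaging = sum(1 for _ in actual_fortnightly_income if averaged_income > income_cutoff)
flagged_by_actual_income = sum(1 for income in actual_fortnightly_income if income > income_cutoff)

print(flagged_by_averaging, flagged_by_actual_income)
# 26 versus 13: averaging flags every fortnight, including the 13 in which
# the person genuinely earned nothing and was entitled to support.
```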

This method of calculation resulted in many incorrect debts being raised. Welfare recipients comprise one of the most vulnerable groups in Australia, and even the raising of these debts without consultation or human interaction arguably caused profound harm.12 Senator Rachel Siewert, who chaired the Senate inquiry into Robodebt, noted that, ‘[t]here were nine hearings across Australia, and what will always stick with me, is that at every single hearing, we heard from or about people having suicidal thoughts or a severe deterioration in mental health upon receiving a letter.’13

The practical onus at law is on the creditor (in this case, Centrelink) to prove a debt, not on the debtor to disprove it. Automating debt-seeking letters challenges a fundamental legal principle derived from long-standing notions of procedural fairness.14 This is the type of automation that, even if rectified, causes irreversible damage, and is in and of itself a form of oppression. The use of social media to influence elections in the United States, and the Brexit referendum, has been covered extensively; more recently, similar tools were used in the 2019 Australian Federal election, when advertisements began to appear in social media feeds regarding a proposed, and utterly fictitious, ‘death tax’ attributed to the opposition Australian Labor Party.15 Although the impact of this input on the Australian election is not completely clear, it is known that nearly half of Gen Z obtain their information from social media alone, and that 69% of Australians are not interested in politics.16 These kinds of audiences are prime targets for deliberately placed, misleading social media advertisements, based on algorithmically generated categorisations.

Warnings about AI use from the past

If we could agree on clear parameters about how we build, design, and deploy AI, all of the normative questions that humans have posed for millennia would remain.17 No simple instruction set, ethical framework, or design parameters will provide the answers to complex existential and philosophical questions posed since the dawn of civilisation. Indeed, the very use of AI, as we have seen, can be a way of governing and making decisions.

Joseph Weizenbaum was the creator of ELIZA, the first chatbot, named after Eliza Doolittle in Pygmalion. It was designed to emulate a therapist through the relatively crude technique of consistently asking the person interacting with it to expand upon what they were talking about and to say how it made them feel. A German-American computer scientist and professor at MIT, Weizenbaum was disturbed by how seriously his secretary took the chatbot, even when she knew it was not a real person.
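The technique really was as crude as described. Below is a toy sketch in that spirit; it is not Weizenbaum’s original pattern-matching script, just a few hard-coded deflections, which is roughly all that was needed to draw his secretary in.

```python
import random

# A handful of canned deflections in the spirit of ELIZA's 'therapist' persona.
FOLLOW_UPS = [
    "Can you tell me more about that?",
    "How does that make you feel?",
    "Why do you say that?",
    "What do you think that means?",
]

def reply(user_input: str) -> str:
    text = user_input.lower().strip()
    if text.startswith("i feel"):
        # Reflect the user's own words back as a question.
        return f"Why do you feel{user_input[6:]}?"
    if "mother" in text or "father" in text:
        return "Tell me more about your family."
    return random.choice(FOLLOW_UPS)

print(reply("I feel anxious about work"))  # -> Why do you feel anxious about work?
```

A loop that simply keeps asking the user to elaborate, with no understanding behind it, was enough to be mistaken for care; that is precisely what alarmed Weizenbaum.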

Weizenbaum’s observations led him to argue that AI should not be used to replace people in positions that require respect and care.18 He was very vocal in his concerns about the use of AI to replace human decision making. In an interview with MIT’s The Tech, Weizenbaum elaborated, expanding beyond the realm of artificial intelligence alone, explaining that his fears for society and its future stemmed largely from the computer itself. His belief was that the computer, at its most basic level, is a fundamentally conservative force — it can only take in predefined datasets, in the interest of preserving the status quo.

More recently, other writers have expressed similar concerns.19 The promises made by technology companies are increasingly being questioned in light of repeated violations of human rights, privacy and ethical codes. The United Nations has found that Facebook played a role in the generation of hate speech that resulted in genocide in Myanmar.20 The Cambridge Analytica scandal has, by Facebook’s own estimation, been associated with the violation of the privacy of approximately 87 million people.21 Palantir, an infamous data analytics firm, is partnering with the United Nations World Food Programme to collect and collate data on some of the world’s most vulnerable populations.22 Many questions are being raised about the role and responsibility of those managing the technology in such situations, not just in times of conflict,23 but also in times of peacekeeping,24 including a United Nations response to identify, confront and combat hate speech and violence.25

Weizenbaum was also a product of his time, having escaped Nazi Germany with his family in 1936. His knowledge, and personal experience, of the use of data to make the Holocaust more efficient and effective inevitably shaped his perception and thinking about the wider use of data and computers. IBM’s early punch-card tabulating machines were used to record certain characteristics of German citizens.26 IBM did not cause the Holocaust, but without IBM’s systematic punch card system the trains would not have run on time, and the Nazis would not have been anywhere near as clinically ‘efficient’ at identifying Jews. For this reason, many Germans still resist any national census, and they also resist a cashless society such as the one Sweden has wholeheartedly adopted.

Hubert Dreyfus expressed similar concerns. His book, What Computers Can’t Do, was ridiculed on its release in 1972, another peak of hype about computing.27 Dreyfus’ main contention was that computers cannot ‘know’ in the sense that humans know, using intuition, contrary to what IT industry marketing would have you believe. Dreyfus, as a Professor of Philosophy at Berkeley, was particularly bothered that AI researchers seemed to believe they were on the verge of solving many long-standing philosophical problems within a few years, using computers.

We are all products of our time, and each of us has a story. My great-uncle was held in solitary confinement in Sachsenhausen for being a political objector who refused to participate in World War II. I was partially raised in Germany during the tumultuous late 1980s, and the tensions and histories are etched into some of my earliest memories. I returned to Germany in the late 1990s to watch the processing of history on the front page of every newspaper of the day. Watching today how data is being collected, shared, traded and married with other data using AI, and knowing its potential use and misuse, it is difficult for me not to agree with Weizenbaum that there are simply some tasks for which computers are not fit. Beyond being unfit, there are tools being created and enabled by AI that are shaping our elections, our categorisation by governments, our information streams and, albeit least importantly, our purchasing habits. My own history shapes my concern about how and why these systems and tools are being developed and used, a history that an increasing proportion of the world does not remember, or prefers to think impossible in their own time and context.

Although AI as we understand it was conceived and developed as early as 1956, we are only just coming to understand the implications of rapid computation of data, enabled by AI, and the risks and challenges it poses to society and to democracy. Once data is available, AI can be used in many different ways to affect our behaviours and our lives. Although these conversations are starting to increase now, Weizenbaum and Dreyfus considered these issues nearly five decades ago. Their warnings and writings remain prescient.

Notes:

  1. Zuboff S (2019). The Age of Surveillance Capitalism. Profile Books.
  2. Winner L (1986). The Whale and the Reactor. University of Chicago Press.
  3. McCarthy J et al. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Stanford University. https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
  4. Crevier D (1993). AI: The Tumultuous Search for Artificial Intelligence. Basic Books.
  5. Horan HH (2019). Uber’s path of destruction. American Affairs, 3(2). https://americanaffairsjournal.org/2019/05/ubers-path-of-destruction/
  6. Many readers would be familiar with one of Audi’s slogans: ‘Vorsprung durch Technik’, usually translated as ‘progress through technology’.
  7. Xu VX, Australian taxi drivers sue Uber over lost wages in class-action lawsuit. New York Times, 3 May 2019, https://www.nytimes.com/2019/05/03/technology/australia-uberdrivers-class-action.html
  8. The Facebook-Cambridge Analytica data scandal broke in early 2018 when it was revealed that the personal data of millions of Facebook users had been taken without their consent and used to target political advertising at them. It has been described as a watershed moment in the public understanding of personal data. See Kang C, F.T.C. approves Facebook fine of about $5 billion, New York Times, https://www.nytimes.com/2019/07/12/technology/facebookftcfine.html
  9. Facebook market cap. YCharts. https://ycharts.com/companies/FB/market_cap
  10. Patel N, Facebook’s $5 billion FTC fine is an embarrassing joke. The Verge, 12 July 2019, https://www.theverge.com/2019/7/12/20692524/facebook-five-billion-ftc-fine-embarrassing-joke
  11. Mulligan DK & Bamberger K (2018). Saving governance by design. 106 California Law Review, 697. https://doi.org/10.15779/Z38QN5ZB5H
  12. Karp P & Knaus C (2018). Centrelink robo-debt program accused of enforcing “illegal” debts. The Guardian. https://www.theguardian.com/australia-news/2018/apr/04/centrelink-robo-debt-program-accused-of-enforcing-illegal-debts
  13. Siewert R (2019). What I learned about poverty and mental health chairing the robo-debt enquiry. Crikey. https://www.crikey.com.au/2019/05/31/siewert-centelink-robo-debt-suicide/
  14. Carney T (2018). The new digital future for welfare: Debts without legal proofs or moral authority? UNSW Law Journal Forum. https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/03/006-Carney.pdf
  15. Murphy K et al. (2019). “It felt like a big tide”: how the death tax lie infected Australia’s election campaign. The Guardian. https://www.theguardian.com/australia-news/2019/jun/08/it-felt-like-a-big-tide-how-the-death-tax-lie-infected-australias-election-campaign
  16. Fisher C et al. (2019, 12 June). Digital news report: Australia 2019.
  17. Analysis & Policy Observatory. https://apo.org.au/node/240786
  18. Roff H (2019). Artificial intelligence: Power to the people. Ethics and International Affairs. https://www.ethicsandinternationalaffairs.org/2019/artificial-intelligence-power-to-the-people/
  19. Weizenbaum J (1976). Computer Power and Human Reason. WH Freeman.
  20. Broad E (2018). Made by Humans. Melbourne University Publishing. 
  21. Broussard M (2018). Artificial Intelligence: How Computers Misunderstand the World. MIT Press. 
  22. O’Neil C (2016). Weapons of Math Destruction. New York Crown Publishers. 
  23. Goodman EP & Powles J (forthcoming). Urbanism under Google: Lessons from Sidewalk Toronto. Fordham Law Review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3390610
  24. UN: Facebook had a “role” in Rohingya genocide. Aljazeera, 14 March 2018, https://www.aljazeera.com/news/2018/03/facebook-role-rohingya-genocide-180313161609822.htm. 
  25. Mozur P, A genocide incited on Facebook, with posts from Myanmar’s military. New York Times, 15 October 2018, https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
  26. Kang C & Frenkel S (2018). Facebook says Cambridge Analytica harvested data of up to 87 million users. New York Times. https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html
  27. Palantir and the UN’s World Food Programme are partnering for a reported $45 million. Privacy International, 6 February 2019. https://www.privacyinternational.org/news/2684/palantir-and-uns-world-food-programme-are-partnering-reported-45-million
  28. McDougall C (2019). Autonomous weapons systems: Putting the cart before the horse. Melbourne Journal of International Law, 20(1).
  29. Digital Blue Helmets, United Nations, https://unite.un.org/digital-bluehelmets/
  30. United Nations launches hate speech strategy. SABC Digital News, https://www.youtube.com/watch?v=V8DJkGEpddg
  31. Black E (2001). IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. Crown Publishers. Discussed in Preston P (2001). Six million and counting. The Guardian. https://www.theguardian.com/books/2001/feb/18/historybooks.features
  32. Dreyfus HL (1972). What Computers Can’t Do. Harper & Row. Dreyfus wrote an updated edition in 1992 entitled What Computers Still Can’t Do.

Filed Under: Law Tagged With: Military, defence, public property, public finance, tax, commerce (trade) & industrial law
