Circus Bazaar Magazine

Penned from the crooked timber of humanity

The Social Media Mirage

Nov 1, 2022 by Mark Andrejevic

During the Delta wave of the COVID-19 pandemic in the United States, an octogenarian emeritus professor at the University of Georgia abruptly quit in the middle of a class he was teaching because a student refused to don a hygienic mask.1  The teacher had legitimate reason for concern: as someone in his late 80s with diabetes and other underlying health conditions, he was at high risk for an adverse outcome should he become infected with the virus. The student, by contrast, had not come to class equipped with a mask and, after being provided one, refused to wear it correctly because she said it made it difficult for her to breathe. What was disturbingly familiar about this scenario – one being repeated elsewhere in the US – is the antisocial and absolutist version of personal freedom embraced by the student. This is not an example of the classic liberal version of freedom that informs the libertarian strains of the contemporary right – the version that stipulates, ‘your freedom to swing your arm ends at the point where another person’s nose begins’. Rather, it interprets the freedom to swing one’s arm – at least figuratively – so broadly as to render any consideration of someone else’s nose, or even their life, irrelevant.

This version of personal freedom has become a familiar staple of the right-wing response to the virus and, not coincidentally, of its critique of so-called ‘cancel culture’. Consider what Ted Cruz really meant when he demanded of Twitter CEO Jack Dorsey during a Congressional hearing, ‘Who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear?’ It was an interesting rejoinder to a commercial media company, considering that the business of such companies has, from their inception, been to curate what information gets publicised and circulated. For Cruz, apparently, anyone should have the right to say whatever they want on Jack Dorsey’s platform. Forget for a second the question of private ownership and control (as Cruz apparently did). The model of free speech implicit in Cruz’s formulation – one that recurs in the furore over ‘cancel culture’ and ‘political correctness’ – is that anyone should be able to say whatever they want, whenever they like, without any social consequences.

This last qualification lies at the heart of emergent free speech absolutism2 and links it directly to the unqualified version of personal freedom that leads a young, healthy student to completely disregard the wellbeing of those around her. We have been confronting the symptoms of this version of absolutist individualism for some time now – so it is perhaps time to consider the media conditions that enable it. An asocial individualism – one that conveniently forgets the social conditions that enable this understanding of the individual to emerge in the first place – is a characteristic symptom of an economic model based on targeting and customisation. This model, as we are becoming aware in increasingly pointed ways, relies upon the collection of detailed personal information to foreground a hypertrophied individualism while simultaneously relegating to the background our irreducible interdependence as social beings living in a shared society.

It is one thing to insist that people should be free to say what they want, whenever they like – but it is something altogether different to assert they should be exempt from facing the social consequences of doing so. That is asking a lot – much more than any existing society or community has ever tolerated. The principle of freedom of expression may, in the abstract, be considered a rigorous one (with exceptions for cases of direct harm – such as the proverbial shout of ‘Fire!’ in a crowded theatre). But it has always existed within concrete social and historical conditions. It has been used, in some contexts, to prevent so-called ‘prior restraint’ (in the form of a government ban on publishing information), but it has never meant blanket insulation from the social consequences of publication. No society has ever dispensed with all social limits on what might be uttered publicly without consequence, nor would it be desirable to create one. The right-wing Republicans who lay claim to such a version of free speech repeatedly fail to honour it in practice. We have seen what happens to Republicans who break with Trump’s lies about the 2020 election: Liz Cheney was literally de-platformed when she lost her leadership in the House of Representatives (although she retained her media access), and other Republicans who joined her were likely to be targeted in the primaries by their own party. Those who most aggressively denigrate ‘cancel culture’ from one side of their mouths have been issuing direct calls from the other to, for example, gag anyone teaching about the history of racism in the US.3

The tension between an abstract commitment to ideals of free speech and social reality comes to a head with the emergence of technologies that make widespread, socially unaccountable speech possible. Prior to the rise of the Internet, it was certainly possible to circulate all kinds of speech anonymously or otherwise, but there were significant barriers to distributing it speedily and widely while bypassing established media gatekeepers. Public distribution based on pamphlets, bootleg audio and video recordings, and self-published manuscripts depended primarily on ‘pull’ forms of circulation – that is, demand on the part of readers. 

The Internet, coupled with the rise of social media platforms, significantly reconfigured the circumstances for the circulation of anonymous content: not only do the barriers to speedy, widespread distribution decrease significantly, but also commercial algorithms ‘push’ content to viewers, often prioritising the most controversial and extreme forms of content to boost engagement4 – regardless of whether this is positive or negative. Whereas once it would have been unlikely for anyone to devote much of their time and resources to seeking out content just because it was outrageous and offensive, social media does this work automatically, providing users with the dopamine hit that comes from doom scrolling and hate posting according to rhythms of intermittent positive reinforcement engineered to hook them.

Online, it becomes easier to imagine an almost purely abstract version of free speech: the possibility of being able to say whatever one wants, to as large an audience as possible, consequence free. The Internet not only allows instantaneous mass circulation of anonymous speech, it does so at a distance from the audience, removed from a sense of the social context (despite the attempt of commercial media platforms to brand themselves as ‘social’). Drawing on the work of the British sociologist Anthony Giddens, we might describe this version of abstract freedom as the result of the ‘disembedding’ of communication practices. For Giddens, this process refers to the abstraction or ‘lifting out’ of social relations (or, in this case, interactions) from their social contexts and ‘their restructuring across indefinite spans of time-space’. 

When speech is embedded in social contexts, the practical limits imposed upon it are evident. One wouldn’t walk into a room full of people and deliberately insult them to their faces or lie about them without expecting consequences. Similarly, media outlets and advertisers understand that even if they are free, in principle, from prior restraint, there are social (and sometimes legal) consequences for airing material that transgresses social norms. The real social struggle, of course, comes in assessing, defining and redefining these norms rather than attempting to dispense with them altogether (which would mean dispensing with society itself). Establishing such norms and their tolerance for violation is an inherently social process. We cannot invent our own social norms any more than we can invent our own language. This is what it means to exist in relation with others: to be social beings. 

The Internet makes it possible, in other words, to bypass the gatekeepers that enforce consensus norms while simultaneously relying on algorithmic (and human) amplification to ‘push’ content to a broad audience. There are certainly avenues for response and, on occasion, violent forms of pushback – often targeted toward women or minorities (rather than toward those most likely to complain about ‘cancel culture’) – that move from the online context to the offline, in the form of stalking, intimidation and physical assault. However, the Internet and, more recently, social media, allow for the first time in human history the materialised fantasy of a space in which one can imagine the prospect of an absolutist version of free speech – one that is not only free from prior restraint, but from social consequences. That is, they construct the fantasy of a kind of post-social model of communication. 

The attack on ‘cancel culture’ launched by characters like Ted Cruz positions social norms themselves as inappropriate and illegitimate, something that can be sloughed off as we move toward a world where those in positions of privilege and power can malign whomever they like consequence-free (Donald Trump was the avatar of this version of privilege), whereas those who respond in kind from less privileged positions are accused of hatred and intolerance. But the analysis needs to push further; it is not enough to note that social media algorithms elevate the most controversial, noxious and obnoxious forms of communication.

The broader point to be made with respect to both mask refusal and free speech absolutism is that they spring from the same soil: the infrastructure of an asocial, abstracted individualism that drives the economic model of the online economy. The irony of commercial social media is precisely that it offloads distinctly and irreducibly social processes onto opaque technological systems where their very existence can be misrecognised and suppressed. The decision of how to curate content online is an irreducibly social and irreducibly political one. The false promise of automated systems is, by contrast, of a zero level of either the social or the political: that machines are somehow exempt from the social relations that have long provided the contours of our information environment – that they are somehow apolitical. The result is a transposition of market logic into the register of the machine (as if the market itself were neutral). 

This asocial imperative is reinforced by the operation of data-driven customisation and targeting, which envision and construct the image of a hermetic, self-contained individual. Everyone gets their own content, their own information, and their own entertainment, custom tailored for them. Whereas the mass media can be blamed for suppressing individual freedom and diversity of choice, the ideology of mass-customised media stifles recognition of sociality and the forms of interdependence that underwrite it. We know how this plays out in the realm of news and information: the grand dismantling of the shared protocols we once relied on to adjudicate between rival accounts of the world, and the consequent cacophony of accusations of ‘fake news’. Truth collapses into consumer preference, as when right-wing viewers migrated to NewsMax5  and the One America News Network after Fox News called the 2020 election for Joe Biden. The line between fact and fiction was relegated to the realm of personal taste – what other criterion could there be when news becomes simply another personalised commodity? 

Herein, perhaps, lies the answer to the question of why it might be so easy for someone immersed in a social media environment to view any request to take into consideration the wellbeing of another as an assault on personal freedom and individual autonomy. The social media bargain is not simply the offer of access in exchange for willing submission to comprehensive surveillance, it is also the promise of individualism ‘perfected’ in exchange for misrecognition of its conditions of possibility. Social media is a misnomer in the sense that it implies a heightened recognition of social interdependence on the part of the user; it is, however, accurate to the extent that it invokes the offloading of this interdependence on to automated systems, where it can be misrecognised as an unwelcome and surpassed vulnerability. This is the heart of the pathology of commercial social media – not simply that they amplify false information, not just that they privilege ‘engagement’ over accuracy, but that they embrace an incoherent fantasy of individuals ‘freed’ from their constitutive interdependence (for which machinic operations become an opaque, unrecognised substitute).

Notes:

  1. Yelena Dzhanova, ‘An 88-Year-Old Professor in Georgia Resigned in the Middle of Class Because a Student Refused to Wear a Mask over Her Nose: “That’s It. I’m Retired.”’, Business Insider Australia (blog), August 29, 2021, https://www.businessinsider.com.au/88-year-old-professor-resigns-mid-class-student-refuses-mask-2021-8.
  2. Kali Holloway, ‘The Great Hypocrisy of Right-Wingers Claiming “Cancel Culture”’, March 19, 2021, https://www.thenation.com/article/society/republicans-cancel-culture-kaepernick/.
  3. Nathan Hart, ‘Texas Senator Ted Cruz Hits Twitter, TV to Target Critical Race Theory’, McClatchy Washington Bureau, August 4, 2021, https://www.mcclatchydc.com/news/politics-government/article253116493.html.
  4. Paul Lewis and Erin McCormick, ‘How an Ex-YouTube Insider Investigated Its Secret Algorithm’, The Guardian, February 2, 2018, sec. Technology, https://www.theguardian.com/technology/2018/feb/02/youtube-algorithm-election-clinton-trump-guillaume-chaslot.
  5. Brian Stelter, ‘Newsmax TV Scores a Ratings Win over Fox News for the First Time Ever’, CNN, December 8, 2020, https://www.cnn.com/2020/12/08/media/newsmax-fox-news-ratings/index.html.


AI for better or for worse, or AI at all?

Nov 1, 2022 by Kobi Leins

When I was a little girl, I was taught a song about a ball of white string, in which the white string could fix everything — tie a bow on a gift, fly a kite, mend things. The second verse of the song was about all the things that string cannot fix — broken hearts, damaged friendships — the list goes on. In all of the research I have been doing about Artificial Intelligence (AI), its governance and what it can do, this song has frequently come to mind. Many authors and researchers are doing the equivalent of repeatedly singing the first verse of the song, about all the things that AI can do, without contemplating where AI cannot effectively or, more importantly, should not be used. Probably now nearing the height of the Gartner hype cycle, AI is often misleadingly touted as being able to fix practically everything. Although it is true that AI will expedite many business processes and engender new ways of acquiring and creating wealth through more sophisticated use of data, for the everyday citizen those benefits are not always apparent.

The reality of what AI can do is very different from what is conveyed, particularly by industry. In some instances, in fact, AI is breaking things in a way and at a speed that is unprecedented. It is impossible for businesses and governments to simultaneously maximise public benefits, service levels, market competition and profitability. Profitability is almost inevitably being prioritised in a neoliberal context, at the expense of democracy, individual freedoms and the voices of civil society and citizens. Many are voicing these concerns, but they are yet to be actively addressed at all stages of contemplation of the use of AI.

AI includes a series of component parts, both software and hardware. AI may include the following: data-based or model-based algorithms; the data, both structured and unstructured; machine learning, both supervised and unsupervised; the sensors that provide input and the actuators that effect output. AI is complicated and encompasses many things. For ease of reference in this chapter, the term ‘AI’ includes all of these components, each of which may require different considerations and limitations. The individual consideration of each component is beyond the scope of this brief chapter, but this complexity is important to hold in mind when considering applications of AI.

This chapter will contemplate the current popular dichotomy between techno-utopians (those who think that technology — including AI — will save the world) and techno-dystopians (who think technology will destroy it). I conclude that there needs to be a greater space in the discourse for questions, challenge and dissent regarding the use of technology without being dismissed as a techno-dystopian. In fact, these voices are required to ensure safe and beneficial uses of AI, particularly as this technology is increasingly embedded in physical systems and affects not only our virtual but also our physical worlds.

AI is not inherently good or bad — but it does have a past and a context

The overly simplistic and popular dichotomy often posed is between those who are techno-optimists and techno-dystopians. The reality is far more complex, and creating a notion of ‘friends’ or ‘enemies’ of technology does not foster helpful dialogue about the risks and dangers of developing and using certain applications of AI. Differing interests and profoundly powerful market forces are shaping the conversation about AI and its capabilities. What Zuboff coins ‘surveillance capitalism’ is far too profitable and unregulated to furnish a genuine contemplation of human rights, civil liberties or public benefit.1

Every technology has a history and a context.2 A prominent example from Winner’s book, The Whale and the Reactor, involves traffic overpasses in and around New York, designed by Robert Moses. Many of the overpasses were built low, which prevented access by public buses. This, in turn, excluded low-income people, disproportionately racial minorities, who depended entirely on public transportation. Winner argues that politics is built into everything we make, and that moral questions asked throughout history — by thinkers including Plato and Hannah Arendt — are questions relevant to technology: our experience of being free or unfree, the social arrangements that either foster equality or inequality, the kinds of institutions that hold and use power and authority. The capabilities of AI, and the way that it is being used by corporations and by governments, continue to raise these questions today. Current systems using facial recognition or policing tools that reinforce prejudice are examples of technology with politics built in. The difference is that, in non-physical systems, politics are not as easy to identify as in a tangible object like an overpass, although they may be similarly challenging to rectify after they have been built, and equally create outcomes that generate power and control over certain constituents.

The beginning of AI as a discipline

The 1956 Dartmouth conference, which ran over eight weeks, is often cited as the birth of AI as an academic discipline, and it was very much a product of its time.

The conference participants, mainly white, wealthy and educated men, proposed to:

proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.3

Following this conference, developments in AI moved ahead in fits and starts. The quiet periods are now retrospectively referred to as “AI winters”.4 More recently, successes in game playing and facial recognition, among other advances, have attracted considerable attention, both positive and negative. The most rapid advances, however, have been in the growth of companies such as Google and Facebook, which configure and use data, processed by AI, to track their users and provide them with particular features.

Corporate use of AI

Many companies use AI to deliberately pursue anti-regulatory approaches to existing legal structures governing business.5 These structures exist to ensure that society receives tax to improve the physical world for its citizens, to ensure that safety and privacy are upheld, and to express protections and norms (ideally) created by democratically elected representatives. Some companies using ‘disruptive’ AI technologies are deliberately avoiding these regulations.

No ethical limitations will ever deter these companies from a business model designed to deliberately pursue anti-regulatory approaches. In German, the notion of disruption is associated with the word Vorsprung,6 from a verb meaning to jump ahead. In effect, these companies are jumping over existing regulatory frameworks to pursue market dominance. In some jurisdictions, legal action is being taken to try to mitigate these approaches. In Australia, in May 2019, over 6000 taxi drivers filed a class action lawsuit for lost income against Uber for its deliberate attempt to ‘jump over’ existing laws regulating taxi and limousine licensing. ‘It is not acceptable for a business to place itself above the law and operate illegally to the disadvantage of others,’ said Andrew Watson, a lawyer with the claimants’ firm Maurice Blackburn.7

Facebook, which collects, stores and uses people’s private data, was recently fined US$5 billion for breaches related to the Cambridge Analytica scandal.8 At the time of writing, Facebook’s market capitalisation was approximately US$584 billion.9 Facebook’s share price actually rose after the fine was announced, by enough to cover its cost, most probably due to investor relief that no further regulatory responses were imminent. The fine, which represents about three months of Facebook’s revenue, also shows that regulators are toothless, unserious, or even worse, both.10

Use of AI to avoid public governance

It has already been suggested that the use of AI in particular circumstances is not accidental, but rather is often a deliberate method of avoiding scrutiny and traditional governance through inexplicable decision-making processes:

‘governance-by-design’ — the purposeful effort to use technology to embed values — is becoming a central mode of policymaking, and … our existing regulatory system is fundamentally ill-equipped to prevent that phenomenon from subverting public governance.11

Mulligan and Bamberger raise four main points. First, governance-by-design overreaches by using overbroad technological fixes that lack the flexibility to balance equities and adapt to changing circumstances. Errors and unintended consequences result. Second, governance-by-design often privileges one or a few values while excluding other important ones, particularly broad human rights. Third, regulators lack the proper tools for governance-by-design. Administrative agencies, legislatures and courts often lack technical expertise and have traditional structures and accountability mechanisms that poorly fit the job of regulating technology. Fourth, governance-by-design decisions that broadly affect the public are often made in private venues or in processes that make technological choices appear inevitable and apolitical.

Each of these points remains valid. Use of AI by governments and corporations alike often masks the underlying political agenda that use of the technology enables. In the case of what has been coined ‘Robodebt’ in Australia, the Federal Government’s welfare agency, Centrelink, used an algorithm to average a person’s annual income gathered from tax office data over 26 fortnights, instead of individual fortnightly periods, to calculate whether they were overpaid welfare benefits. Recipients identified as having been overpaid were automatically sent letters demanding explanation, followed by the swift issuance of debt notices to recover the amount.

This method of calculation resulted in many incorrect debts being raised. Welfare recipients comprise one of the most vulnerable groups in Australia, and even raising these debts without consultation or human interaction arguably caused profound detrimental effects.12 Senator Rachel Siewert, who chaired the Senate Estimates hearings into Robodebt, noted that, ‘[t]here were nine hearings across Australia, and what will always stick with me, is that at every single hearing, we heard from or about people having suicidal thoughts or a severe deterioration in mental health upon receiving a letter.’13

The practical onus at law is on any creditor (in this case, Centrelink) to prove a debt, not on the debtor to disprove it. Automating debt-seeking letters challenges a fundamental law derived from long-standing principles of procedural fairness.14 This is the type of automation that, even if rectified, causes irreversible damage, and is in and of itself a form of oppression. The use of social media to influence elections in the United States and the Brexit referendum has been extensively covered, but more recently, similar tools were used in the 2019 Australian Federal election, when advertisements started to appear in social media feeds regarding a proposed (and utterly fictitious) death tax by the opposition Australian Labor Party.15 Although the impact of this input on the Australian election is not completely clear, it is known that nearly half of Gen Z obtain their information from social media alone, and that 69% of Australians are not interested in politics.16 These kinds of audiences are prime targets for deliberately placed, misleading social media advertisements, based on algorithmically generated categorisations.

Warnings about AI use from the past

If we could agree on clear parameters about how we build, design, and deploy AI, all of the normative questions that humans have posed for millennia would remain.17 No simple instruction set, ethical framework, or design parameters will provide the answers to complex existential and philosophical questions posed since the dawn of civilisation. Indeed, the very use of AI, as we have seen, can be a way of governing and making decisions.

Joseph Weizenbaum was the creator of ELIZA, the first chatbot, named after the character in Pygmalion. It was designed to emulate a therapist through the relatively crude technique of consistently asking the person interacting to expand upon what they were talking about and asking how it made them feel. A German-American computer scientist and professor at MIT, Weizenbaum was disturbed by how seriously his secretary took the chatbot, even when she knew it was not a real person.

Weizenbaum’s observations led him to argue that AI should not be used to replace people in positions that require respect and care.18 Weizenbaum was very vocal in his concerns about the use of AI to replace human decision making. In an interview with MIT’s The Tech, Weizenbaum elaborated, expanding beyond the realm of mere artificial intelligence, explaining that his fears for society and its future were largely because of the computer itself. His belief was that the computer, at its most basic level, is a fundamentally conservative force — it can only take in predefined datasets in the interest of preserving the status quo.

More recently, other writers have expressed similar concerns.19 Promises being made by  technology companies are increasingly being questioned in light of repeated violations of human rights, privacy and ethical codes. The United Nations has found that Facebook played a role in the generation of hate speech that resulted in genocide in Myanmar.20 The Cambridge Analytica scandal has, by Facebook’s own estimation, been associated with the violation of the privacy of approximately 87 million people.21 Palantir, an infamous data analytics firm, is partnering with the United Nations World Food Program to collect and collate data on some of the world’s most vulnerable populations.22 Many questions are being raised about the role and responsibility of those managing the technology in such situations, not just in times of conflict,23 but also in times of peacekeeping,24 including a United Nations’ response to identify, confront and combat hate speech and violence.25

Weizenbaum was also a product of his time, having escaped Nazi Germany with his family in 1936. His knowledge, and personal experience, of the use of data to make the Holocaust more efficient and effective inevitably shaped his perception and thinking about the wider use of data and computers. IBM’s early punch-card machines were used to record certain characteristics of German citizens.26 IBM did not enable the Holocaust, but without IBM’s systematic punch card system, the trains would not have run on time, and the Nazis would not have been anywhere near as clinically ‘efficient’ at identifying Jews. For this reason, Germans still resist any national census, and they also resist a cashless society such as the one Sweden has wholeheartedly adopted.

Hubert Dreyfus expressed similar concerns. His book, What Computers Can’t Do, was ridiculed on its release in 1972, at another peak of hype about computing.27 Dreyfus’ main contention was that computers cannot ‘know’ in the sense that humans know, using intuition, contrary to what IT industry marketing would have you believe. As a Professor of Philosophy at Berkeley, Dreyfus was particularly bothered that AI researchers seemed to believe they were on the verge of solving many long-standing philosophical problems within a few years, using computers.

We are all products of our time and each of us has a story. My great-uncle was kept in Sachsenhausen in solitary confinement as a political objector who refused to participate in World War II. I was partially raised in Germany during the tumultuous late 1980s, and the tensions and histories are etched into some of my earliest memories. I returned to Germany in the late 1990s to watch the processing of history on the front page of every newspaper of the day. Watching today how data is being collected, shared, traded and married with other data using AI, knowing its potential use and misuse, it is difficult for me not to agree with Weizenbaum that there are simply some tasks for which computers are not fit. Beyond being unfit, there are tools being created and enabled by AI that are shaping our elections, our categorisation by governments, our information streams and, albeit least importantly, our purchasing habits. My own history shapes my concern about how and why these systems and tools are being developed and used, a history that an increasing proportion of the world does not remember or prefers to think of as impossible in their own time and context.

Although AI as we understand it was conceived and developed as early as 1956, we are only now coming to grips with the implications of the rapid computation of data that AI enables, and the risks and challenges it poses to society and to democracy. Once data is available, AI can be used in many different ways to affect our behaviours and our lives. Although these conversations are now gathering pace, Weizenbaum and Dreyfus considered these issues nearly five decades ago. Their warnings and writings remain prescient.

Notes:

  1. Zuboff S (2019). The Age of Surveillance Capitalism. Profile Books.
  2. Winner L (1986). The Whale and the Reactor. University of Chicago Press.
  3. McCarthy J et al. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Stanford University. https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
  4. Crevier D (1993). AI: The Tumultuous Search for Artificial Intelligence. Basic Books.
  5. Horan HH (2019). Uber’s path of destruction. American Affairs, 3(2). https://americanaffairsjournal.org/2019/05/ubers-path-of-destruction/
  6. Many readers would be familiar with one of Audi’s slogans: ‘Vorsprung durch Technik’ — advancement through technology.
  7. Xu VX, Australian taxi drivers sue Uber over lost wages in class-action lawsuit. New York Times, 3 May 2019, https://www.nytimes.com/2019/05/03/technology/australia-uberdrivers-class-action.html
  8. The Facebook–Cambridge Analytica data scandal broke in early 2018 when it was revealed that the personal data of millions of Facebook users had been taken without their consent and used to target political advertising at them. It has been described as a watershed moment in the public understanding of personal data. See Kang C, F.T.C. approves Facebook fine of about $5 billion, New York Times, https://www.nytimes.com/2019/07/12/technology/facebook-ftc-fine.html
  9. Facebook market cap. YCharts. https://ycharts.com/companies/FB/market_cap
  10. Patel N, Facebook’s $5 billion FTC fine is an embarrassing joke. The Verge, 12 July 2019, https://www.theverge.com/2019/7/12/20692524/facebook-five-billion-ftc-fine-embarrassing-joke
  11. Mulligan DK & Bamberger K (2018). Saving governance by design. 106 California Law Review, 697. https://doi.org/10.15779/Z38QN5ZB5H
  12. Karp P & Knaus C (2018). Centrelink robo-debt program accused of enforcing “illegal” debts. The Guardian. https://www.theguardian.com/australia-news/2018/apr/04/centrelink-robo-debt-program-accused-of-enforcing-illegal-debts
  13. Siewert R (2019). What I learned about poverty and mental health chairing the robo-debt enquiry. Crikey. https://www.crikey.com.au/2019/05/31/siewert-centelink-robo-debt-suicide/
  14. Carney T (2018). The new digital future for welfare: Debts without legal proofs or moral authority? UNSW Law Journal Forum. https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/03/006-Carney.pdf
  15. Murphy K et al. (2019). “It felt like a big tide”: how the death tax lie infected Australia’s election campaign. The Guardian. https://www.theguardian.com/australia-news/2019/jun/08/it-felt-like-a-big-tide-how-the-death-tax-lie-infected-australias-election-campaign
  16. Fisher C et al. (2019, 12 June). Digital news report: Australia 2019.
  17. Analysis & Policy Observatory. https://apo.org.au/node/240786
  18. Roff H (2019). Artificial intelligence: Power to the people. Ethics and International Affairs. https://www.ethicsandinternationalaffairs.org/2019/artificial-intelligence-power-to-the-people/
  19. Weizenbaum J (1976). Computer Power and Human Reason. WH Freeman.
  20. Broad E (2018). Made by Humans. Melbourne University Publishing.
  21. Broussard M (2018). Artificial Intelligence: How Computers Misunderstand the World. MIT Press.
  22. O’Neil C (2016). Weapons of Math Destruction. New York: Crown Publishers.
  23. Goodman EP & Powles J (forthcoming). Urbanism under Google: Lessons from Sidewalk Toronto. Fordham Law Review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3390610
  24. UN: Facebook had a “role” in Rohingya genocide. Al Jazeera, 14 March 2018, https://www.aljazeera.com/news/2018/03/facebook-role-rohingya-genocide-180313161609822.htm
  25. Mozur P, A genocide incited on Facebook, with posts from Myanmar’s military. New York Times, 15 October 2018, https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
  26. Kang C & Frenkel S (2018). Facebook says Cambridge Analytica harvested data of up to 87 million users. New York Times. https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html
  27. Palantir and the UN’s World Food Programme are partnering for a reported $45 million. Privacy International, 6 February 2019. https://www.privacyinternational.org/news/2684/palantir-and-uns-world-food-programme-are-partnering-reported-45-million
  28. McDougall C (2019). Autonomous weapons systems: Putting the cart before the horse. Melbourne Journal of International Law, 20(1).
  29. Digital Blue Helmets, United Nations. https://unite.un.org/digital-bluehelmets/
  30. United Nations launches hate speech strategy. SABC Digital News. https://www.youtube.com/watch?v=V8DJkGEpddg
  31. Black E (2001). IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. Crown Publishers. Discussed in Preston P (2001). Six million and counting. The Guardian. https://www.theguardian.com/books/2001/feb/18/historybooks.features
  32. Dreyfus HL (1972). What Computers Can’t Do. Harper & Row. Dreyfus wrote an updated edition in 1992 entitled What Computers Still Can’t Do.

Filed Under: Law Tagged With: Military defence public property public finance tax commerce (trade) & industrial law

A decade after “Welcome to the Jungle”: a retrospective

Nov 1, 2022 by The Stroud

Almost a decade has rolled by since I was asked to pen an open-source scenario on the future of law enforcement, which I titled ‘Welcome to the Jungle’. It was well loved and praised, albeit treated in the same manner as the fiction that inspired it. In the years leading up to that point, I had been exploring emerging threats, from Carrington Event-level solar flares, to pandemics, to nation-state assaults on IT-enabled critical infrastructure. In hindsight, with two of these three now woven so deeply into the fabric of our reality, I feel a sense of having failed in my duty for not finding better klaxons and clarions to warn of these catastrophes. But Cassandra’s curse was never to be believed, and likewise the boy who cried wolf, so what to do? Well, find outlets like Circus Bazaar is an obvious, if somewhat blatant, answer. But so what… and then what? Well, let me tell you about a story I wrote.

When I wrote ‘Welcome to the Jungle’, I had been closely following the trends it mentions, and was fairly sure that most of them would eventuate. The trick with strategic foresight is getting the timing right, which is more complex than it sounds. How far you place an event on the horizon depends mainly on the appetite of the audience. Too close and the audience will discount it, with mutters of, ‘I haven’t seen/heard/been told of that before’. Place it at a sufficient distance ahead in the future, though, and the details become comfortably obscured on the horizon to the extent that the event appears plausible, yet remains shrouded in a fog-of-war, overshadowed by the gloom of tomorrow.  After all, everything is possible given enough time, right?

There are three points relating to this decision – of where to place events on the timeline – that I want to unpack. First, ideas concerning the future should be situated in accordance with the beliefs of the audience. Second, the right questions need to be asked. Lastly, perception is about power, so it is important to understand both perception and power.

As I mentioned, when talking about the future, timing is everything. I am absolutely certain, barring societal collapse, that humanity will develop bionics (as in a form of augmented human capabilities) that would still be regarded today as science fiction.1 Thought-controlled robotic prostheses are most certainly in our future.2 But at what point will they become industrialised, commercialised, or weaponised? Some might argue that they already are, but to paraphrase Gibson3,  only in pockets. My point is that it’s essential to consider the specifics of what you’re proposing. Waving your hands around and using ill-defined and/or ambiguous terms to discuss the perils of artificial intelligence, particularly without reference to the context of your audience, might attract some conference delegates to your presentation or serve as clickbait, but will ultimately leave audiences unsatisfied. They want to know what such developments mean for them.

Now, this type of contextualisation is extremely hard. As someone often called upon to explain emerging technology, I can attest to that request routinely being followed up with, ‘And can you tell us what this means for us, please?’ Even after explaining my background and highlighting the caveat that I lack experience in their particular domain, I am met with the audience’s firm belief that my explanation of a technology will in fact explain what that technology means for them. Consequently, I no longer talk to audiences I am not intimately familiar with, and take a dim view of those who do. Because that’s what such audiences want, what they really, really want, even if it only occurs to them afterwards. And that is the basis on which they will judge you, and your message.

Now that you’re centring your message on what matters to your audience, and what they expect from you, it’s necessary to consider the matter from their perspective. I use the same toolkit here as I do for addressing risks and risk perception. For any given risk event, there are several attributes that must be articulated and conveyed to your audience, but likelihood and consequence are unsuitable for the job. They are academic, coarse measures of abstract concepts. Multiplying the assessed likelihood of an event by the magnitude of its impact creates a number, or a rating, which may give the audience the comforting but false notion that a complex and dynamic situation can be condensed into a rudimentary label, which in turn can be categorised, prioritised, and managed.

Unfortunately, reality is more unforgiving and expansive than risk matrices acknowledge. Risks present a variety of potential consequences, both in the type of event that may occur and in the respective impacts on various stakeholders. The frequency or likelihood of these events also varies across a continuum that bears a multi-dimensional relationship to the spectrum of consequences. Multiplying likelihood by consequence essentially takes a histogram of these values, which vary by stakeholder, and collapses it into a single number, oversimplifying the situation and diluting its value to the decision maker.
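The information loss described above can be made concrete. Here is a minimal sketch, with illustrative numbers of my own (not drawn from this essay), showing how two risks that demand entirely different responses collapse to the same conventional risk-matrix score:

```python
def risk_score(likelihood: float, consequence: float) -> float:
    """Conventional risk-matrix score: likelihood multiplied by consequence."""
    return likelihood * consequence

# A frequent, low-impact risk and a rare, catastrophic one (invented values):
frequent_minor = risk_score(likelihood=0.8, consequence=10)
rare_severe = risk_score(likelihood=0.008, consequence=1000)

# Both collapse to roughly the same single number (about 8.0), even though
# the events, the stakeholders affected, and the sensible responses differ.
print(frequent_minor, rare_severe)
```

Once the distribution of likelihoods and consequences is reduced to one scalar, the decision maker can no longer distinguish a nuisance from a catastrophe, which is precisely the oversimplification the essay warns against.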

Instead, these risk events can and should be framed as if they were – as their impact inevitably will be – physical manifestations, which is to say objects with mass and volume. Their proximity, size, velocity, and density should be given form, such that they take hold in the minds of the audience. Placing large events further away in time allows the audience to more comfortably confront them. Furthermore, by placing these risk events just over the horizon, you’re essentially gaming the assessment, but in the same way that decision makers often do when making malverse risk assessments, by which I mean they talk down the likelihood of a risk with the intention of bringing it within a politically acceptable range.4 Essentially, it’s the same trick that conventional likelihood and consequence risk assessments employ by reducing cognitive load and simplifying everything to a single value. Still, even acknowledging flexibility with the timing/likelihood, by accurately representing the other assessed attributes of the event, you will maintain the integrity of the message and uphold your duty of care.

When I wrote ‘Welcome to the Jungle’, I had recently had a conversation with a senior law enforcement official about the types of crimes then being addressed, and I enquired why banking and finance were not being targeted. My point was that, in terms of harms, the war on drugs does not dent supply, so in the absence of prevention strategies that would, wouldn’t it make more sense to turn these investigative and disruptive resources towards strengthening the integrity of societal institutions and the economy, and towards protecting the majority of citizens?

I was swiftly told that the banks were trustworthy and safe, ‘nothing to see here’ – this was prior to the Australian Banking Royal Commission – to which I riposted with an example of recent criminality discovered at HSBC. Still, it was evident that policing is a business and, like any other business, it wanted to play to its strengths and maintain its value chain. An inexhaustible supply of criminality and media opportunities therefore holds appeal, as well as the key to additional law enforcement resources.

By reframing the role of law enforcement at the culmination of ‘Welcome to the Jungle’, I asked the implicit questions: What role does law enforcement play? What should it focus on? Of course, I placed this dangerous idea over the horizon, decades into the future, towards which the audience had been carefully and incrementally edged. I had sliced the salami into edible mouthfuls, so they wouldn’t choke on the proposition. Like a mountain on the horizon, an event placed this far beyond the present gives the impression that it can be summited. You may notice that I also highlighted types of criminality whose targets were more corporate in nature. Corporate, or white-collar, crime is a current and growing problem of mountainous proportions5; consider Enron, Madoff, and Holmes/Theranos.

There is a reason why fraud persists: it is hard to detect and mind-numbingly boring to investigate, and its optics pale in comparison to those of a good drugs raid or a weapons seizure. It really seems that only massive frauds, or grossly obvious ones, get caught. Indeed, 43% of frauds are detected by tip-offs, with half of these coming from employees.6 However, simply announcing that fraud is a huge problem will not, by itself, cause action. I used to say that strategic foresight only created value when it produced strategic insight for its audience, but now I understand that it needs to result in strategic action as well.

Change takes a long time to manifest. Moreover, in a society composed of organisms and organisations, in an ecosystem that rewards homeostasis, the barriers impeding change are many. The lock we need to pick in order to disinhibit these entrenched actions and behaviours, therefore, is the self-interest of the individual. If I can use this knowledge to get ahead at work, or in life – they will think to themselves – then I might just do that. This gambit works particularly well if you show how the rewards may be achieved. Tell an individual what to do and he or she will resist, but lay out the incentives and procedures within a clear trajectory, and the individual will internalise them.7

The questions that need to be posed should invoke a sense of agency in the audience and provide them with a conceptual framework around which to focus their attention. How will this promote, or benefit, their mission or business? If the story we tell in a scenario becomes a thought that occurs in someone else’s mind, what patterns and associations will consequently emerge, provoking a storm of neural activity surging forth to animate their bodies in the manifestation of action?

Is climate change real? A question that can be asked, and scientifically answered. Does it invoke action? Not by itself. How does climate change impact me? This makes it relevant. How does this impact my job? Now we have their attention. How can I make a job out of this? Now we have their future, and their agency.

When I first drafted ‘Welcome to the Jungle’, I led with my chin on the bold statement that irregular maritime arrivals to Australia would be stopped. It made perfect sense for this to happen, regardless of how it was ultimately achieved, and notwithstanding the various opinions regarding the means or the ends. I chose this event because it appeared to me the easiest to accomplish, and therefore the earliest of the events likely to occur in the scenario. I personally believed it would take a further five to ten years to be realised. Also, this event was not central to my main objective – to reframe law enforcement – so even if it had failed altogether to transpire, this would not have impacted the credibility or influence of the rest of the narrative. In fact, I had hoped to pre-empt it, so that as the scenario circulated, and lingered in people’s minds, they would recall that I had called it first.

Placing controversial topics in the context of works of fiction is another classic manoeuvre. It veils the attribution of blame, accountability, and accusation. For this reason, science fiction has often been used as the setting in which to play out power dynamics, by relocating them so that they occur far, far away. With strategic foresight, however, we don’t have the luxury of space travel, but we do have the ability to play out events beyond the tenure of the current senior decision makers, which is a kind of time travel. Senior management doesn’t believe in a current risk? Place it beyond their likely retirement horizon, term, or posting, but within those of their juniors and successors. Flesh out the issue, especially the confluence of other trends or events likely to make the issue worse, such that addressing the issue sooner would be more beneficial. Now, if the event occurs while the current decision makers are in place, the organisation has already considered the event, even if they disbelieved that it would occur.

This exact situation illustrates the dirty secret behind the use of scenarios at Royal Dutch Shell. Considered the poster child for scenarios and strategic foresight, Shell was credited with using highly detailed scenarios to anticipate, and profit from, the oil crisis of the 1970s. Pierre Wack was asked by Shell to build these scenarios – including the oil crisis – which Shell’s executives then used as a blueprint for action when the scenarios actually manifested.8 The dirty secret is that these scenarios were originally laughed out of Shell’s boardroom. Their only saving grace was that they were considered so ‘out-of-the-box’ that they were adopted instead for the executive training program, in the guise of absurd exercises to stretch creative thinking. The point is that, however bizarre they were perceived to be, the events were nevertheless loaded in advance into the minds of the decision makers.

So, please enjoy all that Circus Bazaar has to offer. You never know when something might come in handy.

Notes:

  1. O’Doherty, J.E., Lebedev, M.A., Ifft, P.J., Zhuang, K.Z., Shokur, S., Bleuler, H., & Nicolelis, M.A.L. 2011. Active tactile exploration using a brain–machine–brain interface. Nature (London), 479(7372), 228-231; Valle, G., Petrini, F. M., Mijovic, P., Mijovic, B., & Raspopovic, S. (2021). A Computer-Brain Interface that Restores Lost Extremities’ Touch and Movement Sensations. Brain-Computer Interface Research, 65-73. doi:10.1007/978-3-030-79287-9_7.
  2. Bogue, R. (2009). Exoskeletons and robotic prosthetics: a review of recent developments. Industrial Robot: an international journal; Connan, M., Sierotowicz, M., Henze, B., Porges, O., Albu-Schaeffer, A,. Roa, M., & Castellini, C. (2021). Learning to teleoperate an upper-limb assistive humanoid robot for bimanual daily-living tasks. Biomedical Physics & Engineering Express.
  3. ‘The future has already arrived. It’s just not evenly distributed yet.’ – William Gibson
  4. March, J. G., & Shapira, Z. (1987). Managerial perspectives on risk and risk taking. Management Science, 33(11), 1404-1418.
  5. Friedrichs, D. O. (2009). Trusted Criminals: White Collar Crime In Contemporary Society (4 ed.). Wadsworth Publishing, p. 50. ISBN 978-0495600824. Citing Kane and Wall (2006), p. 5.
  6. 2020 Report to the Nations. Copyright by the Association of Certified Fraud Examiners, Inc.
  7. Cialdini, R. B. (2007). Influence: The Psychology of Persuasion. New York: Harper Collins.
  8. Wack, P. (1985). Scenarios: Uncharted Waters Ahead. Harvard Business Review, Sept-Oct 1985, 73-89; Wack, P. (1985a). Scenarios: Shooting the Rapids. Harvard Business Review, 6, 139-150.

Filed Under: Public administration & military science Tagged With: General considerations of public administration

Disillusionment of Ten O’Clock

Nov 1, 2022 by Zac Rogers

In his poem ‘Disillusionment of Ten O’Clock’, the great American poet Wallace Stevens hints at a warning for our time, which has inspired the coming together of this inaugural edition of Circus Bazaar magazine. Our contributors show that as scientism engulfs science, data contrives to attenuate knowledge, and technocracy subverts democracy, the dual roles of crisis and fear reverberate through late modern human affairs as iterations of the politics of science and technology. As ever, they function as both drivers and enablers, of causes and consequences alike. All may seem perpetually new, yet an old reflex can be glimpsed in operation.

The houses are haunted
By white night-gowns…
None of them are strange…

Wallace Stevens

The slide into monism is a very human reflex. The reason is, at base, the respite it offers the searching mind in an ungraspable present. However, such sweet relief comes at a cost. Fevered digital dreams of a more predictable world under the pitiless gaze of machines exact a high price in paranoia. With human horizons dimmed by the pandemonium of entropy, the ally of survival, let alone prosperity, will be a plurality of experiments in various ways of being, seeing, valuing, living with, and relating to the world, human and non-human. Monism, be it epistemological, political, financial, or ideological, makes everything more brittle than it needs to be, yet still it dominates the near horizon. In many ways, monism is the lasting legacy of modernity, and its greatest threat.

Only, here and there, an old sailor,
Drunk and asleep in his boots…

Wallace Stevens

For all its flaws, hubris, and appropriations, the vision of the future as confected under technocracy supplies the most basic human need. For the moderns, the future, like history, must have a discernible meaning. Nothing is more anathema to modern humans than the possibility of meaningless suffering. This is what the politics of technology feeds on in our present time. It is nothing if not a surrogate for assured meaning, even if that meaning is something that might resemble a dystopia. 

As the modern age terminates in technological nihilism, it is by now beyond argument that modernity was undermined by its own successes. For humans, to step away from modern ways of thinking and being seems unimaginable now. But this is a delusion. As the technopolitics of late modernity close off human horizons, a shoreline of possibility more vast than previously imagined emerges precisely as the veil of humanism is lifted. This is what lives on in Stevens’ wonderful poetry. Human horizons spread outward from the human world into the vastness and richness of the non-human things from which the world is assembled. This includes, as it always has, the machines.

Catches tigers
In red weather.

Wallace Stevens

Filed Under: American literature in English Tagged With: American poetry in English

R.A. the Rugged Man: Dragon Fire

Nov 1, 2022 by Shane Alexander Caldwell

Dragon Fire is a concept film produced by the Circus Bazaar Company and distributed by Nature Sounds Entertainment.

Visit the official release: Youtube
Internet Movie Database: IMDb

Official credits
Produced by Shane Alexander Caldwell
Directed by Linn Marie Christensen & Shane Alexander Caldwell
Written by Joe Lynch & Shane Alexander Caldwell
Director of photography | Arthur Woo
Production designer Linn Marie Christensen
Edited by R.A. the Rugged Man
Composed by Shroom
Movement direction by David Greeves
Associate producers Linn Marie Christensen & Colin Hagen Åkerland

Special effects by Doug Sakmann
Casting director | R.A. the Rugged Man
First assistant director | Philip Thomas Pedersen
Production assistant | Anthony Curry
Set designers | Morgan Shay, Colin Hagen Åkerland, Simbal Karma & Julie Filion
Costume designer | Julie Filion
Covid managers | Steven Haifawi & Faith Michaela Haifawi

Cast
Dragon Fire | Logan Marshall-Green
The Big Boss | Peter Greene
Kara Fire | Aase-Marie Sandberg El-Sayed
Ghostface Killah | Ghostface Killah
Masta Killa | Masta Killa
R.A. the Rugged Man | R.A. the Rugged Man
Kool G Rap | Kool G Rap
Xx3eme | Xx3eme
Cameos by AFRO & Eric Kelly

The Donburi House Ninjas
Brandon Kazen-Maddox
Cara Diaz
Dwayne Brown
Maurice Dowell

De Figuris Veneris
Sasha Ioselioni
Natasha Phoenix King
Sylvana Tapia
Erika Rodgers
Cara Diaz
Brandon Kazen-Maddox
Dwayne Brown
Maurice Dowell

The Big Boss Thugs
Steven Haifawi
Michael A McGrath
Danny Diablo
Joe Fatal
Danny McGrory
Johnny Peraza

Image Credit // The Circus Bazaar Company

New York Choy Lay Fut Lion Dance
Chinese Lion Coordinators | Wilsen Ng & Frank Tang
Operator 1 | Simon Wu
Operator 2 | Elijah Yong
Operator 3 | Kaitlin Cho
Operator 4 | Jesse Ng
Operator 5 | Sheck Cho

Stunt Department
Stunt Coordinator | Otto Tangstad
Stunt Performers | Niklas Brennsund, Mathias Ramfelt, Martin Lax & Colin Hagen Åkerland
Special Effects | Teo Viksjø

Production Department
Gaffer | Bart Cortright
Focus Puller | Kate Montgomery
Hair & Makeup | Kelsey Lehman
Vehicle Supplier | Anthony Aveni
Talent Supervisor | Valerie Fristachi & Anthony Curry
Equipment Rental | Ambrose Eng
Caterer & Craft Services | King of Slice NY

Post Production
Sound Design by This Old Man & Mari Åse Hajem
Mixed by Chris Conway
Colour by Shanon Moratti with the Circus Bazaar Colour Tablet
VFX Editor | Sarp Karaer
Art & Illustrations by Heath Riggs
Titles Design by Shane Alexander Caldwell
Assistant Graphics | Maria Borges
Technical Supervisor | Shanon Moratti
Assistant Editor | Linn Marie Christensen
Translation | Adrienne Beishan Seet

2nd Unit
Producer | Shane Alexander Caldwell
Directors | Linn Marie Christensen & Shane Alexander Caldwell
Associate Producers | Linn Marie Christensen & Colin Hagen Åkerland
DOP | Kristoffer Nylund Grindheim
Gaffer | Christer Smital
Best Boy | Henrik Efskin
SPFX | Inger Lina Johansen Thorjørnsen & Julie Filion

Legal Supervisors
Bull & Co Advokatfirma AS
Bing Hodneland
Cowan, DeBaets, Abrahams & Sheppard LLP

Shooting Locations
Linco Printing LLC
Slic Studios
Greenhouse Oslo
Thomas Heftyes Gate Airbnb

Finalisation
The Circus Bazaar Company
Nathan Andrews

Copyright ©
Nature Sounds Entertainment
The Circus Bazaar Company AS/Pty Ltd

Filed Under: Photography, computer art, film, video Tagged With: Cinematography and Videography

Circus Bazaar Magazine as a vocation

Nov 1, 2022 by Shane Alexander Caldwell

Ladies and gentlemen, children of all ages! Thank you for having blessed the universally acclaimed and much anticipated first edition of Circus Bazaar magazine with your sought-after presence. For this, I will, against all good humility, take on the role of your Ringmaster. I shall do this, for otherwise I should stand guilty of the most terrible of literary crimes against your good self, the reader, and myself as the self-proclaimed editor. For what is an editor, if not the Ringmaster of a circus by other means? After all, this most fantabulous of publications, being by its very nature a great metaphor for knowledge, will require a guide. I shall then, henceforth, be your most glorified of buffoons.

But first, and in the interest of proper acquaintance and attention to hyperbole, I would like to give you a short history of Circus Bazaar. We sprang from the naive digital soil of 2013, as a platform for various forms of political activism and participation in political debate (later to be known as the Freak Show). However, it was with the tears of a clown, and the broken jaw of a journeyman boxer, that we realised that although we were giving many a welcome soul a ticket, the money box remained empty. It was here that the concept of Circus Bazaar as a vocation was born. The plan was simple: Open the Big Top to all who wanted to share in what we were doing. To quote many a prize fighter, ‘Anyone who wants it can get it’.

We were performing research, analysis, film production and documentary; we were growing ever more creative in our ambitions. Why not share these things with others? Why not write a political opera for the international anti-nuclear weapons movement, or produce music videos for some of the most famous acts in the world? Why not take our unquenchable fascination for critical socio-political issues and help to educate the public and government on some of the most pressing concerns of our time?

My dear audience, we have toiled under the sun in the best traditions of the beast, with the sincere hope of finding our way home to our circus tent, and with the great wish to bring back an audience to experience what we are and have become. But what is that? What have we become? All defies explanation, except for one thing. We are the unashamed secret wish of any maniacally driven publication to share the great curtain call of life and ideas that matter with the people. We wish to be you, the reader. Your curiosities, your questions, your dreams, your passions, your anger and your position on the world stage, however big or small it may be.

An original production by the Circus Bazaar Company

We are the Ringmaster who organises your thoughts, and the Marionettist who pulls the strings of your philosophy. We are the Fortune Teller that is your faith, the Acrobat that flips your language, and the Freak Show that is your politics. We are the Animal Trainer that is your will to conquer, and the Magic Acts that are the technology you use. We are the Buffoonery that is your entertainment, the Knife Thrower that is your favourite polemicist, and the herniated Contortionist that is the history we all share on this ever-changing earth.

Penned from the crooked timber of humanity itself.
We are Circus Bazaar.
Welcome to the inaugural edition of our magazine – in print!

The Ringmaster

Without further ado, I present to you, ‘Catching Tigers in Red Weather: The Politics of Science and Technology in a New Century of Fear’. A blasphemous offering from guest editor Zac Rogers, the great deconstructor of late-modern myth in a new and dangerous digital world, within which we are all fated to participate.

“When the animal trainer learns acts of magic and the freak show awaits, the ghastly mistake of history awakes.”

The Ringmaster

Filed Under: Magazines, journals & serials Tagged With: Serials in English


© 2025 Circus Bazaar Magazine. All rights reserved.
