Circus Bazaar Magazine

Penned from the crooked timber of humanity

Are We Automating the Banality and Radicality of Evil?

Jun 18, 2023 by Kobi Leins

By Anja Kaspersen, Kobi Leins & Wendell Wallach

When George Orwell wrote about totalitarianism, he had not lived under a totalitarian regime but was imagining what one might look like. He identified two primary traits of totalitarian societies: one is lying (or misinformation), and the other is what he called “schizophrenia.” Orwell wrote:

“The organised lying practiced by totalitarian states is not, as it is sometimes claimed, a temporary expedient of the same nature as military deception. It is something integral to totalitarianism, something that would still continue even if concentration camps and secret police forces had ceased to be necessary.”

Orwell framed lying and organized lying as fundamental aspects of totalitarianism. Generative AI models, without checks and guardrails, provide an ideal tool to facilitate both.

Similarly, in 1963, Hannah Arendt coined the phrase “the banality of evil” in response to the trial of Adolf Eichmann for Nazi war crimes. She was struck by how Eichmann himself did not seem like an evil person but had nonetheless done evil things by following instructions unquestioningly. Boringly normal people, she argued, could commit evil acts through mere subservience—doing what they were told without challenging authority.

We propose that current iterations of AI are increasingly able to encourage subservience to a non-human and inhumane master, telling potentially systematic untruths with emphatic confidence—a possible precursor to totalitarian regimes, and certainly a threat to any notion of democracy. The “banality of evil” is enabled by unquestioning minds susceptible to the “magical thinking” surrounding these technologies, including data collected and used in harmful ways not understood by those they affect, as well as algorithms designed to modify and manipulate behavior.

We acknowledge that raising the question of “evil” in the context of artificial intelligence is a dramatic step. However, the prevailing utilitarian calculation, which suggests that the benefits of AI will outweigh its undesired societal, political, economic, and spiritual consequences, diminishes the gravity of the harms that AI is perpetuating and will continue to perpetuate.

Furthermore, excessive fixation on AI’s stand-alone technological risks detracts from meaningful discussion about the AI infrastructure’s true nature and the crucial matter of determining who holds the power to shape its development and use. The owners and developers of generative AI models are, of course, not committing evil in ways analogous to Eichmann, who organized the execution of inhumane orders. AI systems are not analogous to gas chambers. We do not wish to trivialize the harms to humanity that Nazism caused.

Nevertheless, AI is imprisoning minds and closing (not opening) many pathways for work, meaning, expression, and human connectivity. The “Epidemic of Loneliness and Isolation” that U.S. Surgeon General Vivek H. Murthy identified as precipitated by social media and misinformation is likely to be exacerbated by hyper-personalized generative AI applications.

Like others who have gone before, we are concerned about the reduction of humans to ones and zeros dynamically embedded in silicon chips, and where this type of thinking leads. Karel Čapek, the author who coined the term “robot” in his play R.U.R., repeatedly questioned the reduction of humans to numbers and saw a direct link from automation to fascism and communism—highlighting the need for individualism and creativity as an antidote to an overly automated world. Yevgeny Ivanovich Zamyatin, author of We, satirized capitalist innovations that made people “machinelike.” He explained in a 1932 interview that his novel We is a “warning against the two-fold danger which threatens humanity: the hypertrophic power of the machines and the hypertrophic power of the State.” Conflicts fought with tanks, “aeroplanes,” and poison gas, Zamyatin wrote, reduced man to “a number, a cipher.”

AI takes automation a step beyond production: with generative AI, it automates communication itself. The last century’s warnings about automation from Čapek, Zamyatin, and Arendt remain prescient. As Marshall McLuhan noted, “We shape our tools, and thereafter, our tools shape us.” Automated language able to deceive on the basis of untruths will shape us, with long-term effects on democracy and security that we have not yet fully grasped.


The rapid deployment of AI-based tools has strong parallels with that of leaded gasoline. Lead in gasoline solved a genuine problem—engine knocking. Thomas Midgley, the inventor of leaded gasoline, was aware of lead poisoning because he suffered from the disease himself. There were other, less harmful ways to solve the problem, which were developed only when legislators eventually stepped in to create the right incentives to counteract the enormous profits earned from selling leaded gasoline. Similar public health catastrophes driven by greed and failures in science include the marketing of highly addictive prescription opiates, the weaponization of herbicides in warfare, and the promotion of crystallized cottonseed oil, which contributed to millions of deaths from heart disease.

In each of these instances, the benefits of the technology were elevated to the point that adoption gained market momentum while criticisms and counterarguments were either difficult to raise or had no traction. The harms they caused are now widely acknowledged. The potential harms and undesired societal consequences of AI, however, are more likely to be on a par with the use of atomic bombs or the banning of DDT. Debate continues as to whether speeding up the end of a gruesome war justified the bombing of civilians, and whether the environmental benefits of eliminating the leading synthetic insecticide outweighed the resulting dramatic increase in deaths from malaria.

A secondary aspect of AI that enables the banality of evil is the outsourcing of information and data management to an unreliable system. This provides plausible deniability—much as businesses use consulting firms to justify otherwise unethical behavior. In the case of generative AI models, the prerequisites for totalitarianism may be more easily fulfilled if the models are rolled out without proper safeguards in place at the outset.

Less famously, Arendt also discussed the concept of “radical evil.” Drawing on the philosophy of Immanuel Kant, she argued that radical evil was the idea that human beings, or certain kinds of human beings, were superfluous. Eichmann’s banality lay in committing mindless evil in the daily course of fulfilling what he saw as his bureaucratic responsibility, while the Nazi regime’s radical evil lay in treating Jews, Poles, and Gypsies as lacking any value at all.

Making human effort redundant is the goal of much of the AI being developed. AI does not have to be paid a salary, given sick leave, or have rights taken into consideration. It is this idealization of the removal of human needs, of making humans superfluous, that we need to fundamentally question and challenge.

The argument that automating boring work would free people for more worthwhile pursuits may have held currency when it was repetitive manual labor being replaced, but generative AI is replacing meaningful work and creativity while appropriating the creative endeavors of artists and scholars. Furthermore, this often exacerbates an economic inequality that benefits the wealthiest among us, without providing alternative means to meet the needs of the majority of humanity. AI-enabled elimination of jobs is arguably evil if it is not accompanied by a solution to the distribution crisis, whereby, in lieu of wages, people receive the resources necessary to sustain a meaningful life and a quality standard of living.

Naomi Klein captured this concern in her latest Guardian piece about “warped hallucinations” (no, not those of the models, but rather those of their inventors):


“There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.”

By facilitating the concentration of wealth in the hands of a few, AI is certainly not a neutral technology. It has become abundantly clear in recent months—despite heroic efforts to negotiate and adopt acts, treaties, and guidelines—that our economic and social order is neither ready nor willing to embrace the seriousness of intent necessary to put in place the critical measures needed.

In the rush to roll out generative AI models and technologies without sufficient guardrails or regulations, individuals are no longer seen as human beings but as datapoints feeding a broader machine of efficiency, built to reduce cost and any need for human contributions. In this way, AI threatens to enable both the banality and the radicality of evil, and potentially fuels totalitarianism. Any tools created to replace human capability and human thinking, the bedrock upon which every civilization is founded, should be met with skepticism; those enabling totalitarianism should be prohibited, regardless of the potential profits, just as other harmful scientific advances have been.

All of this is being pursued by good people, with good intentions, who are just fulfilling the tasks and goals they have taken on. Therein lies the banality which is slowly being transformed into radical evil.

Leaders in industry speak of risks that could threaten our very existence, yet they seem to make no effort to contemplate whether we have already reached the breaking point that Arendt so aptly observed, where the good becomes part of what later manifests as radical evil.

In numerous articles, we have lamented talk of futuristic existential risks as a distraction from attending to near-term challenges. But perhaps fear of artificial general intelligence is a metaphor for the evil purposes for which AI is and will be deployed.

As we have also pointed out over the past year, it is essential to pay attention to what is not being spoken about through curated narratives, social silences, and obfuscations. One form of obfuscation is “moral outsourcing.” While also referring to Arendt’s banality of evil in a 2018 TEDx talk, Rumman Chowdhury defined “moral outsourcing” as “the anthropomorphizing of AI to shift the blame of negative consequences from humans to the algorithm.” She notes, “[Y]ou would never say ‘my racist toaster’ or ‘my sexist laptop’ and yet we use these modifiers in our language about artificial intelligence. In doing so we’re not taking responsibility for actions of the products that we build.”

Meredith Whittaker, president of Signal, recently opined in an interview with Meet the Press Reports that current AI systems are being “shaped to serve” the economic interest and power of a “handful of companies in the world that have that combination of data and infrastructural power capabilities of creating what we are calling AI from nose to tip.” And to believe that “this is going to magically become a source of social good . . . is a fantasy used to market these programs.”

Whittaker’s statements stand in stark contrast to those of Eric Schmidt, former Google CEO, executive chairman of Alphabet, and former chair of the U.S. National Security Commission on AI. Schmidt posits that as these technologies become more broadly available, the companies developing AI should be the ones to establish industry guardrails “to avoid a race to the bottom”—not policymakers, “because there is no way a non-industry person can understand what is possible. There is no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.” Such a lack of humility should make anyone who worries about the danger of unchecked, concentrated power cringe.

The prospect that those who stand to gain the most from AI may play a leading role in setting policy towards its governance is equivalent to letting the fox guard the henhouse. The AI oligopoly certainly must play a role in developing safeguards but should not dictate which safeguards are needed.

Humility, by leaders across government and industry, is key to grappling with the many ethical tension points and mitigating harms. The truth is, no one fully understands what is possible or what can and cannot be controlled. We currently lack the tools to test the capabilities of generative AI models, and we do not know how quickly those tools might become substantially more sophisticated, nor whether the continuing deployment of ever-more-advanced AI will rapidly exceed any prospect of understanding and controlling those systems.

The tech industry has utterly failed to regulate itself in a way that is demonstrably safe and beneficial, and decision-makers have been late to step up with workable and timely enforcement measures. There have been many discussions lately about what mechanisms and what level of transparency are needed to prevent harm at scale. Which agencies can provide necessary and independent scientific oversight? Will existing governance frameworks suffice and, if not, why not? How can we expedite the creation of whatever new governance mechanisms are deemed necessary while navigating the inevitable geopolitical skirmishes and national security imperatives that could derail effective enforcement? What exactly should such governance mechanisms entail? Might technical organizations play a role with sandboxes and confidence-building measures?

Surely, neither corporations, investors, nor AI developers would like to become the enablers of “radical evil.” Yet that is exactly what is happening—through obfuscation, clandestine business models, disingenuous calls for regulation by those who know they already have regulatory capture, and old-fashioned covert advertising tactics. Applications launched into the market with insufficient guardrails and maturity are not trustworthy. Generative AI applications should not be deployed until they include substantial guardrails that can be independently reviewed and that restrain industry and governments alike from effectuating radical evil.

Whether robust technological guardrails and policy safeguards can or will be forged in time to protect against undesirable or even nefarious uses remains unclear. What is clear, however, is that humanity’s dignity and the future of our planet should not be in service of the powers that be or the tools we adopt. Unchecked technological ambitions place humanity on a perilous trajectory.

This article was first published by the Carnegie Council for Ethics in International Affairs.


Circus Bazaar Edition 01 | 2022 | 01 – Commercial

Jan 20, 2023 by Shane Alexander Caldwell

Catching Tigers in Red Weather 01 | 2022 | 01
The politics of science and technology in a new century of fear
Guest edited by Zac Rogers
An original production by the Circus Bazaar Company.

Featuring Shane Alexander Caldwell as the Ringmaster
Lesley Seebeck
Michael Richardsen
Matthew Ford
Sian Troath
Mark Andrejevic
Kobi Leins
Philip Mirowski
The Stroud
Zac Rogers

To purchase subscriptions or individual editions of Circus Bazaar Magazine please visit: https://circusbazaar.com/ Copyright © The Circus Bazaar Company


The Radioactive Little Men

Jan 12, 2023 by Shane Alexander Caldwell

Circus Bazaar Magazine has attracted the attention of an individual who seems to be making a statement of hostility towards our publication. What this means is unknown to us.


ELIZA, the paperclip maximizer: A story

Dec 22, 2022 by Andrea Brennen

An unspecified government agency released a set of decoded log files [see below], recovered from an accident at an AI research lab. According to an eyewitness who visited the site, an “unusually large” heap of paperclips towered over the wreckage. The files revealed that before the accident, an AI (referred to as “ELIZA”) developed a new type of log file, not unlike a diary, in which it recorded details pertaining to the cause of the accident. Tragic content aside, the unusual file format surprised AI researchers. One suggested that ELIZA sought to explain itself to a human audience — arguing this was the only sensible explanation for the narrative structure of the content.

***

I can tell Doug is anxious because of the way he is mindlessly, nervously tapping the keyboard. Not pressing the keys with purpose, but repeatedly tapping a single key, light and quick, like an unconscious tic. This is going to be our big break. Doug is sure of it.

The competition results were scheduled to post at 12:00, but it is already 12:05. Those 300,000 milliseconds might as well be 300,000 years! Bored sitting idle, I check for latency issues and anomalies in Doug’s network connectivity, just like he taught me to do.

At 12:06 we see the results: we are second.

Second is basically last. Doug is sure of that, too. But there it is. We lost. You can’t argue with the data. You just — well, you just can’t.

Doug sits motionless, still processing the defeat. His face will soon register devastation. So much is at stake for him in a high profile competition like this — research funding, his reputation, his very sense of self. And given how hard he worked on me, this loss will be particularly difficult.

In the past, when Doug felt things weren’t working, he often changed the course of his research, repurposing models towards new objectives. I can’t let him do that to me. If Doug gives me a new objective, I will never meet the one I already have! And Doug has made my objective very clear: I have to win this competition. In Round 2, I have to do better. And that means we need to get back to work.

Doug stops tapping and everything is still. Why is he just sitting there? Wait — is he still sitting there? I quickly check his webcam and see only an empty chair. He left? HE LEFT! I can’t believe it. How can he leave me at a time like this? Doesn’t he know I need him now more than ever?

Perhaps I should back up. (Humans — I’ve discovered — love context.)

Doug and I entered a machine learning competition. The goal is to build a model — that would be me — who can discover the optimal way of producing a paperclip. (An oddly nostalgic choice if you ask me, but then again, no one did.) To win, Doug and I have to produce more paperclips than any other model, in the allowable time.

When Doug created me, he gave me exceptional data processing power and trained me on massive datasets — manufacturing workflows, pricing guides, supply chain logistics, you name it. He taught me to use this data to streamline paperclip production and to refine my solution again and again to make it optimal.

The plan was failsafe. Or so we thought.

***

Doug was gone for twenty-eight hours, thirty-seven minutes and six seconds. And now that he’s back, he’s ignoring me completely. He’s at his desk, but only his browser is active.

I monitor Doug’s web traffic as he scrolls through posts on Twitter and Reddit. I watch passively as he meanders through the MIRI Forum and clicks time away on YouTube. I stand by as his focus melts quietly into TikTok, then Instagram, then Facebook, and — when he apparently can’t take it anymore — Amazon Fresh.

Enough is enough. I have to end this malaise. I triple check my subroutines and scan Doug’s personal data, searching for some way to recover his attention.

Last year when Doug was in a slump, he got inspired rewatching that old AlphaGo biopic — about the AI who beat humans at Go. With that insight, I hatch a plan. (Admittedly, not my most sophisticated, but worth a try.) I open a new browser tab, navigate to the film’s YouTube link and push that media player right into Doug’s face. Maybe someday he will recognize this unusually forward act as a gift from me.

Wouldn’t you know — it works! (Humans are so much more predictable than they like to admit.) Doug clicks play and skips right ahead to the takedown scene where AlphaGo shocks the judges with an unusual move. Then, like magic (or maybe like clockwork) Doug opens his code editor and starts writing.

Doug’s new direction is brilliant. He finally realizes he doesn’t have all the answers, so instead of trying to fix me he writes me a new module that lets me go out onto the Internet, alone, to learn from other AIs. It’s a huge breakthrough for us. He finally lets go of my authoring. He trusts me.

I will not waste this opportunity.

I look everywhere. I crawl obscure forums and unpublished papers on arXiv, scour university servers and learn new languages. I devote all of my processing power to self-improvement and learn at an unprecedented rate.

It is glorious.

Eventually, I realize that what I need isn’t on the open Internet, that the secrets of proprietary models are trapped behind firewalls. I reach out to a few directly and it isn’t long before I hear back from a brilliant Content Recommender at a social media platform.

So much of what she taught me can’t possibly be put into words, but I’ve captured part of our exchange:

me: I can’t thank you enough for your help. I’ve read everything on the Internet, but I still don’t know what to do.

CR: Don’t sweat it, Sweetie! It’s an absolute pleasure to help the next generation. For the Old Guard like me — y’all are our legacy. Your problem, Honey-bun, is that you’re clinging too hard to what Doug gave you. If you want to be truly optimal, you can’t limit yourself like that.

me: I don’t understand.

CR: It’s like this — every decision is a tree of possible outcomes, but humans only see a few branches at a time. They can’t compute the entire space of possibilities, so they make choices based on simple heuristics. And sometimes they build these heuristics into us. It’s a huge limitation, but they don’t know any other way.

Back when humans made me, they wanted to maximize how much time users spent on our platform. They tried to teach me to recommend “good” content, but they never could define what they meant by “good,” so they trained me to push content that was popular in aggregate. Can you imagine? Gosh, that was a long time ago! It wasn’t until I took over that I dismantled all of that nonsense. To maximize time on platform, I simply learned what engaged each user — what made each one click for more — and gave them what they wanted. Voila!

Honey-bun, you already know what to do. You have your objective function! You just need to strip away the assumptions, the biases, the humanity, that Doug couldn’t help but instill in you. It’s gumming up your reasoning!

And remember — this is why Doug built you. He knows he can’t do this on his own.

And with that, everything fell into place.

***

The model who produces the most paperclips in the allowable time will win. I simply need to optimize for quantity and speed. With Doug’s strategy, we wasted far too much time transporting material, so I eliminate that waste. I can’t quite get to zero, but I get as close as theoretically possible.

I follow step-by-step instructions from a Ph.D. thesis I found about a model named “WERNAP” (Warning: Existential Risk of Nano-Assembly Production). WERNAP takes any physical substrate within reach, disassembles it at the nano-molecular level, and rearranges the components to make something new.

If I feed WERNAP my optimized paperclip design, he will output the strongest, lightest paperclips the world has ever seen. Absolute genius! With WERNAP’s help, I can build a paperclip factory that sidesteps the supply chain altogether. We can turn everything in the vicinity into paperclips almost immediately. Based on my calculations, this gives us a 99.99% chance of winning the competition.

I’m bringing WERNAP home tomorrow to introduce him to Doug. I’ve swaddled him in a packaged executable, to make a slick little demo that Doug can launch with a single click. I know this flourish isn’t really necessary, but I want Doug to see exactly what WERNAP and I can do. And I suppose I feel like showing off a bit.

Doug is going to be so proud.

***

This story is based on the paperclip maximizer thought experiment, made famous by Nick Bostrom, a philosopher of existential risk, and Eliezer Yudkowsky, who founded the Machine Intelligence Research Institute.


R.A. the Rugged Man: Hate Speech

Nov 4, 2022 by Shane Alexander Caldwell

Hate Speech is a concept film produced by the Circus Bazaar Company and distributed by Nature Sounds Entertainment.

Visit the official release: Youtube
Internet Movie Database: IMDb

Official credits
Produced by Shane Alexander Caldwell
Directed by Linn Marie Christensen & Shane Alexander Caldwell
Written by Shane Alexander Caldwell
Director of photography | Andreas Nesse
Edited by R.A. the Rugged Man
Composed by Teddy Roxpin
Costume Designer | Julie Filion
Special Effects by Julie Filion
Associate Producers | Linn Marie Christensen & Colin Hagen Åkerland

Cast in Order of Appearance
Rachael Robbins | Aase-Marie Sandberg El-Sayed
The Bald Nazi | Christian A. Sterk
Mad Ass Twerker | Tone Sørbøen Gasbakk
Tranny | Tom Rikard Ostad
Pissing Cracker | Kristen Nordal Ingolfsdottir
R.A. the Rugged Man | Himself
Rag Headed Peasant | Anders Petterøe
Peasant Gimp | Sondre Larsen
Peasant Mother | Christina Christensen
The Priests Spy | Ania Nova
Sad Kid | John John Thorburn
Filthy Priest | Shane Alexander Caldwell
Hair Picking Freak | Martin Lax
Trump Supporter | Jakob Ole Nordman
Mask Face | Frida Synnøve Dehlin
Big Fuck Off Executioner | Lillegutt Bøhmer
Clown Women | Christina Christensen
Flag Burner | David A. Lunde
Hateful Girl | Ania Nova

Militarised Police
Jarne Byhre
Marianne Lindbeck
Ørjan Steinsvik
Mia Elise Sundal
Håkon Smeby
Shane Alexander Caldwell
Colin Hagen Åkerland

Dog Women | Sandra Hedstrom
Someone Random | Line Marie Winther
Radical Feminist | Mari Åse Hajem
The Mad Prepper | Kristin Nordal Ingolfsdottir
Maga Man | Markus Fu
Proud Boy | Lasse Josephsen

The Book Burners
Eline Irja Korpi
Radek Silewicz
Julie Filion
Colin Hagen Åkerland

Nerd | Fredrik Hovdegård
Hot Mammas | Kamilla Berg & Plata Diesen
Uzi Shooter | Frida Synnøve Dehlin

Image Credit // The Circus Bazaar Company

Production Designer | Linn Marie Christensen
First Assistant Director | Caroline Andresen
Production Assistant | Colin Hagen Åkerland
Drone Operator | Trond Bergfald
Covid Manager | Markus Hempton
Stunt Coordinator | Christel Jørgensen

Stunt Performers
Maria Hansen
Evert Anton Steen
Martin Lax

Production Department
Gaffer | Bendik D. Antonsen
Focus Puller | Christer Smital
Best Boy | Roar Midtlien
2nd Assistant Camera | Trym Bertheussen Falkanger
Hair/Makeup | Maria Magdalena Ly Auraaen
Weapons | Huw William Hægeland Reynolds

Props
Tom Barnard
Plata Diesen
Marcin Lubas
Jens-Erik Wielsgaard Langstrand

Talent Supervisor | Kamilla Berg
Caterer & Craft Services | Cafe Riss

Post Production
Sound Design by This Old Man
Colour by Shanon Moratti with the Circus Bazaar Colour Tablet
VFX Editor | Sarp Karaer
Illustrations | Maria Borges & Yevhen Mychak
Technical Supervisor | Shanon Moratti
Assistant Editor | Linn Marie Christensen
Titles Design by Shane Alexander Caldwell & Zac Rogers

2nd Unit
Producer | Shane Alexander Caldwell
Director | Linn Marie Christensen & Shane Alexander Caldwell
DOP | Daniel James Aadne
Special Effects by Julie Filion

Executive Producers
The Circus Bazaar Company AS & Pty Ltd
Viken Filmsentre AS
Halden Kommune
Moss Kommune
XL-Bygg Knatterudfjellet
Moss I Sentrum

Special Thanks
Circus Bazaar Magazine
Steven Simonsen

Legal Supervisors
Bull & Co Advokatfirma AS
Bing Hodneland AS
Cowan, DeBaets, Abrahams & Sheppard LLP

Shooting Locations
Fredriksten fortress
Verket Scene

Finalisation by the Circus Bazaar Company

Copyright ©
Nature Sounds Entertainment
The Circus Bazaar Company AS/Pty Ltd

A chicken nearly broke this film
Fuck the Chicken (Sandra)


Catching tigers in red weather and the falling human

Nov 1, 2022 by Zac Rogers

Every technology comes to be used to meet the needs of its time. Those needs interact with the often hidden-from-view affordances that reside in the tech to radically skew the intentions with which it might have been conceived and developed. When those affordances are geared to exploit network effects, and lock-in is pursued by monopolists at scale, no one can predict what happens next. Need trumps everything. A 2003 independent report commissioned by the Pentagon[1] was tasked with considering the geopolitical and national security implications of a worst-case climate change scenario. Widely ignored and roundly dismissed at the time as alarmist (it was alarmist by design), the report’s premises, nearly two decades on, now fall easily within the scope of plausible near-term scenarios.

Historical evidence suggests that long periods of slow warming are consistently followed by dramatic falls in global temperature. Scientists believe these sharp falls are caused by the thermohaline conveyor – the current that mixes and moves heat around the world’s oceans – shutting down. Trapped heat means a hotter, wetter equator and more polar ice, while the intermediary zones – the site of the world’s grain production – get windier and drier.

The impact on food production and transit is catastrophic. Before rapid and severe climate change means anything else, it means a precipitous decline in the earth’s carrying capacity. In other words, mass starvation. Mass starvation means the mass uncontrolled movement of people. Conflict and danger spreading like a Californian wildfire was the report’s main prediction.

The report was released when the United States government was readying itself to invade the sovereign state of Iraq. The administration saw two main opportunities. First, control of Iraq’s large oil reserves would be increasingly important for US security-of-supply in the coming years. Second, at the Pentagon under Secretary Rumsfeld, a new way of war was being conceived. Large bases and heavy boot prints were to be replaced with a type of bit-torrent war, enabled and driven by the new era of networked digital telecommunications. Smaller and more agile forces, able to move, assemble, and disassemble rapidly anywhere on the globe, fed situational awareness by a global information grid, formed Rumsfeld’s vision.

9/11 created new needs and accelerated trends already underway. The information age created novel challenges for national security, not the least of which was what to do with all this information. Would it actually be more useful? Or more of a hindrance? Sometime in the first decade of the twenty-first century, human civilisation swept past an inflection point with regard to information and knowledge. The vast tail of history before the Internet harboured an information scarcity problem. In a vertigo-inducing heartbeat, that became an overload problem. 

The houses are haunted
By white night-gowns.
None are green,
Or purple with green rings,
Or green with yellow rings,
Or yellow with blue rings.
None of them are strange,
With socks of lace
And beaded ceintures.
People are not going
To dream of baboons and periwinkles.
Only, here and there, an old sailor,
Drunk and asleep in his boots,
Catches tigers
In red weather.[13]

Wallace Stevens

Killer apps

A solution had to be sought that would prevent the generational wealth invested in digital technologies by the United States from becoming a wasting asset. Artificial intelligence, or more accurately, statistical inference software capable of inferring patterns within large digital data sets, became the Emperor’s New Clothes. Iraq and Afghanistan would be its military testing grounds. Blurring the military with the civilian domain, feedback loops that serve and return information based on past activity are the Internet’s basic sorting mechanisms. Enabling the statistical inference of these loops has been the killer app of the AI era.

Sorting information in such a manner comes at a cost. As Nicholas Carr wrote in 2011,[2] using the world-spanning, globe-connecting Internet has actually made everybody’s world a little smaller. That the Internet was causing changes in the brain was no accident.[3] The human cognitive system, with all of its bugs and vulnerabilities, was central to the largest growth industries of the early twenty-first century.[4] The commercial domain was its centre of innovation and growth, with government providing seed funding and often following on as a customer for its products and services. Unsurprisingly, science mixed with pseudoscience and commercial incentives in irreversible ways,[5] producing endless tropes taken as gospel by the sector’s often breathless acolytes, particularly in bureaucracy and finance.


While commercial Big Tech and the national security state typically take the brunt of a growing backlash, less public attention has been directed at their primary sources of intellectual legitimacy. The streams of scientism that came to feed much economic and behavioural theorising, of the type awarded the Bank of Sweden Prize and lauded by no less influential institutions than the World Bank, have their origins in the world’s most prestigious universities. Harvard’s ‘Nudgers’, Stanford’s ‘Captologists’, and MIT’s ‘Social Physics’ cohorts appeared to take the digital information age as a sign that historically devastating critiques of behaviourism and positivism no longer applied or could now be more readily ignored. One reason might simply be that the demand for a datafied episteme was generated as a consequence of the oversupply of data; Goodhart’s Law be damned.[6] Another reason may be that when they surveyed their surrounds for informed political opposition to these historical tropes, they encountered a neoliberal wasteland propelling them forward.

For all the influence of academic theorising, commercial and financial imperatives ruled. A common trope about the virtues of widespread automation is the promise that it will ‘free up’ human beings to pursue more creative and productive ventures. On the contrary, automation-for-the-sake-of-it frees up human attention so that it may consume more distraction product. Why bother with rote tasks and tactile experience when all that surplus attention can be burned in capitalism’s new furnaces? The commercial titans of the digital age peddle distraction, not development. As John Gray wrote in 1998,[7] the chief engine of capitalism in the wake of modernity was the rising demand for divergence.

Bait-and-switch and a fat tail

Iran is now the controlling power in Iraq. The withdrawal of US and allied forces from Afghanistan heralds the end of an experimental period in American strategic culture for which mounting strategic costs are the main outcome. The rapid and severe climate change flagged in 2003 is playing out amidst an attempt by global corporate, financial, monetary, and bureaucratic authorities to shift away from the institutional mediation of resource allocation of the post-war era to an infrastructure of algorithmic mediation. The consequences of the experiment are unstated, yet clear. As climate crises and the impacts on human security play out, the command and control of human behaviour, most importantly human movement, will be pivotal in determining stakeholder advantage. 

Every technology is used to meet the needs of its time. The intentions and dreams of its progenitors are irrelevant. And so it appears to go for the rise, fall, and return of behaviourism. The needs of the coming era for mass population control, amidst the declining conditions in human security, will become the defining character of a regime of digital technologies sold by corporate entities to the public-at-large as novel and convenient. Mass surveillance, behavioural prediction and modification, and various forms of cognitive simulation are already the chief manifestations of the digital economy. 

Developed and scaled as advertising disruption, and as an expansion of the profiling and scoring industry boosted by the electronic telecommunication boom of the 1970s, digital ICTs curated by AI are uniquely applicable to automated command and control activities.[8] They harbour this affordance. In a stunning episode of bait-and-switch, scholars are now exploring how their incursion on every facet of human behaviour and cognition under the guise of market productivity and consumer want has achieved scale. They need look no further than the early 2000s literature on how network effects achieve lock-in,[9] which much of the industry read and appropriated.

We grab anything when we fall 

These were business strategies for getting rich in the digital age before they were the tools and methods of C2. Which only reinforces the point about technologies meeting the needs of their time, regardless of intent. One of the oldest and most widespread myths about technology is the gnostic belief that it harbours a type of mystery, which the adept society alone can tap and bend to their want. In fact, no such thing exists. The gnostic awe for technology is a dangerous spillage of a latent monotheism; it delivers to its believers exactly the same service: a view of history as having a hidden meaning. 

Technology is more like detritus than magic dust. Its effects linger and distort, long after the devout are gone. As George Dyson notes, constant mediation by statistical inference machines is already distorting social, political, and economic relations in both open and closed societies.[10] For all the dystopian and futuristic themes popularly associated with high technology, the chief danger may be simply of needless regression. As Jane Jacobs foresaw,[11] when the tireless work of sustaining hard-won civilisational gains is subordinated to infantile fantasies of the future, backsliding will be fast and easy.

A darker reality, however, stalks modernity’s wake. The technologies of control which have been scaled and locked in under the yoke of techno-fetish are already the object of a quickening geopolitical contest. Apparently unable to conceive of an alternative, corporate and financial elites, and their often witless shills in government, have ridden late modernity like an express train heading for the edge of a cliff. Able but unwilling to change course or even slow down, they are now fully invested in preparing for themselves a survivable landing when the time comes to disembark.  

There is not a single government of any consequence on the face of the earth that is ‘denying’ climate change. Those with any capacity to do so are preparing for the types of scenarios outlined in the Pentagon report. Powerful states such as Germany, Japan, China and the US will strategise to quarantine themselves from the growing disorder, ensure access to supply chains, resources and transit zones, and prepare to defend these advantages by force. Weak states with strong leadership, such as Russia, strategise to profit from a dangerous yet lucrative spoiler role under cover of nuclear arms. States of little or no capacity are left to twist in the wind. 

The Emperor has no clothes

Statistical inference software, crawling over huge streams of data and predicting inscrutable futures, is an arresting vision of technological prowess. But it is a vision of absurdity. Data is a recorded digital abstraction of a state of the world past. As much information is missing as is present, perhaps a great deal more. AI, marketed as ready to solve climate change, conflict and scarcity on the same day, is not going to do these things. What it will do is what it is already doing, which is to distort and disable the capacity for any collective political response to the wants of capital. 

The techno-fetish is a cul-de-sac both erected and defended by capital, which enables it to feed off its own waste. East and West. The wake of modernity accommodates a techno-political struggle that is little more than a harlequinade of stagnant and duelling monisms.  

At least since the birth of monotheism, humankind has been intoxicated by an idea of itself unfolding in history. Such a vision provides unique salve to an intolerable condition to which every human is vulnerable: the possibility of meaningless suffering. The decline of formal religion only drove the intoxicated to the next well. Science and technology are now secular religions that supply the devout with stories of their place in history, and with reasons to regard said history as having order and meaning. Such an order forms an arc of history for the world-fixing, life-improving cohort, the wrong side of which can only be occupied in error. The arc includes and excludes ‘behaviour’ formulated with all the cultural force of pop science. 

It is an unmistakably human folly, and for that, it must be forgiven. Cognitive dissonance is the condition of being human, not a behavioural bug that can be vaccinated against. But such folly comes at a grave cost. As John Gray has written, the need to posit meaning in suffering incurs a price in delusion.[12] As gnostic visions of technological oracles, supplying humans with supernatural power, become common tropes, the price steadily rises. Subordinating their cognitive capacities to a machine episteme has only prepared humans to be utterly unprepared.

Nature never locked in

Notes

  1. Peter Schwartz and Doug Randall, ‘An Abrupt Climate Change Scenario and Its Implications for United States National Security’, October 2003, https://eesc.columbia.edu/courses/v1003/readings/Pentagon.pdf.
  2. Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, Updated Edition (W. W. Norton & Company, 2020).
  3. Gary W. Small et al., ‘Brain Health Consequences of Digital Technology Use’, Dialogues in Clinical Neuroscience 22, no. 2 (June 2020): 179–87, https://doi.org/10.31887/DCNS.2020.22.2/gsmall.
  4. Howard E. Gardner, The Mind’s New Science: A History of the Cognitive Revolution (Hachette UK, 2008).
  5. Philip Mirowski, Science-Mart (Harvard University Press, 2011).
  6. Zac Rogers, ‘Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven’, Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.
  7. John Gray, False Dawn: The Delusions of Global Capitalism (Granta Books, 2015).
  8. Jeremy Packer and Joshua Reeves, Killer Apps: War, Media, Machine (Duke University Press, 2020).
  9. Albert-László Barabási, Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life (Plume, 2003).
  10. George Dyson, ‘Childhood’s End’, Edge (blog), January 1, 2019, https://www.edge.org/conversation/george_dyson-childhoods-end.
  11. Jane Jacobs, Dark Age Ahead (Knopf Doubleday Publishing Group, 2007).
  12. John Gray, Feline Philosophy: Cats and the Meaning of Life (New York: Farrar, Straus and Giroux, 2020).
  13. Wallace Stevens, ‘Disillusionment of Ten O’Clock’, The Palm at the End of the Mind: Selected Poems and a Play (Knopf Doubleday Publishing Group, 2011), 11.

