Rethinking AI through the politics of 1968


Image: HAL in vector, made in Flash (2004). Flickr/Abel. Some rights reserved.

There’s a definite resonance between the
agitprop of ’68 and social media. Participants in the UCU strike earlier this
year, for example, experienced Twitter as a
platform
for both affective solidarity and practical self-organisation[1].

However, there is a different genealogy
that speaks directly to our current condition; that of systems theory and
cybernetics. What happens when the struggle in the streets takes place in the
smart city of sensors and data? Perhaps the revolution will not be televised,
but it will certainly be subject to algorithmic analysis. Let’s not forget that
1968 also saw the release of ‘2001: A Space Odyssey’, featuring the AI
supercomputer HAL.

While opposition to the Vietnam war was a
rallying point for the movements of ’68, the war itself was also notable for
the application of systems analysis by US Secretary of Defense Robert McNamara,
who attempted to make it, in modern parlance, a data-driven war.

During the Vietnam war the hamlet pacification
programme
alone produced 90,000 pages of data and reports a month[2],
and the body count metric was published in the daily newspapers. The milieu
that helped breed our current algorithmic dilemmas was the contemporaneous
swirl of systems theory and cybernetics, ideas about emergent behaviour and
experiments with computational reasoning, and the intermingling of military
funding with the hippy visions of the Whole Earth Catalogue.

The double helix of DARPA and Silicon
Valley can be traced through the evolution of the web to the present day, where
AI and machine learning are making inroads everywhere carrying their own
narratives of revolutionary disruption; a Ho Chi Minh trail of predictive
analytics.

These systems play Go better than grandmasters and are preparing to drive everyone’s car, while the media panics about AI taking our jobs. But this AI is nothing like HAL. It’s a form of pattern-finding
based on mathematical minimisation; like a complex version of fitting a
straight line to a set of points. These algorithms find the optimal solution
when the input data is both plentiful and messy. Algorithms like backpropagation[3]
can find patterns in data that were intractable to analytical description, such
as recognising human faces seen at different angles, in shadows and with occlusions.
The algorithms of AI crunch the correlations and the results often work
uncannily well.
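As a concrete, if deliberately trivial, illustration of that kind of minimisation (this is my own sketch, not code from any system mentioned here), fitting a straight line to noisy points by gradient descent uses exactly the guess-measure-adjust loop that backpropagation repeats across millions of parameters:

import numpy as np

# Toy data: noisy points roughly along a line (the "plentiful and messy" input).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 1.5 + rng.normal(0, 2.0, 200)

# Fit y = w*x + b by repeatedly nudging w and b to shrink the mean squared
# error: the same minimisation principle that backpropagation scales up to
# millions of parameters.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to the true 3.0 and 1.5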

But it’s still computers doing what
computers have been good at since the days of vacuum tubes; performing
mathematical calculations more quickly than us. Thanks to algorithms like
neural networks, this calculative power can learn to emulate us in ways we
would never have guessed at. This learning can be applied to any context that
is boiled down to a set of numbers, such that the features of each example are
reduced to a row of digits between zero and one and are labelled by a target
outcome. The datasets end up looking pretty much the same whether it’s cancer
scans or Netflix-viewing figures.
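To see how flat that reduction really is (the numbers and column meanings below are invented placeholders, not real medical or streaming data), this is what the resulting object looks like: a matrix of features squeezed into the zero-to-one range plus a column of labels, identical in shape whatever the domain:

import numpy as np

# Hypothetical raw measurements: the columns could be tumour dimensions or
# minutes watched; the point is that the shape is identical either way.
raw = np.array([
    [54.0, 120.0, 0.8],
    [31.0,  95.0, 0.2],
    [68.0, 140.0, 0.9],
])
labels = np.array([1, 0, 1])  # the target outcome each row is labelled with

# Min-max scaling is one common way every feature ends up between zero and one.
scaled = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
print(scaled)
print(labels)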

There’s nothing going on inside except
maths; no self-awareness and no assimilation of embodied experience. These
machines can develop their own unprogrammed behaviours but utterly lack an
understanding of whether what they’ve learned makes sense. And yet, machine
learning and AI are becoming the mechanisms of modern reasoning, bringing with
them the kind of dualism that the philosophy of ’68 was set against, a belief
in a hidden layer of reality which is ontologically superior and expressed
mathematically
[4].

The Delphic accuracy of AI comes with built-in opacity because massively parallel calculations can’t always be reversed to human reasoning, while at the same time it will happily regurgitate society’s prejudices when trained on raw social data. It’s also mathematically impossible to design an algorithm to be fair to all groups at the same time[5].

For example, if the reoffending base rates vary by ethnicity, a recidivism algorithm like COMPAS will predict different numbers of false positives and more black people will be unfairly refused bail[6]. The wider impact comes from the way the algorithms proliferate social categorisations such as ‘troubled family’ or ‘student likely to underachieve’, fractalising social binaries wherever they divide into ‘is’ and ‘is not’. This isn’t only a matter of data dividuals misrepresenting our authentic selves but of technologies of the self that, through repetition, produce subjects and act on them. And, as AI analysis starts to overcode MRI scans to force psychosocial symptoms back into the brain, we will even see algorithms play a part in the becoming of our bodies[7].
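The arithmetic behind the base-rate point is compact enough to sketch (the figures below are purely illustrative, not COMPAS data): if a risk score is held to the same precision and catches the same share of reoffenders in two groups, differing base rates alone force differing false positive rates, which is the impossibility result cited above.

# Purely illustrative figures, not real COMPAS data.
def false_positive_rate(n, base_rate, precision=0.6, recall=0.7):
    reoffenders = n * base_rate
    true_pos = reoffenders * recall          # reoffenders correctly flagged
    flagged = true_pos / precision           # calibration fixes the flag's precision
    false_pos = flagged - true_pos           # non-reoffenders swept up anyway
    return false_pos / (n - reoffenders)     # share of non-reoffenders flagged

for base_rate in (0.5, 0.3):
    print(base_rate, round(false_positive_rate(1000, base_rate), 2))
# prints 0.5 0.47 and 0.3 0.2: same precision and recall, unequal false positives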


Political technology

What we call AI, that is, machine
learning acting in the world, is actually a political technology in the
broadest sense. Yet under the cover of algorithmic claims to objectivity,
neutrality and universality, there’s an infrastructural switch of allegiance to
algorithmic governance.

The dialectic that drives AI into the
heart of the system is the contradiction of societies that are data rich but
subject to austerity. One need only look at the recent announcements about a brave new NHS to see the fervour welcoming this salvation[8]. While the global financial crisis is manufactured, the restructuring is real; algorithms are being enrolled in the refiguring of work and social relations such that precarious employment depends on satisfying algorithmic demands[9] and the public sphere exists inside a targeted attention economy.

Algorithms and machine learning are
coming to act in the way pithily described by Pierre Bourdieu, as structured
structures predisposed to function as structuring structures[10],
such that they become absorbed by us as habits, attitudes, and pre-reflexive
behaviours.

In fact, like global warming, AI has become a hyperobject[11] so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints.

A key imprint of machine learning is its
predictive power. Having learned both the gross and subtle elements of a
pattern it can be applied to new data to predict which outcome is most likely,
whether that is a purchasing decision or a terrorist attack. This leads
ineluctably to the logic of preemption in any social field where data exists, which
is every social field, so algorithms are predicting which prisoners should be given parole and which parents are likely to abuse their children[12][13].
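Mechanically, the preemptive move is small (the model, features and threshold below are all hypothetical): a classifier fitted on past cases scores a new, unseen case, and the score is thresholded into a decision about someone before anything has happened.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data standing in for 'past cases' and their outcomes.
past_cases = np.array([[0.2, 0.1], [0.9, 0.8], [0.4, 0.3], [0.7, 0.9]])
past_outcomes = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(past_cases, past_outcomes)

# A new case is reduced to the same features and scored.
new_case = np.array([[0.6, 0.7]])
risk = model.predict_proba(new_case)[0, 1]   # probability of the flagged outcome
intervene = risk > 0.5                       # preemption: act on the prediction
print(round(float(risk), 2), bool(intervene))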

We should bear in mind that the logic of
these analytics is correlation. It’s purely pattern matching, not the revelation
of a causal mechanism, so enforcing the foreclosure of alternative futures
becomes effect without cause. The computational boundaries that classify the
input data map outwards as cybernetic exclusions, implementing continuous forms
of what Agamben calls states of exception. The internal imperative of all
machine learning, which is to optimise the fit of the generated function, is
entrained within a process of social and economic optimisation, fusing
marketing and military strategies through the unitary activity of targeting.

A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal. Moreover, the weight of those amplified effects will fall on the most data-visible, i.e. the poor and marginalised. The net effect is, as the book title says, the automation of inequality[14].


But at the very moment when the tech has
emerged to fully automate neoliberalism, the wider system has lost its
best-of-all-possible-worlds authority, and racist authoritarianism metastasises across the veneer of democracy.

Contamination and resistance

The opacity of algorithmic classifications already has the tendency to evade due process, never mind when the levers of mass correlation are at the disposal of ideologies based on paranoid conspiracy theories. A common core to all forms of fascism is a rebirth of the nation from its present decadence, and a mobilisation to deal with those parts of the population that are the contamination[15].

The automated identification of anomalies
is exactly what machine learning is good at, at the same time as promoting the
kind of thoughtlessness that Arendt identified in Eichmann.

So much for the intensification of
authoritarian tendencies by AI. What of resistance?

Dissident Google staff forced the company to partly drop Project Maven[16], which develops drone targeting, and Amazon workers are campaigning against the sale of facial recognition systems to the government. But these workers are the privileged guilds of modern tech; this isn’t a return of working-class power.

In the UK and USA there’s a general institutional push for ethical AI – in fact you can’t move for initiatives aiming to add ethics to algorithms[17], but I suspect this is mainly preemptive PR to head off people’s growing unease about their coming AI overlords. All the initiatives that want to make AI ethical seem to think it’s about adding something, i.e. ethics, instead of about revealing the value-ladenness at every level of computation, right down to the mathematics.

Models of radical democratic practice
offer a more political response through structures such as people’s councils
composed of those directly affected, mobilising what Donna Haraway calls situated knowledges through
horizontalism and direct democracy[18].
While these are valid modes of resistance, there’s also the ’68 notion from
groups like the Situationists that the Spectacle generates the potential for its
own supersession[19].

I’d suggest that the self-subverting quality
in AI is its latent surrealism. For example, experiments to figure out how
image recognition actually works probed the contents of intermediary layers in
the neural networks, and by recursively applying filters to these outputs
produced hallucinatory images that are straight out of an acid trip, such as snail-dogs and trees made entirely of eyes[20].
When people deliberately feed AI the wrong kind of data it makes surreal
classifications. It’s a lot of fun, and can even make art that gets shown
in galleries[21]
but, like the Situationist drive through the Harz region of Germany while
blindly following a map of London, it can also be a poetic disorientation that
coaxes us out of our habitual categories.
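For readers curious about the mechanism, a rough sketch of the idea (my own simplification, assuming PyTorch and torchvision are installed; the original Inceptionism work used GoogLeNet and started from photographs rather than noise) is to pick an intermediate layer of a pretrained network and nudge the image to amplify whatever that layer already responds to:

import torch
from torchvision import models
from torchvision.utils import save_image

# Pretrained network; the choice of VGG16 and of the cut-off layer is arbitrary.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():                  # only the image is being optimised
    p.requires_grad_(False)
layer = model.features[:17]                   # a mid-level block of conv layers

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
optimiser = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    optimiser.zero_grad()
    loss = -layer(img).norm()                 # gradient ascent on the layer's activation
    loss.backward()
    optimiser.step()

save_image(img.detach().clamp(0, 1), "dream.png")   # the hallucinated texture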

Playfully serious

While businesses and bureaucracies apply
AI to the most serious contexts to make or save money or, through some miracle
of machinic objectivity, solve society’s toughest problems, its liberatory
potential is actually ludic.


It should be used playfully instead of
abused as a form of prophecy. But playfully serious, like the tactics of the
Situationists themselves, a disordering of the senses to reveal the
possibilities hidden by the dead weight of commodification. Reactivating the demands of the social movements of ’68 that work becomes play, the useful becomes the good, and life itself becomes art.

At this point in time, when our futures are being cut off by algorithmic preemption, we need to pursue a political philosophy that was embraced in ’68: living the new society through authentic action in the here and now.

A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a détournement of machine learning; one that seeks authentic decentralisation, not Uber-ised serfdom, and federated horizontalism, not the invisible nudges of algorithmic governance. We want a fun yet anti-fascist AI, so we can say “beneath the backpropagation, the beach!”.

References

[1]          Kobie, Nicole.
‘#NoCapitulation: How One Hashtag Saved the UK University Strike’. Wired UK 18
Mar. 2018.

[2]          Thayer, Thomas C. A
Systems Analysis View of the Vietnam War: 1965-1972. Volume 2. Forces and
Manpower. 1975. www.dtic.mil.

[3]          3Blue1Brown. What Is
Backpropagation Really Doing? | Deep Learning, Chapter 3. N.p. Film.

[4]          McQuillan, Dan.
‘Data Science as Machinic Neoplatonism’. Philosophy & Technology (2017):
1–20.

[5]          Narayanan, Arvind.
Tutorial: 21 Fairness Definitions and Their Politics. N.p.

[6]          Corbett-Davies, Sam
et al. ‘A Computer Program Used for Bail and Sentencing Decisions Was Labeled
Biased against Blacks. It’s Actually Not That Clear.’ Washington Post 17 Oct.
2016.

[7]          Resnick, Brian.
‘Treating Depression Is Guesswork. Psychiatrists Are Beginning to Crack the
Code.’ Vox. N.p., 4 Apr. 2017.

[8]          Department of Health
and Social Care. ‘Matt Hancock: New Technology Is Key to Making NHS the World’s
Best’. GOV.UK. N.p., 6 Sept. 2018.

[9]          O’Connor, Sarah.
‘When Your Boss Is an Algorithm’. Financial Times. N.p., 8 Sept. 2016.

[10]       Bourdieu, Pierre. The
Logic of Practice. p53. Stanford University Press, 1990.

[11]       Morton, Timothy.
Hyperobjects – Philosophy and Ecology after the End of the World. University Of
Minnesota Press, 2013.

[12]       Keddell, Emily.
‘Predictive Risk Modelling: On Rights, Data and Politics.’ Re-Imagining Social
Work in Aotearoa New Zealand 4 June 2015.

[13]       McIntyre, Niamh, and
David Pegg. ‘Councils Use 377,000 People’s Data in Efforts to Predict Child
Abuse’. The Guardian 16 Sept. 2018. www.theguardian.com.

[14]       Eubanks, Virginia. ‘A
Child Abuse Prediction Model Fails Poor Families’. Wired 15 Jan. 2018.

[15]       Griffin, Roger. ‘The
Palingenetic Core of Fascist Ideology’. Library of Social Science. N.p., n.d.

[16]       Shane, Scott, Cade
Metz, and Daisuke Wakabayashi. ‘How a Pentagon Contract Became an Identity
Crisis for Google’. The New York Times 30 July 2018.

[17]       Department for Digital,
Culture, Media & Sport. ‘Consultation on the Centre for Data Ethics and
Innovation’. GOV.UK. N.p., 13 June 2018.

[18]       McQuillan, Dan.
‘People’s Councils for Ethical Machine Learning’. Social Media + Society 4.2
(2018): 2056305118768303. SAGE Journals.

[19]       Plant, Sadie. The Most
Radical Gesture: The Situationist International in a Postmodern Age. Routledge,
1992.

[20]       Mordvintsev, Alexander,
Christopher Olah, and Mike Tyka. ‘Inceptionism: Going Deeper into Neural
Networks’. Research Blog 17 June 2015.

[21]       Akten, Memo. ‘Learning
to See’. Memo Akten. 2018.


