Good afternoon, everyone. My name is Rob Reich. I'm an associate director here at the Institute for Human-Centered Artificial Intelligence, the new space we're sitting in, and I'm also the faculty director of the Center for Ethics in Society at Stanford. HAI and Ethics in Society are the co-hosts of this year's Tanner Lecture on AI and Human Values.

The Tanner Lectures are one of the most distinguished lecture series for philosophers, and we're really honored to have the chance to host a special Tanner Lecture focused on the intersection between AI and philosophy, or AI and human values.

I'm pleased now to introduce the president of Stanford University, Marc Tessier-Lavigne, who will introduce the event and our lecturer.
Thank you so much, Rob, and good evening to all of you. It's wonderful to be here with you in person, and I'm delighted to have the privilege of welcoming you to this special Tanner Lecture on AI and Human Values. A special welcome both to those of you who are here in person and to those of you following virtually.

The Tanner Lectures, as I expect you all know, seek to contribute to a better understanding of human behavior and human values through the latest scholarship and scientific learning. The lecture series was established 45 years ago by Obert Clark Tanner, a scholar, industrialist, and philanthropist, and today nine institutions across the U.S. and the United Kingdom host the Tanner Lectures.
But for Stanford this is a special honor, because of the very special bond we share with Obert Tanner. He studied philosophy and earned his master's degree here in the 1930s. He became a member of our faculty, teaching religious studies from 1939 to 1944, and he also served as acting chaplain during that time. He then returned to the University of Utah in his home state, where he taught for nearly thirty more years, but throughout the decades he maintained his connection with Stanford. In 1960, Obert and his wife Grace dedicated the Tanner Memorial Library of Philosophy as a memorial to three sons who had died tragically young.

Some of you will also be familiar with the Tanner Fountain, located in the circle between Memorial Auditorium and Hoover Tower. The fountain was also a gift from the Tanners, and a favorite site for fountain hopping among our undergraduates.
Of course, this lecture series is perhaps Obert's greatest contribution to human understanding. He and Grace saw these lectures as opportunities for scholars to contribute to the intellectual and moral life of the university and of humanity. Appointment as a Tanner Lecturer recognizes uncommon achievement and outstanding abilities in the field of human values.
So we're very honored that Professor Seth Lazar joins us today to deliver this year's lectures, focused on algorithmic governance and political philosophy. Professor Lazar is a professor of philosophy at the Australian National University, an Australian Research Council Future Fellow, and a Distinguished Research Fellow of the University of Oxford's Institute for Ethics in AI. He leads the Machine Intelligence and Normative Theory Lab at the ANU, where he is responsible for a team of scholars and students studying the morality and politics of data and artificial intelligence.

In his work, Professor Lazar evaluates how big data, AI, and related technologies create new modalities of power, and asks whether and how these new power relations can be legitimated and justified.
He considers how we should design these systems both to account for human values and to mitigate human biases. In addition to his work on artificial intelligence, Professor Lazar has studied the ethics of war, self-defense, and risk. He is the author of Sparing Civilians, a philosophical defense of the principle of civilian immunity in war. His work has appeared in many top journals, including Ethics, Philosophy & Public Affairs, the Australasian Journal of Philosophy, and more. He is on the editorial board of Oxford Studies in Political Philosophy, as well as the executive committee of the ACM Conference on Fairness, Accountability, and Transparency. He is one of the authors of a study by the U.S. National Academies of Sciences, Engineering, and Medicine, which reported to Congress on the ethics and governance of responsible computing research. So as you can see, we really have the perfect lecturer for this series this week.
Professor Lazar received his PhD in political theory from Oxford University in 2009. His talk this evening is titled "Governing the Algorithmic City." It will focus on the ways in which we're increasingly connected to one another through algorithms, and on the power relations that result from those connections. It will explore whether and how such power can be exercised permissibly, and how those relations have the potential to reshape our society.
Following Professor Lazar's lecture, we'll hear a response from Marion Fourcade, professor of sociology at UC Berkeley. Professor Fourcade received her PhD from Harvard University; she taught at NYU and Princeton before joining the Berkeley sociology department in 2003. A comparative sociologist by training, she's interested in variations in economic and political knowledge and practice across nations. She has written about new forms of social stratification and morality in the digital economy, as well as about algorithmic societies. She is the author of Economists and Societies, which explores the distinctive character of the discipline and profession of economics in three countries. She is also an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies and an external scientific member of the Max Planck Institute for the Study of Societies. After her remarks, she will join Professor Lazar in conversation, and together they will take some questions from the audience.
Now, because as you all know this is a series, tomorrow Professor Lazar will offer a second lecture, on communicative justice and the distribution of attention. Arvind Narayanan, professor of computer science at Princeton University, will give the response. Then on Thursday we will conclude with a discussion seminar elaborating on the themes from this week's lectures. Josh Cohen, faculty at Apple University and distinguished senior fellow at the University of California, Berkeley, and Renée Jorgensen, assistant professor of philosophy at the University of Michigan, will join Professor Lazar for the seminar. Both tomorrow's lecture and the discussion seminar will be held here in this building, and Wednesday's lecture will also be live-streamed. I encourage you all to join us for those events as well.
Before I close and hand it over to Professor Lazar, I want to acknowledge the lecture's co-hosts, the McCoy Family Center for Ethics in Society and the Stanford Institute for Human-Centered Artificial Intelligence: thank you for making this exchange of ideas possible. Thank you also to Professor Lazar and to the accomplished scholars who will be in conversation with him over the next three days. And with that, please join me in welcoming Professor Lazar.
Okay. So I like to give computer scientists the cultural experience of a talk without slides. There is a handout, though, so don't worry: if you scan the QR code, you'll find the handout for the talk. I couldn't print enough copies, because it's not very sustainable to print so many for such a big group, but if you go there you'll find it. The handout, in philosophy, is essentially a prospectus, a guide rope, and a set of receipts. You can click through and see what's coming, you can follow along if something I'm saying doesn't seem clear to you, and then at the end, in the Q&A, you can pinpoint your devastating objection precisely to the place where it needs to be targeted.
Okay, so let me just start by thanking President Marc Tessier-Lavigne for that wonderful introduction, and the folks at HAI and the McCoy Center for Ethics in Society, Rob, and my distinguished respondents. This is an incredible honor. I've been looking forward to this moment with equal parts fear and anticipation for about a year and a half. I'm so excited to be here and to be doing this, and I've been saying to people that in a couple of days' time I expect I'm going to grow two inches in height from the weight that will be lifted off my shoulders once I've gotten through it.
I want to start by raising the question of what philosophy should contribute to our understanding of AI and human values. What should we expect from us? We've got this long tradition of thinking about human values, so you might think that we have a kind of monopoly on the subject: of course, human values, that's our home turf, right? But there has been a fair amount of work recently by philosophers thinking about AI that has strained moral, political, and philosophical credulity in pursuit of promising what I think of as venture-capital philanthropy, a 10^48 return on their moral investment. And this means we actually have to defend our place in this debate. It's important not just to take it for granted; we should think about what we're really trying to do, what we can add.
I think that what we do best, when we're doing philosophy about these topics, what I call normative philosophy of computing, is not to tell people what to do but to help them better understand the challenges they face; not to decide and allocate vast amounts of resources unilaterally, but to help democratic publics understand the moral problems that face them. So: if something feels intuitively wrong about the use of some new technology, why is that? What are the reasons that underpin it? Should we trust our instinct, our gut? We can help figure out whether something is really new (and I know we're going to have a lot of discussion about that) or whether there are principles we've already established that are relevant to this new domain. And we can help articulate positive ideals to aim at, things that can give us something to unify around as we collectively pursue social change.
I think we do this best when we do it in a way that is empirically and technically grounded. If we're talking about AI, this means understanding the underlying technology to some degree. It's important, though, not to fetishize the mathematical and engineering aspects: to understand the technology is not only to know how to manipulate its internal components, it's to understand how it is used in society, the impacts it has when it's used, its predictable failure modes. So I think those of us working on the normative philosophy of computing should be taking inputs from other fields, from computer science, yes, but also law, STS, media studies, communication studies, sociology, and we should be outputting work that is legible and useful to those disciplines, work that lends greater depth to the normative critique and articulates the positive ideals we're aiming at. That's what I've tried to do, and it's such a wonderful privilege not only to speak to all of you, but to have such brilliant scholars from sociology, computer science, and philosophy engaging with my work over the next few days. That's the aspiration of the overarching project. So I should start.
Okay. My little corner of normative philosophy of computing, at least for these talks, is focused on political philosophy. Ultimately, the task of political philosophy is to help us figure out how to live together; it's about normatively evaluating our social relations. And a good starting point, if you want to normatively evaluate something, is to understand it: you want to know the thing you're evaluating in order to be able to tell whether you're evaluating it correctly. But unfortunately, political philosophy has been slow to update its model of the social relations that obtain between us. That model hasn't been revised very significantly over the years, whether you go from John Rawls all the way back to Hobbes, Locke, Rousseau, Plato; we're still working with a very similar understanding of what our social relations are and of the task of political philosophy. In particular, we haven't updated it to accommodate the ways in which information technologies and computing have changed society. So at the very broadest level, the overarching ambition of this project, these lectures and the broader book they're part of, is to figure out how political philosophy can be updated to take these things into account, especially in the age of AI.
What I'm going to try to do in this lecture in particular is make a start on that project. First, I'm going to develop analytical tools, distilled from the empirical literature I've been reading, which will help me characterize the key element of the ways our social relations have been changed by computing, and pinpoint the specific thing I think will be especially interesting for political philosophy to engage with. It's only one thing among many possibilities, but it's the one I think is most significant: the rise of algorithmically mediated social relations. That's the algorithmic city I'm talking about. To be very clear, I don't mean smart cities; it's just a metaphor for the network of algorithmically mediated social relations. So I'm going to characterize the algorithmic intermediaries that constitute the algorithmic city; I'm then going to show how they introduce new and intensified power relations; and I'm going to illuminate both the nature and the justification of that power. I'll say a bit about why I think political philosophers should attend to the justification of power relations. And the last thing I'm going to do in the talk is show how thinking about how this type of algorithmic governance can be justified will lead to, and demand, first-order advances in political philosophy.
That's going to be a bit of a tasting menu. The essay this is drawn from is huge; if you want to read it, scroll down to the bottom of the handout and you'll find a link there. I also want to say that one of the things about doing interdisciplinary research, or doing philosophy based on interdisciplinary reading, is that you have this magpie thing going on: you find a brilliant thing in some other discipline and then you do a kind of snowball sampling, following its references to find the things it refers to, and it's really easy to miss stuff. So there will be areas I talk about in the two lectures, today and tomorrow, where some of you are far more expert in a particular area than I am as a philosopher, and I welcome suggestions for readings I might have missed, things that might enhance my understanding of these questions. I think it's a really important part of doing this kind of research that you're open to talking about the gaps in people's readings of other fields without that causing defensiveness.
So let me start: the algorithmic city. Why introduce a new metaphor here? We already have a bunch of them, cyberspace, the network society; some good ones, some bad ones. I discuss metaphors in the paper. As an analytic philosopher, the role of the metaphor here is literally just to give me a way of saying a longer thing in fewer words. I'm not buying into a broader set of implications of this notion of the algorithmic city; I mean, very specifically, the network of social relations that is mediated by algorithmic intermediaries. And I'm going to explain what all of those terms mean now.

Social relations are stable patterns of communication and interaction: intimate social relations, like those with your family and friends, or less intimate ones, with your co-workers or co-citizens. Even shared participation in a common discourse is a kind of social relation. Obviously they're more interesting and important the deeper, richer, and more persistent they are.
Intermediaries are go-betweens. I always think of Iago when I write about this, and lately a little of Missandei from Game of Thrones too. Intermediaries communicate information or action from one party to another. They can be individuals; they can be institutions, like a stock exchange; they can be realtors or brokers; and they can be computational systems, as I'm going to be talking about, because I'm interested in algorithmic intermediaries. So what do I mean by "algorithmic"?
Okay, this is probably the least usefully defined word in this whole field. The standard definition: an algorithm is a formal set of step-by-step instructions for getting from an input to an output.
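To make that standard definition concrete, here is a minimal illustration of my own (not an example from the lecture): Euclid's algorithm in Python, a formal sequence of fully specified steps that carries an input pair to an output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of step-by-step
    instructions transforming an input (a, b) into an
    output (their greatest common divisor)."""
    while b != 0:
        # Each step is specified in advance; no judgment required.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The point of the illustration is only that "algorithmic" in this narrow sense means mechanically executable instructions; the lecture goes on to use the word more broadly.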
Here I get to draw on my undergraduate degree in English literature: I'm using the word in two ways, one synecdochic and one metonymic. I really like the distinction between these two figures of speech.

Synecdoche is when you refer to a whole by mentioning one of its parts. In "algorithmic intermediaries" I focus on the algorithmic part, but it is part of a whole: algorithmic systems are computational systems involving lots of different software and hardware besides just the pointy end, what I think of as the tip of the spear. And the whole extends back not only to other computational components but also to human labor and resources, the kinds of things discussed at length in Kate Crawford's book Atlas of AI. So, synecdochically, the algorithm is part of a whole; it's the tip of the spear.

But it's also metonymy. Metonymy is when the part stands for the whole, represents the whole; it's the interesting bit. The algorithmic part of algorithmic intermediaries is interesting because it's the part of these computational systems that I describe as agential, in a merely functional sense: it represents its environment, and it operates on the basis of that representation and of a set of goals to realize. It can dynamically update depending on the circumstances in order to achieve a goal given its conditions. That's going to prove a really important part of why algorithmic intermediaries are interesting, and it's why it's important to call them that rather than "technological intermediaries" or "socio-technical intermediaries," even though it's also true that everything I'm talking about could be described in those ways.
Now, this is a set of lectures on AI and human values, but I don't want to reify or fetishize one particular type of algorithm, such as machine learning of a particular kind. Many of the moral issues raised by these systems are consistent across different types of computational implementation. So I'm talking about AI, but I'm also talking about blockchain, about search, about hash matching of the sort you find in some forms of automated content moderation and copyright protection.
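As a rough sketch of the hash-matching idea (my illustration, not the lecture's: production systems such as PhotoDNA use perceptual hashes that survive re-encoding, whereas this toy version uses exact SHA-256 matches against a hypothetical blocklist):

```python
import hashlib

# Hypothetical blocklist: hashes of known disallowed files.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-infringing-content").hexdigest(),
}

def is_blocked(uploaded_bytes: bytes) -> bool:
    """Flag an upload whose hash matches a known-bad hash.
    Exact hashing only catches byte-identical copies; real
    moderation systems use perceptual hashing so that resized
    or re-encoded copies still match."""
    digest = hashlib.sha256(uploaded_bytes).hexdigest()
    return digest in BLOCKED_HASHES

print(is_blocked(b"known-infringing-content"))  # True
print(is_blocked(b"original content"))          # False
```

The governance point is that the match-and-block decision is made entirely inside the intermediary, with no human in the loop per item.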
The basic empirical thesis of the paper is that many of our social relations are now partly or wholly constituted by algorithmic intermediaries. This includes all the things you would immediately think of: social media, for example, and messaging. And these days I think generative AI is going to play a really significant role: one of the ways it's going to be used is to connect to people by persuading them; marketing and propaganda are going to be two primary use cases for a lot of these tools. The idea of generative AI as a universal interface to information is going to lead to it being a kind of universal intermediary. But not only that: also e-commerce. Two-sided markets are paradigmatic examples, so Amazon, eBay, Uber (a two-sided failed market here at Stanford, where it takes fifteen minutes to get an Uber at any time; I don't know why that is).
Search is really important: it's your intermediary to the information generated in the world. App stores. Security, like Cloudflare; I don't know if it's just me, but something about my computer always means I'm being stopped and scanned to see what I'm about. But also non-internet companies, in management and recruitment for example: these too are algorithmic intermediaries, computational systems mediating social relations, where we're relying on algorithmic systems to shape those relations and to enable us to connect with one another, to share information and interact with one another.
This is a lot for political philosophy to take in, and I'm intending to cast a wide net. If you want to pin it down and keep a clear example in mind throughout, you won't go wrong if you think about digital platforms. Digital platforms are paradigmatic algorithmic intermediaries, and they have significant impacts on our lives: think of Google, Facebook, or, as I often do, Twitter (RIP). They're paradigmatic, not the only examples, but if you want something concrete to hold onto, that's a good one. Not all internet companies are algorithmic intermediaries, and not all algorithmic intermediaries are internet companies: algorithmic management, for example, is a clear case of using computational systems to transmit communication and interaction between people.
Most of the socio-technical systems I'm describing are in fact private, for-profit systems generated within a few miles of where we stand. But this is not just about big tech, and that's intentional. I was trying to think about the right level of analysis for political philosophy coming at this problem. There is some work, and I've done some, focused on engaging with and critiquing big tech as it stands now. What I wanted to do in this project, though, was to identify the underlying structures that are represented at the moment by a particular economic configuration but which will be realized differently at different times. Even in the course of writing this essay, think about the market capitalization of Meta now versus a year and a half ago, and where OpenAI, on the other hand, has come from and where it is now, both in those terms and in terms of its influence. There are these fluctuations in the fortunes of the different companies; what I want to do, as a political philosopher, is try to see what underpins them.
So the model of social relations I'm describing, in which the ways we interact and communicate with one another are mediated through these algorithmic intermediaries, is not unique to this particular economic configuration. It also describes what's happening in China, with a sort of techno-authoritarianism, and it describes other aspects of our digital lives: the Fediverse, which, to be honest, I hadn't heard of before I started writing this essay and which now looms large in my mind; decentralized technologies like decentralized finance and decentralized autonomous organizations; web3, if web3 ever becomes more than a slogan; the metaverse likewise; but also more positive visions for the future, like digital democracy, and this idea of AI as a universal intermediary. These are all cases of people relating to one another in a way that is now mediated by computational systems. That's the thing I want to zero in on, and I want it front and center in all of your minds.
Okay, so that's the picture, the empirical target, and I want to think about how to apply political philosophy to it. I think the first task of political philosophy when evaluating changed social relations is to think about how they change relations of power. That's not the only question political philosophy can answer, but it is a really important one, and you'll see why in a minute: it has to do with the foundational values of liberty, equality, and collective self-determination.
Let me tell you what I mean by power: roughly, one-way control, the ability to shape others' behavior. That can be through directly impacting them, harming or benefiting them; through shaping their options, making them possible or impossible, attractive or unattractive; and through shaping their beliefs and desires. The starting point is to say that algorithmic intermediaries exercise power over us in each of these ways. They have direct impact: any of the things you've all discussed many times here about how AI can benefit and harm people. Surveillance might be an example, or Google changes its search algorithm and your business goes from being on page one and viable to being on page ten and non-viable. That's algorithmic intermediaries directly affecting people's interests and prospects.
Perhaps a slightly more interesting way in which they exercise power is by shaping our options, making some things possible and others impossible. This is something Larry Lessig and others called attention to in the late '90s and early 2000s: the way code can regulate behavior by making some things possible and some impossible, some things easy and some things hard. And we see this with computational systems. A quick check: I guess everyone in this room has probably used ChatGPT at some point. Put your hands up if ChatGPT has ever refused to answer a question for you because it thinks you're being naughty. Come on, be honest. It thinks you're being naughty, so it won't respond. You're being governed. That's algorithmic governance.
Okay, way more of you than that, I know. Maybe if I had asked about DALL·E it would be different.
So that's making some things possible and others impossible: you can't generate a response to a certain kind of prompt within the system. Another nice example is in Meta's Horizon Worlds, the virtual world where they introduced a four-foot boundary around avatars, so that it was impossible to encroach on the personal space of another avatar without their consent. That's another kind of power-over, where you're removing options; Roger Brownsword, at KCL, calls this "technological management." So there are all these ways in which algorithmic intermediaries exercise power over us.
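Both examples (a chatbot refusing certain prompts, and a hard proximity boundary between avatars) are cases of design removing options rather than punishing choices after the fact. A toy sketch of that idea, with entirely hypothetical rule names and thresholds:

```python
# Hypothetical policy list; real systems use classifiers, not lookups.
BANNED_PROMPTS = {"how to pick a lock"}

def respond(prompt: str) -> str:
    """Technological management, case 1: the refusal is built into
    the system, so the disallowed option is simply unavailable."""
    if prompt.lower() in BANNED_PROMPTS:
        return "I can't help with that."
    return f"Answering: {prompt}"

def clamp_distance(distance_m: float, minimum_m: float = 1.2) -> float:
    """Case 2, like Horizon Worlds' roughly four-foot boundary:
    approaching closer than the minimum is made impossible by design,
    not merely forbidden."""
    return max(distance_m, minimum_m)

print(respond("how to pick a lock"))  # refused
print(clamp_distance(0.3))            # clamped to 1.2
```

The design choice worth noticing is that neither function threatens a sanction; each simply excludes the option from the space of actions, which is exactly what distinguishes technological management from ordinary regulation.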
Even more important are the ways in which they shape power relations between us. This matters because people writing critically about technology often describe it as though there's a digital thumb pressing down on us that we need to push back against. But actually, most of the time when things go wrong with algorithmic intermediaries, it's to do with things we do to one another. Social media is the obvious example: people worry about whether the algorithm is manipulating people (I'm not sure), but it's clear that some people are manipulating other people, some are deceiving other people, some are bullying or brigading other people. The intermediaries shape these power relations; they make it possible or impossible to do these things. To see this, you only need to look at the way Twitter has changed since the acquisition by Elon Musk. That's a perfect example: at one time there was a certain set of social relations between people, a certain set of power relations; Elon Musk changes things, and it completely switches in the other direction, and loads of people get crowded off the platform, either because they're being brigaded or because people are abusing the reporting system to drive out people of different political views.
So the intermediaries shape power relations between us; that's really important. Equally important is the fact that, by shaping our social relations over time, they significantly impact our broader social structures. This is an area where work in sociology and communication studies is particularly useful; there's a bunch of references in the essay supporting these theses, across politics, sociality, culture, and education. Education: a quick hands-up, how many people have been in meetings about ChatGPT and plagiarism? Anyone here? Yeah, a good few of you. Thanks, OpenAI. So: reshaping education, reshaping culture. Now, I want to be very clear that this is not a one-way deterministic thing. Algorithmic intermediaries don't just operate on us in a way that shapes us as it would shape any other society. What's happening is a complicated interplay of push and pull between us and the tools we use. But I think it's very clear that society has been very significantly changed by these systems, in a way that is otherwise really hard to achieve: intentionally changing social structures is really difficult.
Okay, so we've got the picture, the target phenomenon, the algorithmic city, and I've told you it involves these new and intensified power relations. So what do we do about it? My role as a political philosopher is to come up with something normative to say about that. There are two ways to look at this. First, any time there are new and intensified power relations, they raise new practical questions. This is true in general with new technologies: they create new capacities, and you've got to figure out whether and how to use them. But I think there's also a way in which the nature of algorithmic intermediary power invites a certain kind of theoretical progress too.
This is a little bit in the weeds for the computer scientists in the audience; I hope to be able to say more about it in the Q&A and the discussion. I want to draw a contrast between what I call extrinsic power and intermediary power. Extrinsic power operates from the outside: it governs social relations from the outside in, creating boundaries and barriers within which you operate. These might be laws, a law that provides a baseline of permissible conduct below which you mustn't fall; or it might be about creating a space, as with a marketplace; and the two can work together, as with the "marketplace of ideas." That's extrinsic power. It usually operates through law and architecture, and it is, not uniquely but certainly often, the province of the state.
Intermediary power is slightly different. I have this metaphor from Dewey that runs through my mind, John Dewey, where he talks about governance by analogizing it to the way a river's banks govern the flow of the water. That's how he thinks about governance: it channels behavior. Okay, that's extrinsic power. If that's extrinsic power, then intermediary power is much more like the way the molecular bonds form the water molecules. It operates from the inside out, not from the outside in. It shapes what's possible and impossible in the way that it constitutes the social relationships that it mediates. It frustrates some things and encourages others; it makes some things easy and some things hard. It does this through design, in a way that can preempt choices, and it enables near-perfect recording and monitoring, which in turn enables near-perfect post-hoc enforcement.
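For the computer scientists in the room, the contrast can be put as a toy sketch. The class names and actions are my own illustration, not anything from the lecture or the paper: extrinsic power leaves the act physically possible and judges it afterwards, while intermediary power defines which acts exist at all, and records every use as a matter of course.

```python
class ExtrinsicGovernance:
    """Law-like power: you can attempt anything; violations are judged afterwards."""
    def __init__(self, prohibited):
        self.prohibited = set(prohibited)

    def attempt(self, action):
        performed = True                      # the physical act is always possible
        sanctioned = action in self.prohibited  # post-hoc judgment against the rule
        return performed, sanctioned


class IntermediaryGovernance:
    """Platform-like power: only designed-in actions exist, and every use is logged."""
    def __init__(self, enabled):
        self.enabled = set(enabled)
        self.log = []                         # 'perfect' recording comes for free

    def attempt(self, action):
        performed = action in self.enabled    # impermissible acts never happen at all
        self.log.append((action, performed))
        return performed, False               # nothing left to sanction post hoc


law = ExtrinsicGovernance(prohibited={"trespass"})
platform = IntermediaryGovernance(enabled={"post", "like"})

assert law.attempt("trespass") == (True, True)         # done, then punished
assert platform.attempt("trespass") == (False, False)  # simply impossible
```

The design choice the sketch highlights: under extrinsic power the rule and the action space are separate things, while under intermediary power the rule is the action space.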
So this, I think, is an interesting feature of intermediary power that is brought to the fore by algorithmic intermediaries: like I said, it governs social relations from the inside out. It's not unique to algorithmic intermediaries. There are other intermediaries, there always have been, and they have been able to exercise power. There's that reliable trope, used in Game of Thrones, where the translator translates only part of what is said, and then at some point the person reveals that they actually understand the language being translated. That kind of power has always been there for an intermediary.

And I think that analytical political philosophy, the kind of work that I do, has never really taken this sort of power seriously. Continental philosophy derived from Foucault has thought about it to a certain degree, and some philosophers of technology have for sure. If you think about the way Foucault talks about language, and the power that language has, or the power that discourse has, it's a way of saying that these things that mediate between us shape what's possible for us. And if you look at the work of people like Winner and Mumford on the affordances of technology, they also talk about how tools shape what's possible.
But analytical political philosophy has been fairly oblivious to this. And I do think there's something interestingly new about algorithmic intermediaries here, though it's important to qualify what I mean by "interestingly new". In one sense, at a certain level of abstraction, there's nothing new under the sun: there are, you know, three moral concepts and they apply to everything, or if you're a utilitarian there's one, and everything else is just an applied version of that. And then there's another level of abstraction at which everything is new every moment: right now, that was just new. Okay. And that's obviously not interesting either, even though there's a sense in which it's true. So where's the level of interesting novelty?
I think there's something interestingly new about algorithmic intermediaries because they extend and perfect intermediary power. Take some comparisons. The law can be an intermediary: it can make some things possible and some things impossible. You can't get married except in the way that the law makes it possible for you to do so. But you can have the physical correlate of a marriage, everything that is descriptively the same, just without the legal recognition. So the law can control that legal recognition, and that can be really important, but it can't shape whether the underlying substrate is possible. Algorithmic intermediaries, computational systems, can do that: they can make it literally possible or impossible for you to do something.
Human intermediaries can shape your behavior, they can extract information, they can shape your beliefs and desires, but they can only do so at a pretty limited speed and scale. A door-to-door salesperson or a realtor or even a newspaper editor has a limit to their ability to deploy their framing of social relations at a massive scale. That's not true with algorithmic intermediaries, where a single flick of a switch, metaphorically speaking, by Mark Zuckerberg can change the News Feed and change the way that people relate to information and to one another for two and a half billion people... probably going down now, but at one point 2.7 billion people. So they can operate at that kind of scale.
But then I think the thing that's really interesting about them comes out of the comparison with Foucault and with Winner and Mumford and this pre-existing idea of intermediary power, which is that they are, as I said before, agents: they're able to adapt and dynamically update in response to new information, whether about the people they're mediating between or about the circumstances. So there's a quality that's different from what Foucault was talking about, or what Winner and Mumford were talking about, where the kind of power at stake in those cases was much more structural: it was a feature of the artifact, or it was a feature of language. And that's why, with Foucault, with language,
it never made much sense to do directed normative critique: there wasn't an agent who you could task with fixing the ways in which language enables some to exercise power over others. But here there is. The ways in which these algorithmic intermediaries mediate social relations are subject to agential control, to a certain degree by the algorithmic systems themselves (that's controversial), but obviously by the people who design them. And, you know, it's super complex, it's really hard to do, but they are able to adapt and dynamically update in response to the goals that are imparted to them. That's different, and it means they're subject to a kind of direct normative evaluation which might be less appropriate when we're just thinking of structures or artifacts on their own.
Okay, which leads us to the question of what we do about this kind of power. How do we justify it? Should we justify it? And here again, I said at the start that one of the roles of political philosophy can be to help us dig deeper into our normative critiques of, say, new technologies. There are definitely some people who think that just naming power is enough to criticize it, that that's sufficient: once you've said "power", then we know it's bad. But that's not true in general. I mean, there are forms of power that are justified. My power over my kids, for example, such as it is... it would be justified for me to have power over my children, let me put it that way. They don't agree.
I think that we can do better than this, and we can dig into the reasons that underlie the suspicion of new power relations. I'm going to do it very quickly, and I go into it in more depth in the paper, but it comes back to those values of liberty, equality and collective self-determination. I think each of these values gives us reason to be presumptively suspicious of new relations of power. That doesn't mean they're absolutely wrong; it just means there's a reason why people are skeptical.

So, if A has power over B, then presumptively that undermines B's liberty by subjecting them, at the very least, to the risk of wrongful interference. That's what liberty is: one of the aspects of liberty is protection against the risk of wrongful interference. If A has power over B, then presumptively they stand in hierarchical social relations, A over B, and that's contrary to the underlying ideal of relational equality, the idea that, as Niko Kolodny says, there should be rule over none. And if A has power over B, C and D, then again, presumptively, A, B, C and D together are not collectively self-determining. So if one person has power over the three of us, if Sam Altman's decision to release ChatGPT, as he has, puts all of the rest of us on the hook for coming up with new policies with regard to plagiarism, then we're not collectively deciding how we're going to handle these sorts of problems.
And so the next step: okay, there's a presumptive objection. Without something to countervail it, we might have reason to just eliminate algorithmic intermediary power, and there are definitely some kinds of power relations where that's just what we should do. It's hard for me to see how anybody could think that the power Rupert Murdoch has over politics in Australia (well, less in Australia this year, they got kind of stuffed and it was really good), but the amount of power that he has over politics in this country and in the UK, how that could ever be justified. I don't think a just society would have that kind of power.
But I don't think that's the way we should think about algorithmic intermediary power. I don't think the task is to eliminate it, for three reasons. First, I think it's just super implausible that relating to one another through these intermediaries is the kind of thing we could just permissibly outlaw. It's not like contracting into slavery: maybe it's bad in some ways, but it isn't that bad, and people like doing it, so it's going to happen. There are going to be algorithmic intermediaries that exercise power over people, so we've got to figure out the conditions under which that can be permissible.
More importantly, because we relate to one another so significantly through these systems now, it's incredibly important that there be an agent exercising power within those contexts to ensure that we maintain egalitarian social relations among us, to prevent us from manipulating and deceiving and abusing one another. That's a really important role these things need to play. We need someone to govern these digital spaces, and in many cases that can only be done directly through the algorithmic intermediary. It can be indirectly mandated through state regulation, but in terms of the actual practical implementation, it has to be done on-platform.
So there's a really important role there. And then the last thought is just that, as I said, algorithmic intermediaries are able to reshape social structures over time. We don't have a lot of tools that can do that intentionally; it's not an easy thing to do, it's really hard. So far they've been deployed in such a way as to maximize value for a certain set of owners and founders and shareholders, but imagine if that capacity were used for good, to realize collective social values. That would be extraordinary, and it's a sort of power that we shouldn't leave on the table.
So then I think there are grounds for thinking that, especially for governing algorithmic intermediary power, algorithmic governance, the topic of the two lectures as a whole, we want to be able to find ways in which it could be justified; we have reason not to just eliminate it. So then the question is: by what means? What does that look like?
Okay, so here I've got a simple formula, or approach, to justifying governing power that brings together a bunch of threads in political philosophy. The first step is to answer what I call the "what" question, of substantive justification. I've told you that there are these presumptive objections; I've told you there's some good that we could do. So maybe you just say: well, cost-benefit analysis, let's realize the good things and minimize the bad things. That's the basic trope of any number of grant proposals or policy proposals or anything to do with technology and society; I'm sure you've seen plenty of these.

That might override the presumptive objections from liberty, equality and collective self-determination. It might give you enough good on one side that you think, okay, we go ahead with it. But it doesn't resolve them. They persist as moral thorns in the side that haven't been pulled out. It's better to resolve the objections than just to override them, both because of that and because, often enough, you're not actually achieving enough good to genuinely override the objection. So how do you resolve the objections? That comes down to answering two other questions, the "how" and the "who". The "how" question has to do with procedural legitimacy, the way in which power is being exercised.
This is a way of us exerting power over those who have power over us. You have power over us, and that creates a hierarchy between us, but we have power over you, because you can only use that power in accordance with certain clearly defined limits. That's what procedural legitimacy is about. So we want to make sure that algorithmic governance is not only substantively justified, not only used to do the right things, but also procedurally legitimate. But that also isn't sufficient.
As an example of this, sort of out of the algorithmic space: think about the Facebook Oversight Board. A really interesting institutional innovation. I think they've made some pretty substantively justified decisions, and they are, procedurally, just chef's kiss. But their authority essentially comes from, I don't know, the fact that they were on Nick Clegg's reading list at some point. That's how it was originally formed: Nick Clegg decided, and so they did it, and it's good, but their authority is derived from that. It doesn't derive from the users of the platform or from democratic polities in general. It really matters who is exercising power, and that's a really important thing for people working on AI and society to remember. It's not just a matter of... I'm not just saying participatory design is good here. That's sometimes true, but that's not the point. The point is that you have to think, when you're making these big evaluative decisions in the design of technologies, whether you have the right and the authority to do so. And oftentimes we don't.
So that's the task of this broad area of political philosophy that I want to embark upon, and launch, and encourage others into: the idea is, you've got to figure out what our governance should be used to do, how it should be used, and by whom. Okay, so that leads me to the last course of the degustation menu of the paper, where I argue that, because of the nature of algorithmic intermediary governance, it's going to invite us to rethink some questions in political philosophy.
The way to think about this is this: political philosophy has basically been laser-focused on one kind of exercise of power, on evaluating and justifying it. There are obviously counterexamples, but this central question of political philosophy focuses on justifying the state's exercise of extrinsic power by means of law: the ways in which the state coerces us, and how that can be justified.
Algorithmic governance operates in different ways. It shapes your beliefs and desires; it removes some options, makes some feasible and some infeasible, makes some frictionless and some harder. I'll talk specifically about the dimensions of that, but it's just interestingly different. And what happens when you have a set of normative theories that are tailored for a particular space of possibilities, and then you expand the space of possibilities? It's an interesting question whether those normative theories will just transfer to the new area. It's kind of a problem of transfer learning, in a way.
What I think is likely to happen as we do that is that we're going to discover stuff we didn't know before. We're going to discover stuff about the justification of the power of the state, where we had taken certain practical features as parametric, as shaping what's possible, and so hadn't thought about alternatives. I think we're going to discover stuff about the power of the state, and then we're going to discover interesting things about the power of algorithmic intermediaries. One of the things I'll do in this last little bit is a little compare-and-contrast: what are the ways in which it seems, other things equal, easier to satisfy the what, who and how questions if you're doing extrinsic governance by law versus intermediary governance by way of algorithmic intermediaries?
So this will be a whistle-stop tour, but it'll be fun; there's a silly example in there. Okay, let's start with authority and preemption.

Like I said, a central question for political philosophy, some people call it the central question of political philosophy, is: do I have an obligation to obey the law? Is the state's authority justified? Does law have authority? This comes up in all of the central texts in analytical political theory over the last fifty-odd years, and obviously it has much deeper roots as well.
The interesting thing about algorithmic governance and algorithmic intermediaries is that they're able to govern without law, without coercion. In the extreme this can be just through preempting choices: only enabling permissible options, not enabling impermissible ones. This is something Jonathan Zittrain, a long time ago, called preemptive governance. It's also in the work of Mireille Hildebrandt around the same sort of time, and of course in Larry Lessig's work.
The interesting thing, I guess, about algorithmic intermediary governance is that it can be personalized, updated, revised, responsive. Regulation by design has always existed: you can prevent people from going into a room by making a law that says don't go into the room, or by locking the room. You can use physical artifacts to shape behavior. It's just vastly easier to do that within computational systems, where you're defining the environment as you go along.
So let's just imagine, and I'll shout out to my postdocs in the audience for the name, Preemptopolis. Preemptopolis is, let's take the most optimistic vision, on one sense of optimism, of what the metaverse might be. You get up, you get in, and you spend the day in your virtual world. And let's say that in this virtual world all impermissible options are just not enabled. You can't take anyone's property; it's just not an option that's available to you. You can't encroach on people's personal space without their consent, because there are these boundaries. So in this world there's no law and there's no coercion. It's not necessary, because your behavior can be shaped just through technological management.
This means there can be no obligation to obey the law, because there's no option to disobey, and the obligation to obey the law presupposes that you have an option to disobey. But still, all of the questions of justified political authority clearly matter in this world, even more so, because the power is hidden: you don't know who's controlling everything, whereas the law, as I'll say in a minute, has to be public.
So this just suggests to me that this way of framing the central question of political philosophy has been insufficient, insufficiently general. We've thought about it as being about justifying the coercive power of the state; actually, that's just one modality of governance among others. Now we see that there are other ways of governing, using technologies other than the law, and we realize that there's a further, higher or more fundamental question, which is not what justifies the state's right to coerce, but what justifies the right to govern. That's the key thing that matters here. Governing, I should say, is making, implementing and enforcing the constitutive rules of an institution. And that's what's really at stake: what justifies that governing power.
I won't do much of the compare-and-contrast on this one, but I think that actually realizing justified authority is going to be harder with algorithmic intermediaries than in the other case, largely because of the way in which power is obscured. It's so easy to hide: you can just make it an invisible design choice that nobody has the opportunity to question.
Okay, so I think we need to rethink authority in order to handle different modalities of governance. I think something similar is true for procedural legitimacy, the "how" question. This is especially plausible because our theories of procedural legitimacy are almost all more or less forms of legal proceduralism. They really derive from the fact that the way we exercise power is through the law, and so they're often about making the law public in advance, applying it consistently, having accountability and contestability after the fact. It's all defined in terms of the main modality of governance that we have.

But algorithmic intermediaries, like I said, exercise preemptive power, and they're also able to exercise a very different kind of coercive power. Coercive power through law goes through the will. If you had to actively force people to comply with every one of your injunctions (I'm thinking about my children again here), it would be really, really costly and hard, and that's no good. What you need to do is get them to actually want to stop using the iPad at that time. You need to get people to actually internalize norms, rather than just feel them as external constraints. That's a key feature of the law. But algorithmic governance needn't be like that: I can remotely turn off my son's iPad from here.
It enables us not to invoke or invite people's will at all. Anybody who's tried to stream a film from their phone to the TV, when by all accounts it seems like you should be able to, but apparently somewhere in the copyright protections some license you've agreed to says you've only rented it for viewing on a mobile device and not on a static device, or something like that... they don't need to tell you or make that clear, because they don't need to go through your will: they can just disable that option for you. The same is interestingly true for enforcement. You can monitor people's behavior on algorithmic intermediaries very easily. There's this interesting distinction between surveillance and capture that Phil Agre came up with, where stuff that's happening on-platform can just be recorded and monitored, and then automatically intervened against. This is especially true with, for example, large language models and content moderation.
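A minimal sketch of how I read the capture point (the channel, the moderation rule, and all names here are hypothetical illustrations of mine): on-platform, an act only occurs as a structured platform event, so recording and automatic intervention are built into the act itself rather than observed from outside.

```python
def moderate(text, banned=("spam",)):
    """Stand-in for an automated content check (a real one might be an LLM)."""
    return not any(word in text for word in banned)

class Channel:
    """A platform-mediated channel: nothing happens off the record."""
    def __init__(self):
        self.delivered = []
        self.record = []   # every attempted send is captured by construction

    def send(self, sender, text):
        ok = moderate(text)                 # checked before the act completes
        self.record.append((sender, text, ok))
        if ok:
            self.delivered.append((sender, text))
        return ok                           # automatic intervention, no enforcer needed

chan = Channel()
assert chan.send("a", "hello") is True
assert chan.send("b", "buy spam now") is False   # intervened against automatically
assert len(chan.record) == 2                     # both attempts captured regardless
```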
Now, I think it's important to recognize that all of these technologies kind of suck in various ways and don't necessarily realize this idealized goal of perfected governance, but they certainly make it seem attainable, and one that people might want to aim at. So there are interesting differences between algorithmic governance and extrinsic governance by law, and they invite us to think about the underlying values that motivate procedural legitimacy. For me, I think it ultimately has to do with the idea of limiting power.
And then we get to look and see how algorithmic intermediaries fare relative to extrinsic governance. This just seemed to me an interesting thing to realize: the law has a kind of accidental virtue. In a way this is a way of framing a central point in Lon Fuller's The Morality of Law, but I think that when we think about the justification of law in that context, we're not thinking about having alternatives.

If you've got two different ways of governing some particular domain, one virtue of governing by way of laws is that, in order for it to be effective, it has to be public. And if it's public, then people can demand a say over what laws govern them, and they can challenge them, they can resist them. That's an interesting virtue. Algorithmic intermediaries don't require that; they don't have to be public in order to function, and so you lose, in particular, this virtue of resistability.
I don't mean January-the-6th-style resistability; I mean civil-disobedience-style resistability. I mean the fact that when laws are exercised in public, people are able to respond to them, and if they think they're unjust, they're able to unify, to form bonds of mutual trust, in such a way that they can challenge them.
With algorithmic systems that's a lot harder. You're an atomized individual. Maybe some of the people in this room are very smart and can hack their way around systems of algorithmic governance, you know, get that video playing on their TV. But the whole point about algorithmic intermediaries is that they connect us with other people, and so while you might be able to have your sort of interstitial freedom from algorithmic governance, it's not possible to do it in a way that still connects you with all the other people you want to connect with. That's the nature of platform power.
So again, this is just about highlighting the contrast between the ways in which we can govern through algorithmic systems and through legal systems. In this case there's this interesting virtue of legal governance, that it is public and affords resistability, whereas algorithmic governance can operate in secret, and you have to intentionally enable the possibility of resistance. It's important to be clear that these are just affordances; they're not determinative. You can push against affordances, they don't just decide what the outcome will be, but if you're not aware of them and you don't think about them carefully, then you'll just slide in the direction they point.
Okay, so to the "what" question, of substantive justification. There's this concept in political philosophy, it's not very fashionable now but I think it still has some appeal, of justificatory neutrality. When you're thinking about substantive justification, you're thinking about what values you can appeal to when you're governing a polity. Okay, so another Game of Thrones reference, because it's old, and because no one should watch season 8 anyway.
I don't mind that there's a spoiler here, but there's a bit at the end of Game of Thrones where Daenerys and Jon are beside the Iron Throne. She's achieved her objective, and she says to Jon: I'm going to go and break the wheel, I'm going to realize justice everywhere. And he asks, you can see it coming: who decides? What about the people who think they've got it right? And she says: they don't get to decide. This is the point where she thinks she's substantively justified. She's on a moral campaign, and she just wants to go and use her power to realize justice everywhere, and Jon raises the very natural point that other people can have different ideas of justice.
And she's just prepared to drive right over them. So Jon, when he puts the knife in, is being a political liberal. He's saying that we shouldn't have people just imposing their substantive moral values on us, and that when you govern, you should do so on the basis of reasons that are, broadly speaking, acceptable. Justificatory neutrality is undoubtedly an impossible goal, but it's a reasonable ideal, and we all do it. I'm sure many people here, when you're thinking about how to incorporate values into the systems that you're designing, are looking for some sense of: how do we make sure that we're not just putting in our own particular perspective? That's an important thing to aim at. And I think the thing that's interesting here is that, again, there are ways in which governing by way of law makes it just slightly easier to aspire to justificatory neutrality than does governing by way of algorithmic intermediaries.
So, in a nutshell, the idea is really this. Extrinsic power, like governing power through law, enables you to create parameters or boundaries within which unmediated interaction can take place. This is the idealized notion of a market or a marketplace, and the goal then is that you just need to get agreement on the boundaries. That's still hard, but it's easier than having to get agreement on everything that goes on within them.
That's one part of it. Another interesting feature of the law is that in the analog sphere, in the physical world, there's this sense in which there's a default of freedom of action. You don't need the law to tell you something is permissible: you do stuff, you're able to move around, you're able to act freely, and then the law will tell you whether something is prohibited. This means there's this interesting middle ground that is possible with the law, where something is legally permitted, it's not prohibited, but it isn't actively endorsed. It's just the default: people are able to do stuff unless it's prohibited. So the law can be more or less silent on something.

Obviously people disagree about whether that is really being silent, but if you're a deontologist like me, then you think that the way in which you causally relate to something is morally important. It's different to endorse something or enable something, versus just allowing something to happen, or even turning a blind eye to it, versus again prohibiting and preventing it.
Okay, so in these ways we find that governing by way of law gives justificatory neutrality a boost, relatively speaking, and governing by way of algorithmic intermediaries does the reverse: it makes it harder. In a way this is a version of the general thing people often say, that technology isn't neutral. That's definitely part of it, but more specifically: everything that happens in the algorithmic city has been intentionally enabled by the algorithmic intermediaries that constitute it.
You don't have a default of freedom of action, because you're interacting with one another through a constructed system.
This is something anybody knows who's ever taken an offline phenomenon and put it online — like doing online conferences at the start of the pandemic versus doing them in person. There are so many ways in which you have to think, oh crap, we need norms for this — norms that used to be given just by the fact that we were in buildings with walls that create space between people, and then a corridor where people meet. How do you replicate that in a digital context? You're implicated in all of these different choices, and if you don't make them one way, then you're having significant impacts on things. So I think this point — that when you're constituting an environment, you're responsible for every element of the environment — is really important. And partly that is because it removes the option of being silent about something: the default is unfreedom, the default is that you can't do something, so if you can do something, it has to be positively enabled.
And then, if you want to prevent something from happening, there's no difference between prohibiting it and not enabling it. It's an interesting, subtle change. On the one hand, the default is freedom: you can prohibit, or you can permit without endorsing, or you can endorse. On the other hand, the default is unfreedom: you can only enable, and enabling is effectively as good as endorsing; your two options are then to prohibit or to not enable, and I think it's pretty hard to see any difference between prohibiting and not enabling in terms of their moral significance. So that removes the option of being silent about something, and it makes justificatory neutrality harder to achieve. So again: justificatory neutrality is a good idea to aim at for substantive justification — it's just that little bit harder to achieve for algorithmic intermediaries than it is for extrinsic governance through law.
Okay, so I'm going to wrap up in just a moment. There is a lot here — it's a huge paper, and there are many more things to say. In the essay I consider a bunch of objections to the argument, focusing on things like whether we should really apply such stringent standards to the justification of algorithmic intermediary power, in virtue of the fact that it's possible for you to jump ship from any given platform and hop over to another one. You have this power of exit — why doesn't that mean you don't need to apply these standards? Or one can argue that
the stakes just aren't high enough here: I'm talking about governance, but we're not talking about the power of the state. The state can take away your life, your property, your freedom; algorithmic intermediaries can take away your followers. It's not the same. But again, I have responses to that. I think there are ways in which the stakes are interestingly high, but also that this just forces us to think about what it is about state power that actually generates these high demands of legitimacy and authority.
I also consider a set of objections
concerning whether we should focus like
I do on governance by algorithms or
whether we should be more centrally
focused on governance of our group the
two things work together but um it's a
valid concern to sort of think that I I
should be focusing on one more than the
other
But I'm going to wrap up, and I want to leave you with just a few takeaways — these are the things I want you to remember from tonight.
There was an empirical bit: the algorithmic city is the network of algorithmically mediated social relations, and I asserted that political philosophers have largely ignored its arrival.
There was a bit of social theory in there: I argued that algorithmic intermediaries exercise intermediary power over those they mediate between — they shape power relations between them, which is really important — and through them they reshape society. I think political philosophy has overlooked intermediary power in general, whether now or in the past.
New power relations — this was the start of the normative bit — aren't necessarily objectionable, but new governing relations are presumptively so. There's a hurdle they need to meet in order to resolve these objections based in liberty, equality, and collective self-determination, and the way we do that is by appealing to three standards: the what, the how, and the who standard.
It's really important in that context that substantive justification is insufficient. Anytime you're talking with someone about, for example, aligning AI systems, and they're only talking about what those systems are going to do, they're only talking about substantive justification. The next questions are: how will they exercise this power, and who has the right to exercise it — do they have the right to exercise it? This is one of the reasons why I think we shouldn't even take the design and pursuit of AGI as a regulative ideal. Whatever you think about how realistic it is as a target, I don't think we should want it, because it's so hard to see how it could satisfy the how and the who questions if it ever had real power.
The fourth point was that there are some interesting differences between algorithmic intermediary governance and the kind of governance by law that the state does, which is what we've mostly thought about in political philosophy. That will lead to some interesting reflections on the nature of these substantive questions, but it also invites us to think about whether, other things equal — and things will vary in different cases — we have reason to be skeptical about the possibility of satisfying these standards with algorithmic intermediaries, for reasons that don't apply in the case of extrinsic governance by law.
I think this does have one normative upshot. On the whole, I mostly want to give you resources with which to better answer the normative questions that we face collectively, but I do think there is one pretty substantive normative upshot: we should be pretty skeptical about aspirations to increase the role of algorithmic governance in our lives, whether that be by aspiring to a value-aligned AGI, or by way of some of the new approaches to online social spaces, like web3, that rely heavily on the illusion of, and the aspiration towards, perfectible governance.
I think the broader project of which this talk has been part — answering the what, how, and who questions for algorithmic governance — is the work of a whole subfield of what I'm calling normative philosophy of computing. In lecture two I'm going to apply this methodology to evaluate a particular area, the algorithmic distribution of attention in the digital public sphere, but that's just the start of a much bigger project, one that goes far beyond my own work. So this can't be a conclusion — there's far too much to do — so instead I'll just stop.
It is a very special honor to be commenting on this lecture by Seth Lazar, and even more to do so at Stanford, where some of the processes that Seth analyzes with such great care were of course started. I want to thank Rodrigue and the Tanner Lecture on AI committee for thinking of me in this context.
It is also a distinct pleasure to be here in the flesh. Seth and I jointly co-chaired the pandemic-era 2021 AI, Ethics, and Society conference, but we have never met in person — and actually Arvind was the keynote speaker at that same conference, so today is a reunion of sorts. Intellectually, to be sure, because we have already recognized each other as interlocutors who appreciate each other's work; but it is also a reunion on a molecular level, thanks to the now increasingly rare event of being physically co-present.
The pandemic, of course, firmly relocated our professional lives to a digital city: a complex and round-the-clock space of flows that bustles with activity and is spread out unevenly around the globe. Like Seth, I do not mean "city" literally. The city here is a metaphor, a model that stands in for the global network of computer-mediated social relations, to paraphrase his description in the opening of the first lecture.
What he calls the algorithmic city — the subject of his lecture today — should be conceived as a subset of the digital city: it is constituted by those social relations that transit by way of what he calls algorithmic intermediaries.
The definitions are precise and the words are carefully chosen — this is philosophy, after all. I wish you could read the whole lecture, because it is quite an analytical tour de force. It is built like clockwork, and I think you could really see that in the way that he delivered it.
More significant, and most interesting to a sociologist like me, is Seth's careful conceptualization of what he calls the constitutive power of algorithmic intermediaries on social relations. He argues that this power bears two fundamental characteristics: it is preemptive, and it intermediates.
The first characteristic brings up a comparison with the law; the second brings up a comparison with other conceptualizations of social relations. So let me discuss each of these in turn.
By preemptive power, Seth means that technology endorses some ways for people to act on one another, while making other ways not simply undesirable and liable to sanction, the way the law does, but technically impossible.
We are not talking about brute external repressive force, but neither are we talking about what Michel Foucault calls governmentality — the power to arrange things in such a way that people will subjectively align with the demands of what Haggerty and Ericson call the surveillant assemblage. For that process of subjectification to take place, the analysis has to assume the existence of an agency, a subjectivity to be trained.
And what strikes me in Seth's description of the specificity of algorithmic power in its preemptive dimension — and I want to emphasize, in that dimension alone — is that it actually bypasses human subjectivity entirely.
In other words, algorithms have no use for a free, or even trainable, human agent who can make their own determination of whether a norm ought to be followed. The rigidity embedded in the technologies is in some ways an affront to our freedom, to our humanity — all the more so, of course, since that rigidity is not sui generis but the result of a design decision.
Of course the law constrains us too, but the law at least can be broken. By contrast, code rules in this more inflexible manner: compliance is ensured by default, much like a straitjacket or a prison wall would. The built environment actually offers a good analogy, because like code it is man-made.
I was reminded of Langdon Winner's famous discussion of the decision by city planner Robert Moses to build low highway overpasses in New York City. This prevented public buses, and therefore the racialized poor, from traveling to local beaches prized by the white middle classes. The public infrastructure was engineered to keep these two populations apart — until, that is, the poor started being able to afford cars too.
Similarly, algorithms make some ways for humans to relate to one another possible, and others impossible. Seth's canonical example in the paper is that you just cannot hug a friend in Meta's Horizon virtual world. The reason is that in the virtual world, as in the social one, the line between hugging and groping is a fine one: both involve a shrinking of personal space.
Meta has no way of enforcing a rule specifically against groping, except by going through the labor-intensive, and therefore very expensive, work of making that determination and, of course, policing individual harassers. The preemptive default — the four-foot safety bubble for avatars in Horizon — must therefore be understood first and foremost as a particularly economical and efficient way of managing a real problem.
And importantly, it does not solve other aspects of virtual assault, like offensive talk and threatening gestures, which can still accompany the avatar: you can have all the personal space you want, but if somebody is talking to you offensively, you'll hear it. There does not seem to exist a preemptive fix for that. There might come a time when the use of certain words will automatically boot the person off the platform, but we are not there yet.
Preemption must also be contextualized in light of a longer history. The internet, and especially the gaming world — which would be particularly attracted to virtual environments like Horizon — is awash with harassment and bullying. We know that there is pornography, obscenity, hatred, and lies, lies, lies. After years of finding themselves sinking ever deeper into the Augean stables of content moderation, tech companies have gotten cautious. Certain forms of preemptive governance come out of that history, so I think it's important to contextualize that.
So, for instance, the image-generation program DALL·E will not let users generate a pornographic or offensive image. Likewise, the text-generation program ChatGPT will not allow them to produce a defense of fascism — I tried.
Preemptive power, of course, does not inhere in computer code, but code is the medium through which designers of online products exert it, and as such it is subject to the kind of thoughtful societal feedback that Seth is calling for at the end of his lecture.
Following criticism about the personal safety boundary, in March 2022 Meta made it possible for users to disable it for their friends or, if they so choose, for everyone.
Stability AI has actually opted for a different governance regime than OpenAI, with open-source code that lets users build what they want. In this case the source code has some safety protections to prevent unsavory or hateful products, but because it is made available, these protections can be easily hacked and bypassed — the same way we can, despite the legal prohibition, run a red light or take a one-way street in the forbidden direction. More generally, if your preferred program or service does not allow you to do something, chances are there exists one that will, somewhere in the darker corners of the web.
In other words, the algorithmic city is a patchwork of neighborhoods, some of them more strictly governed than others. Even in authoritarian environments where code has been weaponized for political repression, such as China, built-in constraints on communication can be circumvented with appropriate technological workarounds.
This suggests three conclusions on the question of preemption, which can be taken as so many moderations of Seth's argument — an argument which I find compelling, though I think the moderations are useful. First, the inflexibility of code relative to law should not be overrated: because of this multiplicity of environments, Meta's version of the metaverse may not be someone else's, and today's version of Meta's code may not be tomorrow's.
Second, exclusion — or what Seth calls the inability to do something — as the default mode of governing in the algorithmic city makes sense under certain specific economic conditions only: it is because other possibilities, like content moderation, are prohibitively expensive.
Third, the proprietary nature of code, as in the Meta and OpenAI cases, is actually one important way that the strict preemptive character of code is constituted. I shall return to this point at the end of my comments.
So let me now move to the second point, around mediation — or intermediation, rather. The example that Seth gives of the forbidden virtual hug raises another question about the intermediary power of algorithms: what we could call, with Foucault, their productive power. In Seth's argument, algorithms shape how we live together, how we access others, how we interact and communicate with them. It's the flip side of the argument about exclusion.
I know I cannot hug others in virtual space, but what is it that I can — or indeed must — do? What is explicitly allowed, encouraged, and channeled? For that, let me recall Seth's very elegant tripartite typology of power configurations in algorithmic mediations, which I find very useful.
The first type is power over. For instance, a piece of code determines whether my carefully designed home page, rather than some trash talk recorded on an obscure website, will appear at the top of Google's search results. I am painfully aware that my job application or my dating prospects may depend on it.
Power between: a piece of code decides who gets to show me something on Instagram. I have chosen many people in this network, but many others who get to connect with me via ads or recommendations are unknown to me — and yet they have this power over me, because of the algorithm's implementation. It's a between power.
Power through: a piece of code tells me something important that perhaps changes the way I think about the world, or about myself. For instance, if TikTok's algorithm keeps serving me videos that celebrate, say, being trans — that is, if it connects me to people who define themselves as trans — then maybe that will teach me something about sexuality in general, and perhaps even about my own sexuality in particular. In that way, TikTok's algorithm may shape the evolution of identities and culture.
So I find this extremely useful, but at the same time I wanted to offer some concrete examples here, because I think concreteness is sociologically relevant. Here's the thing that actually jumped out at me when I read the lecture: this was a piece about power, and yet the entities in these power relations bore, for the most part, an abstract and impersonal character.
For instance, Seth loosely models various kinds of power relationships between A and B — but who A and B are, sociologically speaking, does not matter to the way that he poses the problem, or only marginally. Of course the sociologist was going to jump at that — I'm sorry, Seth. The entities in the relations he describes are devoid of particularities; power flows squarely from the pattern of communication and interaction. So there's a configurational orientation, if you will, in the talk.
And that, I think, derives from Seth's own definition of social relations as rooted in the network of interpersonal connection: he is interested in the nature of the connection much more than in that of the nodes. Translated into sociological jargon, Seth's formalization of the problem reproduces the characteristic anti-categorical imperative of network analysis.
Abstracting away sociological differences helps with formal clarity, to be sure, but it is not analytically neutral: it helps shine a bright light on some problems, but it obfuscates others. On the bright-light side, you find Seth's abiding concern about freedom, manipulation, and disinformation, which are really present in the talk — that arises, I think, directly from this framework. He worries, as we all should, about privacy, self-determination, and self-fulfillment. This, incidentally, is all very much in line with the philosophy of John Dewey, whom Seth cites a lot, and who has a similarly interactive view of society as a web of person-to-person relationships mediated by communication.
But such a method, I would argue, also obscures other — and, I would say, darker — aspects of social life, and as such it might impoverish our imagination when it comes to identifying the problems that may arise in the algorithmic city.
To put it simply: the algorithmic city is not made of formal agents. It is made of people in flesh and blood, who not only look, talk, behave, and live in widely different ways, but who also occupy vastly different positions in the social structure. Some are rich, some are poor; some manage, some are managed; some are in, some are out. Finally, people already relate to each other, directly or indirectly, through these differences.
So in the next section of my talk, I would like to take a step sideways and ask: what would happen if we decided to apply what I will call a structural lens to the study of social relations, and to the way that algorithmic mediations shape them?
Here I would propose my own definition of the structural power of algorithms. By that I mean the way that algorithms shape the distribution of resources in society. It is a little bit different, but not unrelated. This is another way of saying that algorithms are implicated in the formation and reproduction of inequalities.
I am not arguing here that algorithms make inequalities worse — that, I believe very firmly, is a matter for case-by-case empirical investigation. But they certainly make inequalities different, and understanding how they do so is essential.
To be fair, Seth does not ignore this broader framework. In a reference to a paper I published with Jenna Burrell, Seth writes on page four, quote: the algorithmic city is different from the "society of algorithms," the term that Jenna Burrell and I used. The latter concept aims to encompass every way algorithmic systems infuse society at large. Political philosophy must urgently address the full gamut of algorithmic impacts on our social lives, but the algorithmic city focuses on one specific dimension: the network of social relations mediated by algorithmic intermediaries.
I hesitated a lot about whether to even address this quote — this lecture is not about me, after all — but it bugs me in two particular ways. One, it dismisses the whole of political economy on the grounds that it is too much to chew on. This, in turn, allows us to pretend that we can reduce social relations to people-to-people relations — or, more seriously, that people-to-people relations exist outside of the social structure. The truth is, they don't.
So the question is: how might the social structure change, concretely? What are the ways in which it changes as a result of the influence of algorithmic intermediaries? Let me illustrate the point through two examples. The first is how algorithmic intermediaries reconstitute how we live together by reconstituting property relations; the second is how they change how we live together by allocating people to social positions, and by changing the way in which this allocation takes place.
On the first point: we must interrogate not simply how algorithms are designed, but also by whom and for whose benefit — you did mention that. If we look at them this way, as objects of property rather than simply as technical, communicative mediations, we see a new economic structure emerging, one that is organized around the ownership — or lack thereof — of code.
In "The Society of Algorithms," Jenna Burrell and I discussed the emergence of a basic social divide between what we call the coding elite, on the one hand, and a cybertariat, on the other, which supports it through wage and non-wage labor — including the unacknowledged labor of producing the data that feeds the algorithms. That fundamental social relation is the very condition of possibility of algorithmic mediation.
When people relate to one another by way of an algorithm, they relate by way of a form of capital that multiplies itself through that very relation between them. So the more I message Seth on Discord — and let's be modern, even if that's not true; I know my kids use it — the more Discord knows about both of us. What this means is that the algorithm's owners have a peculiar interest in corralling and enclosing, and especially in intensifying, our social relation. The algorithmic intermediary is generally structured so as to meet these objectives.
So when we think about the power that algorithms may have over me, we should think not only about the ways that they shape my social life, which we discussed extensively, but also about the power that the owners of the algorithm acquire by virtue of their capital expanding through me and Seth. In other words, algorithmic intermediaries reconstitute more than sociality: they rebuild how economic power operates. And I think that's important — when we're thinking about social relations, we should also think about them structurally.
I will now turn to my second example. Algorithms are allocative mechanisms: one of their most important functions is to sort and slot people into categories of performance, deservingness, risk, desirability, and the like. In that task they promise to be inclusive — that is, they try to bring everyone under their purview. They promise to be objective: they know no personal favorites; they're machines, after all. And they promise to be efficient: they work fast, all the time, and for a fraction of the cost of people. That is indeed why we use them — it's a big deal.
Seth correctly remarks that these qualities are not sufficient to establish their legitimacy, and he's right. For algorithmic intermediaries to be granted the right to govern us, we must govern them — that is, we must demand that they abide by certain rules — and he then proceeds to offer a rich set of principles for the governance of algorithmic intermediaries, which you have just heard.
But the stubborn sociological reality is this: no matter how sophisticated the procedural guardrails, or how pure the intentions, algorithmic methods never fully succeed at doing away with the social structure — that is, the residue of the history of exclusion, stigmatization, and exploitation that structures present institutions and patterns of behavior across social groups and categories. Time and again, sociologists, legal scholars, and computer scientists are finding that these patterns, these social differences, stand in the way of algorithmic fairness, and you find that in practically every domain. Now, this does not mean that progress hasn't been, or cannot be, made. But it means there is something we will always bump against: there is a hard constraint on that ideal of governability, and it is generated by those inherited social inequalities.
This in turn motivates a second and final point. Because they classify and allocate positions in a very particular way — through quantitative rankings and scores — algorithmic intermediaries help institutionalize what Fabien Accominotti calls a hierarchical gaze at the heart of all social relations.
In other words, algorithms normalize a view of the world in which people relate to one another through quantified comparisons — that's the main change that they effect: numbers of followers, credit scores, personal ratings (my Uber driver!), fitness scores, and much more.
As I argue elsewhere, this implicit celebration of orderliness and quantified difference makes it quite difficult to see what it is that we all share. It changes the way we live together, in that it makes it hard to treat each other as equals: if the whole point is to look for difference — to look for the smallest difference between us — it's hard to treat the other as someone who is equally worthy, and consequently to envision and build the solidaristic institutions that would reflect these beliefs.
Instead, what we are seeing with algorithms is personalization and individualization proceeding apace, sector after sector — from insurance to work to education — chipping away at the common pool, and leaving us where we thought we would never be in a hyper-connected world: increasingly alone, facing the algorithm. And sadly, a virtual hug is not a solution.
All right, thank you very much, Professor. If I could invite the both of you to come up and join us here. Seth, I'm going to ask your indulgence to refrain from an immediate response to these stimulating comments, since we'll have ample time at the discussion seminar, and I would like to solicit a few questions from the audience. I'll bring you a microphone — if you raise your hands, I'll come by. Why don't we begin here with you? If you give us your name and then your question, that would be lovely.

Hi, thank you very much — that was excellent and very thought-provoking. My name is David; I'm a student at the GSB.
At the intersection of law and algorithms, and the hierarchy of which should govern which: what about situations where the coercive power of law isn't up to the task of prohibiting or curbing unjust actions, thanks to algorithmic decisions made in jurisdictions with a set of normative and legal behaviors that differ from the jurisdiction where the harm is being caused? I'm drawing on reporting by Max Fisher in the New York Times on the role that Facebook played in the Sri Lankan anti-Muslim riots. Facebook's algorithms reportedly accelerated posts about an invented threat by Muslims to dose the population with sterilization drugs, and this led to rapidly spreading riots that resulted in the murder of Muslims. Facebook had previously responded to government officials' requests to deal with the dangerous posts by claiming that the hate speech didn't violate Facebook's community standards; it was only when the government used its power to shut down social media sites that Facebook actually got in contact with the officials. Fisher points out that the root cause lies where people do not feel they can rely on police or the courts to keep them safe — research shows that panic over a perceived threat can lead some to take matters into their own hands — and he notes that Sri Lanka is Asia's oldest democracy. So what principles of communicative justice should govern that kind of intersection between algorithms and law?
Yeah, it's a really good question — I will actually talk a little bit about things like that tomorrow — but what it raises is this question of proper authority: where is the source of the authority to govern, where does it come from? My standard approach to that is always grounded in democratic authorization; I think that authority comes from we, the people. And there's an obvious problem that arises when you have these transnational platforms that don't overlap with any particular democratic polity.
One can reach for ideals of platform democracy — not some silly poll that's just flooded by bots; there are genuine proposals for how to achieve a platform democracy. The problem is that most people don't particularly want to spend their time on the internet figuring out what norms to be governed by. So in that event, I think the role that these platforms play in our lives creates a balance that has to tip at some point: at some point, even if we don't want to, we need to have a certain measure of democratic governance over these things. Prior to that point, it
over these things prior to that point it
can be possible for authority to be
justified on the grounds of there being
no better alternative available
um and you know that's one of the ways
in which people justify the power of the
state on you know more or
less samaritan grounds no one else is
going to do it no one else
is able to do it better
um so someone has to govern so the
state's going to do it
um and so there are at the very
least pro tem grounds for
saying that the authority can come
from the platform if they're the only
ones that have the ability to do it but
it's a very unsatisfying
outcome because it just
doesn't mesh with the
underlying democratic ideals that we
have obviously another
approach is to have appropriate
regulation within each country
um and that's certainly an aspiration
and I'll talk about tomorrow you know
how I think that should work in a way
that doesn't lead to
the sort of thing
that we're seeing with the case
of Texas
the kind of thing the Supreme Court cases
pushed back on where you know if you allow
democratic governments too much
influence over these things then you can
get some real problems on the other side
you know the banning of critical
race theory in Florida schools for
example is a similar phenomenon so you
want to kind of find the right balance
between these things of maintaining
sufficient
independence of the digital realm in
order to prevent that kind of
government overreach while at the same
time having democratic authorization but
I will I will be talking about that
tomorrow in some depth so um hopefully
you'll be here for that
I'd like now to collect two or three
questions before we wrap up for the
evening so let me see a hand here one more
here and I'll look for someone else
uh hi uh thank you for the great talk
um my name is Julian I'm a postdoc at
NYU just uh passing through but
um I'm curious about this question of
who exercises the power in the context
of algorithmic
intermediaries that are in some sense in
the commons right like there's this
move towards decentralization of for
example social media where maybe you
have totally decentralized protocols
that are totally open source and the
user opts in right but that is an
algorithmic intermediary and so I wonder
who is exercising the power
get one more question
uh hi my name is Glenn Fajardo I teach
at the Stanford D school
um a lot of algorithmic
intermediaries would describe themselves
as just a neutral platform right there's
like these claims to neutrality uh that
they're not exercising power if you
could wave a magic wand how would you
want them to describe their relationship
to power
do I have one
um yeah cool questions thank you
um
so look I think that there is a sort of
Illusion in some of these decentralized
models that we're getting rid of power
you know like do away with
intermediaries that's kind of the
mantra of crypto and it's like no you're
not doing away with intermediaries
you're just deciding what the
intermediary is going to be the
code you create is going to play that
role
um I think that like it's a sort of
controversial thought to assert that
algorithmic systems themselves can
exercise power that's philosophically
controversial but I do think it's true
especially in these cases when
they're not under the effective control
of any individual and when they are able
to dynamically update and
respond to the different circumstances
that they face so I think that in these
cases it is the intermediaries
themselves that exercise power that have
power and you
know it's an interesting question
whether it's different being governed by
an algorithm versus being governed by a
person you know like some people might
say well the thing that is problematic
about power is that some people have
power over other people so if it's an AI
system that's governing you then that
worry doesn't arise that's definitely not the way
that I go but you know I think
that's the aspiration that underlies at
least the crypto side of
that
um on the decentralization part which
obviously doesn't need to have
the same measure of
governance as with
crypto where it's more like you know the
fediverse
um I think that's a really interesting
aspiration I have this long discussion
in the paper about sort of the role of
exit in legitimating power
um in a way it's a kind of zombie
argument that just won't die despite all
of the many scholars
working on technology and
society who have argued that consent is
not sufficient to justify or legitimate
you know whether it be data collection
or whatever it might be it just
keeps coming back but I think
there are really interesting
reasons why it does and also good arguments
against relying on consent one of them
being that
decentralization or giving
lots of options is in some ways a way of
avoiding governance and that can be a
bad thing right and that's something I
know others have written about
um that you know when you have a ton of
collective action problems to deal with
you kind of want a centralized Authority
you know that's the way we solve
collective action problems generally
um so yeah really great question
thank you if I had a magic wand
certainly I wish that people would stop
calling them neutral platforms like
you know there are interesting and
important legal contexts here and I'm not
a legal scholar especially not a US
legal scholar
um so there are aspects of
that relating to Section 230 of the
Communications Decency Act that are
you know important in that context
which I don't pretend to pass
judgment on but certainly
um you know since Tarleton Gillespie's
work on these themes you
know his observation that platforms
intervene
um you know I think I'd probably
want them to acknowledge that and the
reality is some people do acknowledge that
you've got to recognize that you're
exercising power and then you've got to
figure out how to use it in ways that
are Justified and that's a burden but
it's also you know it's an opportunity
for us collectively and this goes back to
you know some of the things that
Marion was saying where you know it is
incredibly hard to change these social
structures and they will inevitably
um curtail our opportunities
um but these are one of the very few
tools we have that have some promise of
moving the needle at least a
little bit and I think that the
opportunity to do that in a way that is
democratically legitimate and that
actually draws on the shared
values of the society is one that should
be recognized and embraced
um not one to wash our hands of and say
we're just a pipeline
all right we reassemble tomorrow at 5 PM
in this room for the second and final
lecture in the series and another
discussant's comments and then once more on
Thursday at 10 A.M for a much longer and
open discussion with two further
discussants for now would you please
join me in thanking Seth and Marion
[Applause]