hello
thank you, we thought i'd stand here for
a moment to get a photograph
for the royal society archive.
do you mind? don't wave though,
i think you should probably just look as
though you're interested
and intellectual and uh
how's that oh keep looking interested
now you can now you can wave if you want
hello
is that good all right thank you for
that um
so thank you for joining us for this,
which is the final event of the royal
society's you and ai series,
and i want to welcome
everyone here tonight at the barbican
and the people watching online.
so thank you for joining us. i'm
professor brian cox, the royal society
professor for public engagement
in science. now you have to go back,
in fact, to homer's iliad
in 800 bce to find the first accounts of
automata and over the centuries those
ideas have developed into
the more familiar ideas of robots
cybernetics and now
artificial intelligence and it was royal
society fellow alan turing who began to
grapple with the notion specifically of
a machine based intelligence
throughout the 1940s and in the 1950s he
posed the question which has
become known now as the turing test
which is can
machines think and the idea is that a
machine
could be understood or presumed to think
if it exhibits an intelligence which a
human
might think was actually human. and
in reality, and this is something we'll
discuss tonight, that might
be considered to just point to the idea
that
humans are gullible, rather than
being a good measure of
machine intelligence, if you like.
but perhaps this tells us that our
relationship with ai might be
just as important as the relative
intelligence of the machine
itself as the use of artificial
intelligence grows and spreads
throughout society
how do we feel about it making decisions
on our behalf and doing jobs for us well
last year
the royal society launched a landmark
report on machine learning
the technology that drives many of the
current advances
in artificial intelligence and since
then the society has been supporting
a well-informed public debate it's a
bold idea isn't it
a well-informed public debate
we could use that in many other fields, i
think
about the development of ai in order to
help create a society in which the
benefits of ai
are shared equally now with me this
evening are some of the world's leading
thinkers on ai and together we'll be
discussing ai's potential
to revolutionize fields like healthcare
and education
but also the challenges and issues
surrounding the ethics
of the mass use of the public's, that is our,
data. now before i introduce the panel
i'd like to thank
deepmind who've kindly supported this
whole you and ai series
but now without further ado could you
please welcome our panel
let's see
so by way of introduction, professor
suchi saria is the
john c malone assistant professor in the
department of computer science at johns
hopkins university.
professor peter donnelly
f med sci frs is professor of
statistical science
at the university of oxford and ceo of
genomics plc
and dr vivienne ming is a theoretical
neuroscientist technologist
entrepreneur and co-founder of socos
now the first question before we really
start i think
is to ask each of the panel for a
definition
ai has captured i think many of our
imaginations perhaps our
nightmares in some sense but it does
mean different
things to different people so perhaps if
i could ask each of you first of all to
define
what ai is
oh go okay um
so uh you know artificial intelligence i
think can go by a variety of definitions
uh one of the dullest but perhaps most
accurate is it's
any autonomous system that can make
decisions under uncertainty
so if there's a problem that has no
right answer and there's no human there
to make a decision
an ai would be a system which could do
that. there's no right chess move,
there's no correct answer to how fast
should i take this right turn in my car
or should i give this person a loan or
not they're just fundamentally uncertain
decisions
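this idea of a decision under uncertainty can be made concrete with a toy sketch. the loan rule, the numbers, and the expected-value threshold below are all invented for illustration, not anything described on stage:

```python
def approve_loan(p_default, loss_if_default, gain_if_repaid):
    """Toy 'decision under uncertainty': there is no single right answer,
    so the system weighs each outcome by its probability and takes the
    action with positive expected value."""
    expected_gain = (1 - p_default) * gain_if_repaid - p_default * loss_if_default
    return expected_gain > 0

# a likely repayer is approved, a likely defaulter is not
print(approve_loan(0.05, loss_if_default=1000, gain_if_repaid=100))  # True
print(approve_loan(0.50, loss_if_default=1000, gain_if_repaid=100))  # False
```

real systems would of course estimate those probabilities from data rather than take them as given, which is where machine learning comes in.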
i actually like something a little more
practical
when i describe ai particularly because
i do a lot of work
in what's called the future of work and
to me then
ai, particularly modern ai, is any brief
expert human judgment: read a contract,
decide whether to hire someone, is there
a mass on this x-ray,
made increasingly faster, cheaper,
and often better than a human can make
it.
i think vivienne's totally nailed it. i
don't have a lot to
add except to mention a specific
form of ai,
and one of the developments that's
driving lots of the things that impact
on our lives now and that's something
called machine learning
so that's a form of ai and it's the idea
of computer systems that can
learn for themselves from examples and
data and experience the idea that
instead of as in the old days
programming a computer to tell it what
to do in every eventuality
in machine learning you program a
computer so that it can study the
examples
and it can learn for itself what the
patterns are to help it make the sorts
of decisions that vivienne was talking
about
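the contrast peter draws, between programming a computer for every eventuality and letting it learn patterns from examples, can be sketched minimally. the data and the nearest-neighbour rule here are illustrative assumptions, one of the simplest possible learners, not anything from the society's report:

```python
def classify(train, point):
    """Learn from examples rather than explicit rules: label a new point
    by copying the label of its closest training example (1-nearest-neighbour)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: sq_dist(ex[0], point))[1]

# the examples the system 'studies': (features, label) pairs
examples = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
            ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]

print(classify(examples, (1.1, 0.9)))  # small
print(classify(examples, (8.5, 9.0)))  # large
```

nothing in the code says what "small" or "large" means; the behaviour comes entirely from the examples, which is the point peter is making.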
i think they've captured most of
the essence of what ai is just sort of a
little bit of historical context which
is
ai as a field emerged in the 60s where
it was primarily the ambition of
researchers from a diverse group of
fields all the way from physics and
cognitive science to
computer science if you will engineering
where the goal was to build computers
that could
behave intelligently the way humans did
and now one might argue well do humans
behave intelligently
and so essentially you can see the goal
was ambiguous and over the course of the
last 50 years
or 60 years we worked hard
to determine what intelligence is what
is human intelligence how should
machines how should we build
machines that are intelligent and should
we be mimicking humans or should we be
determining
what is the right thing to do in a given
situation and and that's
sort of the field as a whole you know
and it's worth noting that many of the
things we really thought of
as being the exemplars of intelligence
like can you do a math proof or can you
play chess
where in fact some of the very first
programs newell and simon
invented back in the very early
60s could do these things,
whereas 'are any of you
in the front row smiling' turned out to
be an enormously difficult problem to
solve for a very long time.
although now it turns out we can throw
it onto your phone
as a free app. but it turned out it's
a much more complex problem than we
originally appreciated it to be.
not just because we can't see them
because of the lights yes the glaring
lights although the ones up front
are clearly waiting to get their money's
worth from this performance.
you've all um described i think in a
sense what we might call general
ais which is you know something that's
very human-like and multi-purpose,
but there are also ai and machine
learning systems that have very specific
tasks so would you like to comment on
the difference between those two
am i going to do this again all right um
so
so there are these two big concepts in a
lot of modern ai discussions
one is called artificial general
intelligence this is something that kind
of thinks like we do it really
understands and navigates the world,
and i'm going to contend
no such system like that exists in the
world today
no one is on the verge of inventing it
however
these systems that can take a very
targeted approach to understanding the
world
for example is anyone in the front row
smiling
is there a loophole in this legal
contract is there a giraffe in this
picture
building very targeted systems like that
is exactly where ai has been so
effective.
but i would contend that the one that
can recognize giraffes
understands nothing about giraffes it
has no
giraffe concept and when you look at the
mistakes
made between people looking at
photographs
and kind of looking at ambiguous
photographs to make a decision is there
a giraffe here or not,
you can understand why they might be
wrong maybe this is a llama
with a head cold or something whereas
when these ais make these errors
frequently
it looks nothing like a giraffe or um it
thinks something that looks exactly like
a giraffe
is not and you come to appreciate we're
really making decisions in profoundly
different ways
so i would contend that these very
simple systems that are really the core
of ai today
are approaching problems in a
dramatically different way than
all of us are. does that mean that the
word intelligence...
it's a complicated word, isn't it? i
think there's a lot of baggage
associated with it when you say an
intelligent system
like a lot of people think of something
like us
rather than something that's good at a
specific thing i think um
you know, from having moved
from india to the us at the age of 17,
i got to learn new things that i didn't
get to see as a child,
and i distinctly remember... so humor
is an example of something where, you
know and it's very cultural and
contextual
and something i find funny now i might
have not found it funny when i was 15
and lived in india but i distinctly
remember thinking of it as a problem
where i was back solving
you know somebody was saying something
people were laughing i was trying to
figure out what's going on
and then eventually i understood the
context so what i'm getting at is i
think
one thing that's difficult about
intelligence is
almost any of these abilities, like
solving a
difficult math problem
or recognizing giraffes from
llamas, you can
pretty much sit back and think
of in some deductive way or logical
way in which you can,
i think, train a computer to do it, or
even perhaps ask children
how they're doing it, and they'll
describe it in a way that you could get
computers to do it.
so i think this notion of what is human
intelligence and what separates
you know what is that essence of human
intelligence we're trying to capture i
think is an open
question from my point of view i don't
know what do you think
um no i completely agree and and also
just
actually to pick up on both points and
brian's earlier question
in some sense the reason we're having
the discussion i suspect
many reasons but it's because of the
advances in
the machine learning in particular
approaches to very specific
tasks and problems the sorts of things
that we interact with on smartphones
every day
the ability to speak into a phone, and,
i don't think it understands what we're
saying but it knows how to react to what
we're saying
recommender systems when we shop online
the ability to have
friends tagged in photographs on
facebook or other
apps all those sorts of things are
because of progress in these very
specific tasks
image recognition is one of them and
it's been again to look back over the
history for most of my academic career
you know for example computer vision has
been a very active field
but throughout the 80s which sort of
dates me from early in my career to
through the 90s and even the early
and much of the 2000s um computer
systems were just not very good
they'd get a little bit better each year
but they were nowhere near as good as
people
and then over the last six or seven
years they went from being not very good
at all
at that specific task to now in in many
tests
as good as or better than people um
whether that's spotting giraffes whether
it's
it's looking for pathology in an x-ray
of some kind
um computer systems because of machine
learning can now outperform people and
similarly with
voice recognition similarly with some
translation tasks so
it's that massive progress over the last
five or six
or so years which has meant that we
interact with this stuff all the time
but we're interacting with it as you
said brian in in terms of the ability to
do very specific things
now remarkably well. okay, well, let's
move on to
your questions. so we've split them up
into three sections and the first one
we've titled who benefits from ai at the
moment
and the first question is from uh anya
and
the question is who does the current
system of ais
currently benefit the most
so who'd like to take it? i can lead this
off every time. i'm a paid pompous jackass,
so um you know i think one of the truths
of almost all of technology uh is that
at least initially
it always benefits the people that need
it the least
uh the very nature particularly i've
done a lot of work in educational
technology in fact applying artificial
intelligence to education
and everyone that goes into this field
does it because
let's be frank, we're out to save the
world. we're out to save some little kid.
can we build a little ai tutor, imagine
that,
for every kid in every home around the
world? and the simple truth
is a company like that is successful
because
it's able to sell its product to a you
know
wealthy set of parents that want to get
a few extra points on a standardized
test
which will change nobody's life
whatsoever
um and so we build these things and they
tend to be adopted
and part of the heart of this is because
when we build them we don't actually
understand people
many of us understand a lot about
machines although there's so much more
to learn
but how machines and people interact
what do people think
is valuable i will certainly say in the
education world
if you've put an app in the app store
you have solved
nothing because the people that truly
need that help
will never go there they will never
download that app they'll never make use
of it because they don't believe in
their heart it will actually make a
difference
and so you know we look again and again
we see that this
really is more of a concentrating power. it
inevitably
increases inequality at least in the
near term and
ai in particular is interesting because
it can substitute
for human judgment at least in specific
tasks and in particular
often highly trained professional human
judgments
that in itself is profoundly
inequality-increasing, in ways that
we've never seen before,
because now not to pick on someone who i
don't think is a bad guy but
now jeff bezos can substitute his ais
for a whole load of people he would have
had to have paid to make those judgments
before
and that also has a big influence on
inequality
so you're essentially suggesting there's
a subtext to this question which is
it's not necessarily benefiting the
public
you're taking a cynical view
that many of them are just increasing
corporate profits
i wouldn't build this stuff if i didn't
think it was worthwhile and
and i hope we have a chance to talk
about that sort of work but
um you know i'll be really clear
building something because you
think it will do good in the world is
dramatically different than
actually doing good in the world and we
can build
tremendously powerful systems they're
tools they're not
truly autonomous in the sense that they
don't understand the world
but they're tremendously powerful tools
but if we simply think that a hammer
by its very existence builds a house
then we really have lost an
understanding of how the world works
i think this partly has to do
with the way funding is structured right
now
so most of funding in this field right
now comes through
um individual research grants which
means
no single professor or single lab has
enough grants to build anything of
significant
consequence on the other hand venture
capital
and private corporations have a fair
amount of
funding but also the vision they see the
vision for how
this can bring good to the world but
then
like vivienne said, the challenge is if you
are going down that route,
you have to have a sustainable
business model which means you're
naturally heading off in a direction
where in the beginning
you're going to cater to audiences where
they're able to pay for it
i work a lot in healthcare and
i think one of the promising things i
see in healthcare is because of the
centralized nature of it which means
data
held in large enterprises or here the
nhs for example
if we're able to take advantage of this
data and identify ways to
you know diagnose diseases earlier treat
patients in a more targeted way that
technology can really be distributed at
scale and be
can benefit many. so i think i'm
optimistic.
um well i agree with both the previous
comments i
suspect i'll say that many times, you're
going to have to go first next time. but
it's a kind of complex
and layered question who benefits from
the systems at the moment there's a
sense in which most of us benefit
because they make our lives a bit easier
through our smartphones
and the systems we interact with there
there's a convenience i mean
those of you who would be my age and
have kids they find it impossible to
imagine how you ever managed to meet
anybody in the days
before you could use these systems to
find out where you are and where they
are and
converge, but somehow or other we muddled
through. so there's a sense
in which our lives are more convenient
at the moment because of these systems
i think there's a a real sense and i'm
hugely positive about it about the
potential benefit in really important
areas
healthcare is one of them and education
is another one and the people on each
side of me are making a difference in
both of those
where the possibility to improve
so many aspects of our lives is real
and yet i think vivienne is absolutely
right that we
need to think very hard as a society
about how we want this to play out.
i think if we let this happen without
intervening or worrying or stewarding it
in any way there's a very real danger
that the short-term and medium-term
consequences will be not to improve the
world as
as we hope but to increase inequality
and
i think as society we have
a massive duty to be active and to work
out how to steward this we need to be
stewards we need to look ahead and think
here are different scenarios as to how
this might play out, which are the ones
we as a society want to happen,
and uh which ones are we less keen on
and how can we try and
use the levers we have to drive things
in the right direction because i think
it's pretty clear, as vivienne said, if we
don't do very much
then the impact in the short to medium
term will be to increase inequality
to make lives better for the people who
are already pretty well off and not to
help the people who aren't and to make
those gaps bigger
which i don't think we want but we need
to be active in working that out. and by
increasing inequality you mean
the most obvious thing, which is
replacing low-skilled jobs, essentially?
or is there more to it? i would be
even more provocative.
even more provocative
uh, you can go read the
lunch with the ft piece i did, which got
the very provocative title
'the professional middle class is
about to get blindsided'.
turns out it's not impossible to build a
robot to pick strawberries or to drive
cars
but it's actually very difficult and
much more difficult than building a
robot to do financial analysis
or to read x-rays so those latter things
seem like very sophisticated jobs we pay
people an enormous amount of
money to make those sorts of decisions
even though often those decisions are
very rote
well if they're rote and economically
valuable and people make them on a
regular basis
you have just described a machine
learning training set
and so it turns out it's a lot easier to
build an ai to do those jobs than to
build an ai to pick strawberries
and it's it has interesting implications
because in a lot of the discussions that
i'm sure many of you have read about
or heard on the radio it's very much
about what do we do
with all of these low-skilled workers
that will be put out of jobs
which is still a legitimate question
even if ai is good
i grew up in salinas california which is
where john steinbeck is from
so i grew up in the realm of the grapes
of wrath
and you know the devaluation of humanity
out on the field 12 hours a day and it
still goes on
every day and it would be a human good
for that to end
it would be good if we could build
robots they could go out and
collect all of this for us um and
and so it's still legitimate question to
say what do we do with
those millions. the largest job
vertical in the world is agriculture and
the next is transportation
but what do we do with a group of people
that we're told if they went to
university
and worked really hard they would have
an amazing job that would take care of
them for the rest of their life and it's
very possible
that many of those jobs they won't
disappear but they will be as i call it
de-professionalized
where it'll be possible to hire a much
less educated person
and essentially stick an ai in their
hands to do the hard judgments
and then pay them so much less we've
seen it happen through globalization
we've seen it happen through
automation of factories. the first
instinct
of cfos everywhere is to chase labor cost
to zero
and ai won't necessarily it may well
create more jobs than it destroys
but how many people will be qualified
for those very elite creative jobs
and how many will fall down into a sort
of low agency
service sector where they really are
indistinguishable from anyone else,
because has this got anything
specifically to do with
ai in the sense that you you described
many instances in history when
technology has caused these problems so
it's displaced
particular sets of workers and so on um
is this really the first time though
that we're thinking about this
when a new tool has become available
or is there something unique about ai
and machine learning systems
that you think will cause a bigger
dislocation than the dislocations of the
past
i've got an opinion but i'm going to
shut up for a little moment. well,
i mean i think one thing to echo one of
vivian's comments
one thing that's likely to be different
this time around is that
so artificial intelligence machine
learning will impact the world of work
probably in substantial ways
and ways i think that aren't so easy to
predict in any kind of detail and there
are like
10 different learned reports which say
diametrically opposite things, so that
it's more evidence that it's hard to
predict but i think one thing that is
clear is it'll have a much bigger
impact than previous revolutions, like the
industrial revolution and so on
on kind of white-collar work uh in the
way vivienne was describing.
many of those tasks can be done
or many at least many parts of those
jobs can now be done
uh or will soon be able to be done as
efficiently or better using machine
learning systems
than the people who have had years of
training uh
there are interesting questions about
whether that will change roles in
the way
you know first of all calculators uh
helped lots of people but probably
didn't drive
too many out of work. i mean, there were
people in the old days who were
actually called computers, who were
responsible for doing sums.
they don't exist anymore;
they were a small segment. but
calculators and then spreadsheets they
probably augmented
our ability to do things rather than
replaced but i think there's a real
chance
that the impact this time will be
across the piece, but much more than
previously,
at that level, compared to say the
industrial revolution. just so i can give it a little
bit of a face this is one of my favorite
examples it's not my own
personal project but there was this sort
of notorious little competition run
recently at columbia university between
a startup that had made
an ai to read contracts and a bunch of
human lawyers
and in the competition they'd engineered
these non-disclosure agreements a form
of contract with a whole bunch of
loopholes in them and then they had the
two groups go at it
the ai, and these are very rough numbers,
the ai found 95 percent
of the loopholes, the human lawyers found
88 percent,
so, whatever, they're only human, so let's
call it a tie.
but the actual much more interesting
number is the human lawyer took 90
minutes to read each contract
and the ais took 22 seconds
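taking the rough numbers quoted (90 minutes per contract for the human lawyers, 22 seconds for the system), the throughput gap is easy to work out; this is just illustrative arithmetic on the figures as stated:

```python
# figures as quoted on stage, very rough
human_seconds = 90 * 60   # 90 minutes per contract for a human lawyer
ai_seconds = 22           # per contract for the system

speedup = human_seconds / ai_seconds
contracts_per_hour_ai = 3600 / ai_seconds

print(f"speedup: ~{speedup:.0f}x")                         # ~245x
print(f"ai contracts per hour: ~{contracts_per_hour_ai:.0f}")  # ~164
```

so even calling the accuracy a tie, one system reviews in an hour what a lawyer would need weeks to read, which is why the timing number matters more than the accuracy gap.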
now what you do with those loopholes how
you make judgments
that's still a fundamentally human task
but the vast majority of lawyers
particularly junior lawyers
spend their time reading contracts and
finding loopholes or doing case study
work
doing busy work to learn how to be this
other kind of lawyer
and it may well be that that kind of
work disappears those tasks
disappear uh and then it becomes a
choice
do we do what peter says which is all
the lawyers all of them that are out
there become sort of
super lawyers augmented in the moment
they know all the loopholes and they
begin thinking about what to do with
them
or do we say well gosh then you know
let's take
keep the five best lawyers and we'll get
rid of all the rest well wouldn't it be
great if we could
you know have instead of lawyers just
everybody's a super lawyer
and effectively if we could have our
contracts read in
half an hour, with solutions, as opposed
to five days of reading and cost us
you know a tenth of what it costs now it
seems like
everybody would be more productive so
in a way i think
the role will change. it's that
version of what you said: right now we
don't have a choice, your
junior lawyers have to
spend time reading those documents
for loopholes and they'd rather be doing
more interesting things
so i really appreciate, i mean i
genuinely appreciate that, because when i
say this is a choice, i think it's
genuinely a choice. so we,
i'm not going to describe it at the
moment, but we built a little system that
can analyze kids' artwork, and it adds
to the ability of one of our systems
to do some education interventions.
so there are like six people in the
world
that study children's art and look at
its implications for their cognitive and
emotional development
and we took their work and we were able
to build it into a system and train it
on these kids artwork
and now we give it away for free as part
of this little system which
itself we give away for free um and
in that sense yeah you can reach out
probably even the better example is
the idea that you could just even have a
dumb mobile phone if it can take
pictures take a picture of a mole you're
worried about and have it tell you
whether it's cancerous or not
or at least whether you should go in and
see a doctor that would be amazingly
valuable
but in this question of have we been
here before
well it really depends because we've
been here in multiple ways
uh in america we have this chain called
jiffy lube you bring your car in forgive
me if you have it here also but
you bring your car in and they change
your oil and your
air filter and do all this sort of work
well that used to be a middle class job
where an actual you know auto engineer
would come in and do the tune-ups on
your car
now it's a job you get if you didn't do
so well in high school
and you have no agency you just follow a
script a computer does all the
diagnostic work you do a little upsell
on an air filter and the car goes out
and the next one comes in
so all i'm saying is the overwhelming
economic trend of the last 30 years
has been towards
de-professionalization. it doesn't have to
be
it could be very much what you're
talking about but it requires an
explicit choice
on all of our parts and let me tell you
if you leave it up
to the entrepreneurs of the world we're
going to try and extract
the wage value out of the system and
keep half for ourselves and
offer the other half to you as a
discount, and as a result
that class will disappear when's the
last time someone used a travel agent
for example so 10 or 15 years ago uh
travel agents were absolutely essential
in the world because they were the only
people who knew the complicated things
and could read the airline timetables
and so on and they still exist but but i
think we interact
with them much much less um because it's
possible to do most of these things
yourself
um, through various apps online. we've
obviously gone over time already, almost
done with the first section.
so very briefly, we've
got a demonstration i'm going to go to in a
minute, but
just a question from emily, the last
question in this section which is
uh so if you can be brief how do you
think artificial intelligence in films
has impacted our view on ais and the
potential they have
to? how many people here have heard of
a
deep reinforcement learning model
okay, a few hands. how many people
have heard of skynet
so there's a partial answer. i think most
people's conception of what ai
is comes from black mirror, which is
actually not a bad
depiction of some of the ethical choices,
if not a great depiction of the technology.
it comes from the terminator movies it
comes from many and and probably
reflects more of our fears
uh, i mean, it feels a little more
spiritual
than anything else and and i don't know
that it's done a great
service to anyone and trying to get them
to understand the implications of these
technologies
do you find this with the field that you
work in?
do people tend to use movie
analogies and film analogies
to imagine what you're doing oh yeah
absolutely people get very excited and
fascinated and they want to know more
and
immediately they think of not
mathematics statistics and algorithms
they think of
powerful robots and so
absolutely. and do i take advantage of
that? 100 percent. kidding.
it's interesting how often you talk
about the future of work and people
think what you're talking about is c-3po
is gonna come
and literally tap you on the shoulder
and say you're out that's my seat
uh and there's nothing like that
actually taking place yeah i should say
actually
there is a report a piece of work that's
been done by the leverhulme centre for
the future of intelligence today, which
is available
it's published today it's available on
the royal society website so if you want
to
look at more of these questions, dig
deeper into that question, you can go
to the royal society website and have a look.
i am going to go to this demo
over here which is a demo it's been set
up it's a new another example
in fact of a machine learning and ai and
so i'd like to invite to the stage
professor adrian hilton and his team
from the university of surrey
thanks. hello, i'll say hello.
perhaps you could just introduce the
team actually. so we have
charles and marco, and hannah's going to
perform live for us. yeah, so what are
we going to do?
this is uh using ai machine learning for
motion capture
yeah so so what we're doing is
converting video
into 3d models of people's movement and
over on the left hand side you can see
the kind of video input what happens
then is that's
transformed into a a three-dimensional
representation of the person's movement
and then we can map it onto a character
both indoor and outdoors so this is a
very portable system
yeah, and then what was the
breakthrough here what are the
difficulties in capturing human motion
so so what machine learning and ai has
enabled us to do is take this video data
and really extract
some high level understanding so in this
case what we're doing is understanding
that there's a person in the video
and on top of that we're understanding
where their joint positions are in that
video
and that's something that we've not been
able to do until the last few years and
it's
it's a powerful technology um in this
case going into the entertainment
industry
yeah because i suppose i used to see
these things in film
and they used to be little dots all over
everybody you know so you could see
what the vision system was doing, but
in this case it is just looking
at a person is it so there's a lot of
intelligence in there to recognize
that's a hand
that's a head, is that right? exactly,
so what
is happening is the ai machine learning
is taking the video
and detecting that there's a person
there and then labeling the body parts
of the person purely from the video
and the challenge really is to be able
to do that in complex scenes so in
someone's home for instance
if you're detecting their motion or in
an outdoor scene like we just saw
so what are we going to see here
so up on the top left we have
it's just calibrating itself and then in
the middle we have
hannah moving and her movements
conveyed in 3d
and then on the right hand side we have
a computer-generated scene
of the barbican courtyard which is
directly above us here
and the model being animated in that
scene
i see so there's
the recognition part of it but there's
also a human model in there to say
this hand is moving in this way
so the ai is mapping
out where the person is in each view and
then we combine that together into a
three-dimensional model
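the pipeline described here (detect a person's 2d joint positions in each camera view, then fuse the views into a 3d skeleton) can be sketched roughly as below; the two-camera setup, the joint names and the plain linear triangulation are my assumptions for illustration, not details of the surrey system:

```python
import numpy as np

def triangulate_joint(P1, P2, xy1, xy2):
    """linear (dlt) triangulation of one joint seen in two camera views.
    P1, P2: 3x4 camera projection matrices. xy1, xy2: 2d pixel coordinates."""
    x1, y1 = xy1
    x2, y2 = xy2
    # each view contributes two linear constraints on the homogeneous 3d point
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null vector of A, up to scale
    return X[:3] / X[3]           # de-homogenize to (x, y, z)

def lift_skeleton(P1, P2, joints1, joints2):
    """fuse per-view 2d joint detections into a 3d skeleton.
    joints1, joints2: dict joint_name -> (x, y) from each camera's detector."""
    return {name: triangulate_joint(P1, P2, joints1[name], joints2[name])
            for name in joints1}
```

in the real system the 2d detections would come from a learned pose estimator running on each video feed; the triangulation above is just the classical geometry that turns those per-view joint labels into the 3d motion shown on screen.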
what are the applications i mean
obviously for filmmaking
this is useful but what are the other so
this system was specifically developed
to get around some of the problems in
filmmaking
um and games production and things like
that so to have portable systems that
you can take
and put on a set but the technology
itself is applicable to a wide range of
things so for instance
one of the things we're looking at is
the use in healthcare and how you can
have
very passive sensing technologies that
will be able to understand
your movement and behavior and why is
that important
well if you want people to be able to
live at home
independently for longer then you have
to have systems that can
understand their behavior and
particularly when there's changes in
behavior
because that signals that maybe they
have an infection or something like that
so we've actually shown with some of
this technology that using
again machine learning you can pick out
some of those
characteristics from relatively simple
behaviors yeah
well i think i should have
a go just to see so you can see that it
doesn't only work with a
trained dancer i'm going to go in
so could you talk through what's
happening as i stand here
so first of all you
can see the images up on the screen
there
and what we have to do first is just
calibrate the two models because we've
got
brian in the picture now and then the
skeleton should now be
picking up his motion
and he's being converted into this
lovely skeleton he's changed into
another character here so you've got a
a very simple pipeline without wearing
any sensors or anything like that that
can really interpret someone's
movement and behavior um and that's
that's one of the
kind of powerful technologies from ai so
so in terms of processing power how
long have we been able to do this what's
the difficulty is it the algorithms is
it the computer power
or is it a collection of all of
those so the real challenge
has been how do you analyze an image
in a general scene and understand that's
a person
and we've all got used over the last
maybe 10 years
to our smartphones having ways to detect
faces
but this goes a long way beyond that and
it's only been the last couple of years
where we've really been able to pick out
people in general scenes
and then convert that or analyze that
motion
and that's down to machine learning
understanding from
very large groups of images what a
person looks like in an image
and you know with the variety of
people in the audience here wearing
different clothing
it has to deal with all that complexity in
the understanding
yeah oh very impressive so yes thank you
thank you very much
thank you
thank you
clever stuff that was like
receiving a lecture from a superhero
with parkinson's
yeah yes i know they chose a very
flattering sort of
i don't know you know iron man type
thing for me didn't they
it could have been anything anyway the
next question this is a question from
alex who has asked who will be the last
oh actually we've talked about this a
little bit it's interesting because
i made the mistake i think of thinking
that new technologies always displace
less skilled labor john you said no it's
perhaps the more professional
classes that should be more
concerned and the question relates to
that who will be the last
employed right and there's a question is
it creative artists
or software engineers so i suppose go
into the essence of what uh
what's the most i suppose what's the the
most difficult thing for an ai to
replace
um uh i've now taken the lead here on
the whole thing
um so this will be an absolutely
shameless plug i have a book coming out
next year called how to robot proof your
kids
um and the heart of that story is that
there are some good answers to that
uh at least for the time being until
something advanced uh
more more advanced comes along um but we
look at the sorts of things
that generalize the best um you know
this idea that the future is unknown
well let's build people for the unknown
rather than trying to guess what you
ought to know i think
all of those reports the one thing that
those future of work reports seem to
have in common is everyone should go
learn how to program
which is one of the skills i've
literally seen
an ai do where a designer just describes
the website that they want and it writes
the code
and so if in 10 years
most of the code isn't written by ais i
would be rather shocked
um there'll still be people writing code
to build
really novel database structures and
going out and exploring
the unknown but there probably won't be
people writing a bunch of boilerplate
code
just to fill in websites which is the
core of these jobs but that points to
the other side
um no one's really developed ais
in a sense to explore the unknown so
when you look at the qualities
associated with that
things and i'm going to be very broad
here like emotional intelligence and
social skills
creativity and metacognition
those are the things where even talking
about ais doing
those sorts of qualities doesn't even
make sense because they don't really
have
you know emotions to have intelligence
about
so when you look further at who are the
people
that have the most creative jobs in the
world today
and i mean creative very broadly defined
so scientists are creative
and so are engineers but so are the
people that are usually on this stage
those are the things that will be the
hardest to automate
and really focusing our education system
and even our hiring
on those sorts of qualities rather than
focusing on a bunch of rote skills that
two years from now you're gonna have to
retrain again uh
is the heart of the sorts of qualities
that we need to look at i think that's
the point that you made earlier isn't it
that you open up possibilities by
enabling people to
focus more on those really productive
areas
even lawyers you said actually that's
the creative nature of
the legal profession absolutely and i
think in general if you think about
you know lsats and mcats the entrance
exams for medical school or law school
at the moment they mostly are testing
for you know what we typically
describe as iq but increasingly i think
schools should change their admission
criteria to focus more on eq
and identify individuals who can balance
the two because that's where these
professions will shift
you're going to agree with everything
i'm going to agree again and i'm going
to come to you first next
i think uh if i asked the question of whether
you believe in the nazi philosophy
i don't agree with that okay that's
right
[Laughter]
that proves i'm not a machine or that i am
a machine do not quote me out of context
things are bad enough in america as it
is this is being broadcast online people are
just going to clip it out
and put it on twitter it's
i i think thinking about the right way
to
train and retrain people is a major
challenge for us
i mean there are obvious things about
helping people
learn to think in the kind of
old-fashioned sense rather than just to
know and learn things that's bound to be
increasingly important and we need to be
thinking about that throughout
both the standard educational curriculum
and then the rest of people's lives
let me add in a really fascinating
finding so for a little while i was the
chief scientist of
one of the very first companies using ai
for hiring which
i hope we talk about because it is one
of those very profoundly controversial
things
and we had this really interesting
finding we built a database of 122
million people
of which 11 million were pretty much all
the professional software developers in
the world
so i mentioned software uh social skills
are
one of those things that are robust to
displacement by computers
so in fact we found social skills
empathy perspective taking communication
skills you could go on and on
were very predictive of people's
quality of work and in fact
just as predictive of the quality of
code written by software developers
as the amount of sales by sales people
yes it was much less common for software
developers to have super strong social
skills
but when they existed they were just as
predictive
so one of the misinterpretations of a
question like that is oh well then we
need to train everyone to do social jobs
you know care for the elderly which is a
wonderful job but the economics aren't
great
uh you know greeter at a store but in
fact
every job is made better by
understanding other people and that's a
really important thing to remember
yeah okay well let's move on to
to section two which we've called how
could society benefit from ai we've
covered many of these issues um but
malcolm has a question so i'll come to
you first peter
so the question was uh if a computer
gives me a diagnosis
should it also have to explain how it
reached that diagnosis
it's a really really interesting
question um
and i think there are a couple of
different levels here there's a specific
question in the context
of diagnosis there's a much more general
issue about these ai systems
which is how important is it that we
understand why they're reaching a
decision
and it's worth saying by way of
background that for many of the incredibly
successful systems
there's no sense in which we can ask
them why did you reach that decision we
can just measure how often they get it
right
and there are very interesting questions
about the extent to which
in different contexts we value the
ability to be able to understand the
reasoning behind the decision
and i think those things tend to be
context-specific the the question was
explicitly i think about medical
diagnoses um
and as part of the royal society's work
on machine learning as brian mentioned
earlier
one of the things we did was outreach
and we talked to
people ipsos mori helped us to get
people's views
and actually those views were very
interesting in the context of medical
diagnoses
um let me tell you what they are
actually let's do the experiment except i
can't quite see so imagine
it's going to be slightly hypothetical
and you've got to vote for one of the
two possibilities here
you're sick with something pretty
serious and you have a choice
of having your treatment decision made
on the one hand by you know the
consultant at the local hospital
who's good at his or her job and
we know from lots of data that they get
this decision right ninety percent of
the time
so that's choice one choice two is um
you can have an ai system uh examine
your symptoms
and make a treatment decision and again
we know from lots and lots of data that
the ai system gets it right
97 percent of the time except that there's
no way you're going to understand the
decision
that the ai system has made okay so
you've all got to vote unfortunately we
can't see very easily
um where you're voting so in those
situations you're seriously ill
slightly hypothetical as i said who
would choose the doctor
who gets it right ninety percent of the
time hands up the doctor
and who would choose the ai system wow
this is a really biased audience
yeah it was fairly split it's
about a third to two thirds yeah
so i think i think how many of you are
wearing a badge that says
i love c-3po right now
uh i think this is a this is a royal
society event so it's just a
statistically literate audience
that's good now i
haven't quite finished with my
experiment so let me finish um
so so i think different people take
different views as we've seen
now if i give you a third option which
is actually you can have your doctor who
gets it right 90 percent
of the time who knows the result of the
ai system that gets it right 97 percent of the
time
and can revise his or her opinion who'd
go for that
yeah so that's kind of easy um then
and it that's the sense in which
probably these systems will augment um
at least in that kind of medicine
that'll augment uh what doctors are
doing
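for what it's worth the arithmetic behind the poll is easy to make concrete; this toy calculation (the 1000-patient cohort and the independence assumption are mine, purely for illustration) shows the gap between the two options and the best case for the doctor-plus-ai combination:

```python
# expected misdiagnoses per 1000 patients from the quoted accuracies alone
cohort = 1000
doctor_errors = cohort * (100 - 90) // 100   # consultant right 90% -> 100 mistakes
ai_errors = cohort * (100 - 97) // 100       # ai right 97% -> 30 mistakes

# best case for the third option: if the two made independent mistakes and the
# doctor could always tell which answer to keep, only the cases where both are
# wrong would fail
both_wrong = round(cohort * 0.10 * 0.03)     # about 3 mistakes

print(doctor_errors, ai_errors, both_wrong)
```

which is one way of seeing why the doctor-who-sees-the-ai-result option got the easy show of hands.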
uh final version of the question
oh let me ask you a different way so
here's something complex that gets used
all the time in medical systems mri
scanners
so they work on pretty
well it depends on how into physics you are
but from a distance fairly
sophisticated physics
there are complicated algorithms that
interpret the raw signal from the
machine
that gives something that gets fed back
to the doctor i think it's probably fair
to say that many of the doctors
who do a brilliant job of reading the
outcomes of mri machines
or ct scans or pet scans or whatever
don't understand any of the details of
the algorithm
now somehow that doesn't seem to worry
us currently
we just know that they work well because
we've checked so i think
as i said it's an interesting question
that that ability to understand the
decision
is a good one last version of my kind of
audience participation
how often do you think in those complex
situations when the doctor says
we think we should do x because and then
there's a kind of short simple statement
how often do you think you actually
really understand the doctor's thinking
and the reason the doctor came to a
decision
it's about two of you and they're both
doctors
probably some of you do
[Laughter]
and so the question for us is when
should we impose
different criteria about decision making
on algorithms than on people i
mean people can give an explanation
but we know from our daily experience
that most of the time the explanation
well
that's not aimed at the people i
interact with all the time but sometimes
we experience the fact that someone
comes up with an explanation after the
fact
which isn't really an explanation it's
just you know a way of them
justifying what they've done
anyway long answer but i think it's
complicated
and we have to think hard about it suchi
i think
um sort of building on what peter said
though i don't entirely agree with him
because i have to
disagree with him um the
so this uh notion of you know i don't
actually think we need an explanation
what we need is the ability to trust
and the ability to work together it's
that example the lawyer example
uh contract reading example we got
earlier from vivian but also the same
thing in medical diagnosis
if you had a way of knowing you know
when you interact with a trusted colleague
you understand how they work you
understand how they think they're going
to say something to you helps you build
your own thinking which allows you to
evolve and say something that they react
to
and that notion of collaborative
reasoning is what we need
um for such systems a thing that's
really exciting and powerful here is
that
you know computers possess the ability
you know these algorithms
can look through tons and tons of data
to determine
in any given scenario what has happened
to other patients who were in the same
scenario and what was done and how they
reacted and can summarize it in a very
nice and succinct way so what this means
is if we could figure out a way to build
complementary expertise where
we can use that knowledge to be able to
collaborate
arrive at a decision i think that's
where we want to be but i think this
is an open science question and
something we're actively working on yeah
so i can give a very personal answer
here
um seven years ago uh just before the
u.s thanksgiving holiday
my son got sick and it wasn't clear what
it was at first
um but that was a sunday
by that wednesday he'd lost 25 percent of his
body mass and couldn't stand up
so we rush into the hospital uh and it
turns out in retrospect it should have
been obvious you could actually just
smell what was wrong with him he had
type 1 diabetes and his sweat was sweet
now i got some fancy-schmancy degrees but
it turns out being a neuroscientist
doesn't mean you know anything about
diabetes per se
uh so that was a very hard long four
days in a pediatric intensive care unit
my wife and i both happen to be
scientists so the minute we come out we
record
everything we were crashing google docs
on a regular basis
recording everything he ate to the
gram did he have the sniffles that
morning what was his blood glucose
readings what's his heart rate
everything and then before i go in to
meet with his new endocrinologist i
don't know how many people here get to
have an endocrinologist but that's a fun
part of your life
um and and she's someone i really
respect and she's still
his doctor uh i emailed her all this
data
thinking she's gonna love this
and and we got no response so i figure
what's what's going on here i mean i
love data then truly
all of you love data what what's wrong
with this woman
so um then i realize what it is so
i print up about an inch thick of
spreadsheet and bring it in with me and
plonk it on the desk in front of her and
they were pissed
um this was not what they wanted what am
i supposed to do with this data um
so uh instead they gave us a little
photocopy
sheet diabetes care has gotten better
even in the last
seven years but at the time they gave us
a little photocopied sheet it had five
days
three boxes for each day morning
afternoon evening write
a blood glucose level in each box 15
numbers we had 15 000 numbers
but a human can't really process 15
000 numbers
um but i'm gonna admit
uh and forgive me if i'm reading the
room wrong here but this is my genuine
feeling
you've got to be kidding me i
make models of the brain are you telling
me diabetes is more complex than the brain
so that night i bought a book on
endocrinology and the next morning
we hacked all of my son's medical
equipment
turns out we broke several federal u.s
laws
and i redirected the data to my personal
server and then i took a model of
predictive coding in the retina
i don't know if you realize it but your
retina literally predicts the future
um and i repurposed it for diabetes the
details don't matter
um except that it really helped i mean
it profoundly helped
it allows the the insulin pump to sort
of make its own decisions
and there's all sorts of implications it
was it was wonderful and and we got to
give it away and all sorts of things
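the flavor of the model can be sketched very roughly; this is a minimal linear-trend predictor over recent cgm readings, my own illustration of predicting the future from a stream of five-minute glucose samples, not the actual retina-derived model described here and certainly not medical advice:

```python
import numpy as np

def predict_glucose(readings, horizon_steps=6):
    """naive forward prediction of blood glucose from recent cgm samples.
    readings: recent glucose values (mg/dl), one every 5 minutes, oldest first.
    horizon_steps: 5-minute steps ahead to predict (6 -> 30 minutes)."""
    t = np.arange(len(readings))
    slope, intercept = np.polyfit(t, readings, 1)   # fit a straight-line trend
    return intercept + slope * (len(readings) - 1 + horizon_steps)

def pump_should_alert(readings, low=70.0, high=180.0):
    """let the pump flag a predicted excursion before it actually happens"""
    predicted = predict_glucose(readings)
    return predicted < low or predicted > high
```

every five minutes a new reading arrives, the fit is refreshed and the pump gets a fresh judgment, which is the day-to-day loop described above.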
um i gotta admit
i don't care what my endocrinologist
has to say about the treatment for my
son i care what my model has to say
her job is all the stuff that's not day
to day
because it's day to day literally every
five minutes it gets a new number and it
makes a new judgment
uh and it updates the model and there's
a very exciting
lots of new work in this space um we've
been able to do the same thing
with bipolar disorder and work in
parkinson's
so what i'm getting at is i actually
think models ought to be explainable
both in medicine and a variety of other
domains i need to be able to tell you
why i didn't hire you
or maybe why i did but it doesn't mean
you have to understand the model
it doesn't mean that we have to probe
the sometimes incredibly complex
interrelations of a very
very big system but somewhere along the
line
if you've been denied a loan or job
or if a judge doesn't believe you for
some reason that won't be disclosed
you should have a right to understand
why and if they can't provide you with
one
you should be able to second-guess it
and in the case specifically of medicine
this is why because there's no right
answer to the treatment
so understanding what it means to
get this diagnosis what kind of cancer
do i have
what does it imply how does that
interact i think there's
cool ideas about how ais can explore
vast possible treatment plans also
and then linking a whole bunch of them
together but somewhere
i believe there should be a person and
there should be a why
and it should be something that you
actually understand because
otherwise we need to think about what
happens over time when we get
very used to the idea um that
there is no you know intelligence the
way we understand it in a system
making these decisions it is interesting
isn't it because again
some of the issues we're discussing are
not specific to
ais i mean in this case it's the human
need to understand if
a mistake is made by which i mean
a wrong diagnosis
even though 99 percent of the time the expert
system might get it right as opposed to
90 percent
for the human we want to know why the
mistake was made
don't we and the brutal statistical
approach would be
it doesn't matter because it gets it
right more often i mean you see this
with driverless cars actually
as well the question is well driverless
cars will be much safer than
human-controlled cars but when the car
runs over
and kills a pedestrian we want to know
what the decision-making process was
but it's probably
because they've seen
something nasty about elon musk yeah
it's kind of in a sense not particularly
logical and not about the ai isn't it
we're more talking about the need to
know why it may have made a mistake
i don't know if that desire to know
in that instance is logical in
other words i'm conflicted at
this choice of if i had a system that
was 99 percent accurate
versus using a system that was
only 90 percent accurate
but one could make up an explanation
like peter said
though we had no way to assess if that
explanation was correct
versus a different system that happened
to say
i don't know though i disagree i
actually think we can hold
these algorithms to a much higher bar
and
there's a lot of creative work going on
and being able to elicit explanations
that can allow you to collaborate with
the machine
but even in that dire scenario where the
system said
you know this is my decision then unless
the issue is that when it gets it wrong
it gets it very wrong and it's very
damaging
i would say we should be willing to
operate with the 99 percent
accurate machine yes i agree but it's
it's uh
yeah so that's why i was thinking the
heart of this question is
should the thing have to explain
how it got there well maybe
that's that's the point isn't it what
we're saying is well no if it gets it
right most of the time and more often
than the human
it doesn't matter is that what we're
saying is that the consensus i think for some
people it'll matter a lot and for others
it'll matter less
and you know people might want to
make their choices there um
i mean as suchi was just saying there's
a very very active
area of research within machine learning
which is about uh
building systems that that are much
better at explaining why they got there
and it must be in everyone's interest
for that research to be
to be supported and strongly developed
so it's fascinating
peter and i were actually at an event a
royal society plus national academy of
sciences event
in palo alto and there was a young man
who is
making this very provocative statement
that demanding explainability
of ai
on its face is silly and gave all of
these
very very good reasons for that uh
all of which apply to us
every single reason he gave for not
bothering to understand
ai applies completely validly to
understanding our own minds
uh and i i hope we don't
take that as a policy position of why
bother
understanding why we make decisions
either because
really it feels very parallel to me
okay so let's get another question from
um
jeremy perhaps let's start with you
suchi uh the
question is given how much google
already knows about us
should we be worried about them getting
into
health oh boy um
who is sponsoring this event again
um they did pay for my flights somebody
did
um i suppose well i mean i suppose we
don't have to that was the question but
we don't have a
specific company what we're saying is
big companies
have a lot of data about us in general
apple also have a lot of data and
they have nothing to do with this event
at all good point
but it's a good question isn't it
because health data is the most
personal of data in many ways i suppose
that's the point
yeah i do think i think that's a great
question it's a very
interesting and a tough question i don't
know if i can really answer that
question with like
five seconds of thinking but i'll take a
crack at it
um i do think it
should be concerning to us if a small
number of corporations or decision
makers have
access to both a large amount of data
and a lot of the
requisite skill set and personnel to be
able to develop these types of
approaches
so i do think it's important for us to
decentralize
but i don't think that should happen at
the cost of us not having access so in
other words if i had to choose a world
where we could use machine learning we
could
take advantage of a group of individuals
who could develop really smart
algorithms that could then allow us to
improve health care
i would absolutely support that and i'd
prefer that over a world where we didn't
have it
but my preference would be we
decentralized the development of these
and we
broaden our education we make funding
available broadly
and build systems where you know
like the data isn't sort of held or
bound by a single organization and
the people involved yeah and this is
quite specific isn't it because
google as a company are providing
services to
the nhs and suchlike in the uk yeah
and i i think there are really
interesting and special issues about
healthcare data i mean it's true that
google and many of the other tech
companies have a lot of information
about us
in a certain sense that's because we
consent to it now
you know we we tick a box after pages
and pages of consent forms we don't
read but but i think everyone thinks
differently about
the data in the healthcare
system certainly the healthcare
systems themselves do and i know
in in our case in the uk it's something
the nhs
and our politicians think very hard
about
uh there's huge potential uh in the data
from the nhs
to analyze that in ways that will lead
to
improved efficiencies to better outcomes
for individuals for people
uh living longer avoiding
complications being given the right
drugs and so on the potential is massive
in the data in large healthcare systems
and in particular
we're in a very special place in the uk
because we have a single provider
system so the potential's there
nobody wants to kind of give that data
away to companies
well for two reasons first of all i
think because
we naturally think of the nhs as sort of
our resources something people in the uk
care strongly
about rightly i think so any version
of that data being used must have
benefits both for the nhs and and for
those of us
in the uk who are who are patients
within the nhs system
but also because we all appreciate that
healthcare data is
special and even more private so
there must be and encouraging that there
is a kind of serious debate going on
as to what are the right levels of
safeguards what are the right levels of
anonymizing data
that might then allow that data to be
made available exactly for these
potential benefits to accrue
and i think the other thing to say which
is obvious is that i don't think anyone
would say
it should only be the big tech companies
that can benefit from that there'll be
lots of small startups
actually run one myself but there'll be
lots of small startups who can play a
role in that
as well so so when you think really hard
about healthcare data we've got a
fantastic opportunity
in the uk but we absolutely need to get
it right and understand
um the right way to respect privacy and
anonymity the right way to
have a dialogue with uh those of us who
are all of us
uh effectively who are patients in the
nhs to make sure we're happy with
that the benefits to do everything we
can to minimize the
potential downsides and that they're
real that we can do things to minimize
them
and then to make sure we're happy that
the upsides are justified and my own
view is that potential is huge
but we need to get it right go ahead um
just to add to that i think there's also
this um i mean
i could imagine as um this is it's
certainly this way in the u.s where
large
health systems which are large
enterprises they prefer to interact with
large corporations or companies because
they
inherently think that they're probably
able to
store the data more securely and um
keep it private and as a result
you know implicitly they're creating a
scenario where they're locking down the
data with one or two or three large
organizations
um i went to school in california in the
land of
you know startups and you know
historically my experience has been that
some of the most innovative and
disruptive ideas come out of small
groups of people who are mission
oriented
who are completely committed to making a
difference and so as a result i think in
this area at least what i see is
this mismatch in potential where you
know the ability to make change by small
groups uh small companies
who may be well equipped but large
enterprises are you know uh very
skeptical
or uh afraid to work with them
so i want to expand this a little bit uh
healthcare is great but
it goes beyond that in my opinion and
also expand beyond the question of data
this is data has been a
big issue here the whole idea of gdpr
is to protect people's data uh health
and education data data about kids
are areas where everyone gets very protective
um
all over the world you know the one
place where they're really protective
about data
in china for example is around kids
but here's the thing if you don't
actually have any ability
to actually exert a right
around your data and yes i realize
through gdpr there's a lawsuit mechanism
but you know right now if someone's
misusing your data what are you
going to do about it file a lawsuit so
that six months from now
we'll lose in court um you know that's
not very satisfying
uh right now one of the biggest
advantages
these large companies have and i don't
think any of them are bad guys i've done
collaborative work and been recruited to
work at almost all of them
uh it might say something that i never
said yes but nonetheless they're not bad
guys
they are all guys for the most part um
is that they don't simply have
masses of data no one else has available
to them they have
masses of talent because they have
raided the university system
and i'm not out here to tell anyone they
can't go get a big paycheck but when
google will throw a million dollar bonus
at someone
just to keep them out of another company
that tells you something about how they
value talent
and they also have infrastructure which
is hard to find anywhere else
even for my own companies we can't run
massive scale
high throughput gpu systems on our own
we don't build that stuff
we use google's so they have a monopoly
on multiple multiple dimensions in this
new ai space and that does genuinely
concern me
now we can actually throw in china and
america themselves as formal entities as
well
a very small number of entities
essentially control
all of the computing power around
artificial intelligence in the world
and i could build a company and my best
hope is to get it bought the likelihood
that i will become a massive
new competitor to google or facebook is
virtually zero
um most of my work is philanthropic but
i'm still under these same constraints
i have thoughts about this but i just
want to put out there instead of my own
personal philosophy
how do we think about how we as
individuals are able to exert
our own rights to how decisions are
being made about us
uh not just control of our data per se
but the right
the same way we have a right to judicial
review the same way
that we have a right to seek a second
opinion from a doctor
i should have a right to how ais are
targeting me with their ads i should
have some participation in how these
systems
interact with me and right now none of
us have any and i'm as empowered as you
get in this space
as you heard through my diabetes story
but um
i i am not going to engage in that
because i don't have enough time in my
life to build
a separate ai to deal with everything
and i think we really need to think
um right now our homes are filled with
these little embassies
embassies from amazon embassies from
google from
apple from baidu from alibaba
and these are our phones and our smart
home systems
and they operate under their own laws
even though they're in
our homes wouldn't it be great if we
could think
of uh how we could operate in the public
interest in the public trust i don't
necessarily mean uh governments because
i'm throwing them in as part of that
concentration of power
how do we actually exert our right to
make decisions about our own life
okay well let's um
i didn't i didn't mean it like that i
thought i could
the point is we could talk about that it
raises so many issues
but we've got about what 20 minutes left
or so
and quite a lot of questions and a whole
section to get through i
just want to say very briefly that
there's a couple of questions here which
are sort of linked about the future
um so maybe we could briefly address
these one is um
to look into the future to see where
ai is going in a biological sense
the question is will there be artificial
intelligence created on such a level
that it can perhaps uh fight cancer and
various other diseases perhaps
begin to i suppose it's looking at the
potential
in in biology um so maybe
briefly take that one how far are you
away from
ais being used in that respect they're
already being
used and not just in their own right
so in science
in biology
cancer biomedical sciences there are
increasingly larger and larger amounts
of data our ability to
read exquisite detail about biological
systems
is unparalleled and it's exploding
rapidly we can study
individual cells and in a cancer
tumor for example
we can see what's going on that's
different in each of the individual
cells
in the tumor and how some of those have
some properties that will ultimately
lead to resistance to a particular drug
regime and so on
so that area of science is massively
massively data rich
in a way that just hasn't been true for
most of
human biology's history uh
where ai systems will be incredibly
helpful there is in helping us
as scientists to make sense of the data
to learn things from the data and that's
already happening
and it'll keep happening and and it will
be one of the big drivers
of the progress we make you know in
fighting against cancer and in improving
medicine more generally
and that was from jaffa by the way that
question um
do you agree yeah um i i agree with peter
there i think
the um i kind of think of it as the
spectrum from discovery to delivery so
discovery is
you know discovering new ideas about
human biology how our body works
how do we characterize disease to
delivery where we will
figure out new and more efficient ways
to get the right medications and the
right therapeutics
to the right people and um i think that
entire spectrum in the last
five to ten years has been
going through a transformation i think
in the next
10 to 15 years we'll see some of the
most exciting discoveries
and even in our own work we've seen
disease areas where
physicians had a hard time diagnosing it
but now by working with machines they're
able to not only diagnose it sooner by
getting treatments to the right patients
they're actually seeing improvements in
conditions
i just want to ask this final question
in this section it's a fascinating
question actually it's almost
like a blade runner-esque
question in a sense it's from will
and he asks whether in the future there
will be a section of
society he calls it an upper echelon of
society actually maybe the very rich
who completely shun ai
for for a more costly human experience
so does it does it yeah the ubiquity of
ai
yeah this is this is the version of
wanting to use a travel agent
um when you can go online it may go
even deeper than that i suspect human
trafficking
let me just go back so in the royal
society's work
you know we tried to engage and get
people's views and one of the things
people were genuinely worried about in
the growth of ai
was a kind of de-personalization of
experiences um so
it's a legitimate concern and it's
something that people in general worry
about and i think we should be thinking
about
i actually think yeah ai will make
experiences more personalized not less
personalized because
by knowing a lot about you you're able
to identify
you know imagine when you go on a trip
you want to find the people who are
very much like you see what they enjoyed
and based upon that determine what would
be fun to do and
right now it's not really easy to figure
out who these people are
you know i um so again i've done a lot
of work in education in fact
education has a term for this
personalized education and the whole
idea is to use technology and in
particular
artificial intelligence to target kids
with just what they need
but this is what i brought up before the
difference between
our aspirations and then the way it
actually gets used
so the term personalized education
frankly in common
technological terms simply means
where on a fixed track are you and so
in a sense it profoundly
depersonalizes the experience of
education by putting everyone on the
same educational track
and then it's personalized because it
places
you at some level and if you deviate
from
that trajectory
then it doesn't account for you at all
so it is possible for us to build those
models and i do a lot of work on that
myself
but the truth is it's a lot easier to
build a much dumber model
that doesn't actually personalize
everything it just has it as a marketing
term
uh and that gets really disturbing and
disappointing
but i actually want to offer a contrary
proposal i
i i very much understand this idea of
and people have thrown it out there
that imagine some day when only the
wealthy can get their hair cut
by an actual human being you know you
get to exert your authority by ordering
people around
i i have a real assistant not alexa
i've got a real alexa um i actually
think it will be the exact opposite
so very early on i answered a question
by talking about this idea of general
artificial intelligence versus the sort
of thing we
are now experiencing and i said i didn't
know when if ever we would invent such a
thing
uh however i this is a bit provocative
i think i can give you a very rough
timeline
for when there are people that are
artificially smarter than other people
so one of my fields of research in fact
my core academic field of research is
what's called neuroprosthetics
which is really the literal merging of
computer systems and people so we have
three organizations for example one is
working on people that are locked in
they look like they're in a coma we're
building systems to allow them to
communicate with the outside world
another one is working in performance
optimization in athletes less
interesting to me but i get to learn
things
through the collaboration the last is
perhaps the most provocative which is a
company called hum
and we're helping them build a
technology which
it turns out exists you can go pre-order
it right now it's a wearable headband
so not yet jamming things in your brain
but we're getting there so i'll take
volunteers
um the survival rates are so-so um
and but in this case with the hum band
you flip a switch
and working memory increases by about 20 percent
so i don't know if anyone's ever played
the simon game you know where you push
buttons and colors in a row
largely in this audience most people
will be able to go up to about
five six or seven patterns before you
get to the point where
you can't quite remember which pattern
it was and
we flip the switch and it adds one or
two to that
memory span now that may not sound all
that exciting but people with the larger
working memory span literally live
longer they go further in education they
earn more money
not necessarily on an individual level
but at a population level
if some of you are sevens and some of
you are fives
you will have much better lives and now
we can build a device that makes you a
seven
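the population-level claim here can be sketched as a toy simulation (purely illustrative; the spans and the plus-two boost are assumptions for the sketch, not measurements from the hum device):

```python
import random

def recall_ok(sequence, span):
    """an idealized player recalls a sequence correctly iff it fits in working memory."""
    return len(sequence) <= span

def longest_run(span, max_len=12):
    """longest simon-style colour sequence the idealized player repeats before failing."""
    run = 0
    for n in range(1, max_len + 1):
        seq = [random.choice("RGBY") for _ in range(n)]  # the colours themselves don't matter here
        if recall_ok(seq, span):
            run = n
        else:
            break
    return run

print(longest_run(5), longest_run(7), longest_run(5 + 2))  # prints 5 7 7
```

the point of the sketch is only that a fixed one-or-two-item shift moves a "five" into the "seven" population, which is where the life-outcome differences show up.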
now that's probably not what this device
does at least you shouldn't be wearing
it long-term trust me i'm not selling
this to you it's steroids for your brain
and
god knows what the long-term
implications are um but
we're helping develop this because for
example
kids with traumatic brain injury you
know maybe they fell off their bike and
in a moment
who they were could have been was taken
away from them and so we're developing
a uh an educational intervention to be
paired with it 15 minutes a day
we flip the switch we do a deep literacy
and math intervention
and we try and put the pen back in their
hand to write their own life story
if we say no to technologies like that
then we're saying no
to changing the lives of all these kids
but by saying yes to it
we are saying that sometime in perhaps
the not too distant future
we might be fundamentally changing what
it means to be human
and the ultimate question there is for
whom
and i think if we're talking about the
wealthy and these sorts of technologies
it will be inevitable that there will be
an effort for them to get the first
access and to
be it a sweet 16 gift for their kids
it actually works much better if you do
it when they're babies
um and i love
experimenting on defenseless orphans um
so
um so um
that's another of those clips that's
going to be snipped up
um but uh is this a human right like a
vaccine
or is it something the wealthy get to do
before anyone else does and
and and it shouldn't be me that makes
that decision no matter what i think of
myself
this should be a decision that we all
make and right now
i feel like that's not happening yeah or
indeed the market which is the sense of
the
question okay well let's let's move on
to the final section section three which
is titled how do we get there
so i suppose the route to the future is
meant to be
sort of rather broader and there's a
question from gary who says that ai
could be a brilliant opportunity to
expand our knowledge
should there be restrictions so that
that knowledge doesn't become owned by
big corporations which actually crosses
over
to what you've just been saying
especially
the regulatory framework um
so perhaps we broaden it to that sure
um i think there's um i certainly think
there's a mixture of you know hollywood
uh
painting movies with um you know strange
abstractions that
aren't quite uh real depictions of ai
but sort of evoke an image of apocalypse
and
you know the terminator and so on so i
think that's motivating
our understanding of what we need to
regulate and how we need to regulate
let's take a step back and go to the
root like how do you regulate math
i mean that's kind of a strange question
should we be regulating math how do we
regulate math
if we go one level deeper the use of
math and
say you know making decisions to decide
whether to insure somebody or not
now we can start thinking about it in a
much more concrete way which is yes we
should not deny insurance to somebody
with a pre-existing condition
so effectively i think in the same vein
for ai
we'll need to go several levels deep to
understand very specific areas where ai
will be used
how it's being used and then determine
what the appropriate regulatory
framework will be
and we very much need broader education
to engage people from other fields
you know to think about the ethics and
the um you know consequence of its use
in
uh in a variety of different scenarios
yeah is this really any more
complex
question than talking about any other
new technology we regulate everything
don't we regulate aircraft we regulate
cars we obviously you have to don't you
so
is is there a specific issue here that
makes it more complicated to regulate
i i think uh one part of it i entirely
agree with
suchi is that we need to think very
differently in different contexts
and some of those contexts already have
a pretty good regulatory framework
so if if ai technologies were to be used
they're not currently used but if they
were to be used in flying aircraft
then there's an incredibly strong
regulatory framework that would be
involved in testing them
if they're used in medicine as new
medical devices again there's a
framework
may or may not be perfect for this
purpose but there's a framework there
that thinks about how to do the
regulation
so i think we need to think differently
because the costs of getting it wrong
are very different from flying a plane
to recommending the wrong movie from
netflix uh and we need we we need to
respect that and and see the different
contexts
some of those contexts have good um
regulatory frameworks um
some of which vivienne will have a
better sense of in education um
may or may not have them and and others
don't have them and we need to start
thinking about them but
but we should deal with it differently
in different contexts
you know having guidelines and
regulatory frameworks uh principles
being laid out i met with
actually uh someone today from the un
uh they have a big council that asks
should we have some principles about how
data and artificial intelligence get
used in the world
i generally appreciate these sorts of
things
but frankly i don't know how clearly
people
make decisions particularly
decisions that are very technically
complicated and would be very hard for
anyone else to understand
about how they deploy these sorts of
technologies out in the world you know
one of the things i've heard is we
should have ethics classes
in computer science schools because it's
worked so well in business schools
i actually think this isn't a magic
solution to anything but one of the
interesting things and i'm going to
uh labor through a metaphor here but i
used it earlier that
artificial intelligence is an
exquisitely powerful and sophisticated
tool it can't make decisions on its own
it can't solve your problems
for you if you don't understand the
solution it's very unlikely in my
opinion that it will figure it out
but it is immensely powerful and
completely changing the economics of
those solutions
the problem as i see it and i'm not here
to criticize computer science schools
but i am going to say this that we sort
of deploy this
army of largely very young and certainly
very male
machine learning experts that have spent
their entire
very short career learning
how to construct a hammer but they've
never actually built a house
and this will be a little
wonky but they'll get it they're given
these perfect data sets like imagenet
and they're asked to solve pre-specified
problems
like name all of the breeds of dogs in
these pictures
but what they don't get is a
four-year-old with diabetes
what they don't get is here's the hiring
history of
amazon build a deep neural network that
hires the right kind of people
well amazon tried to recruit me to do
that and i told them it wouldn't work
but they went ahead and did it anyways
and if anyone read the news about this
it wouldn't hire you if you used the
word women's on your resume
take what you will that says about
amazon's hiring history because that's
where it learned it from
but this is one of the most
sophisticated companies in the world
it has an army of machine learning
experts that they brought in house
and they ended up having to drag this
thing around the back of the barn and
shoot it in the head because
it was unresolvably sexist and they
tried for
a year to fix this problem by
manipulating the data sets
and manipulating the algorithm and it
didn't work
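the failure mode in that anecdote, a model learning historical prejudice from a word like "women's", can be sketched with a toy word-scoring screener (the mini-dataset and scoring rule are hypothetical stand-ins, not amazon's actual system or data):

```python
from collections import Counter

# toy historical hiring data: past decisions, not ground truth about skill.
# the hypothetical pattern mirrors the anecdote: resumes mentioning "women's"
# (as in "women's chess club") were historically rejected more often
hired = ["python systems engineer", "java backend engineer", "python ml engineer"]
rejected = ["python engineer women's chess club", "java developer women's soccer captain"]

def word_scores(pos_docs, neg_docs):
    """score each word by (hires containing it) - (rejections containing it)."""
    pos = Counter(w for d in pos_docs for w in set(d.split()))
    neg = Counter(w for d in neg_docs for w in set(d.split()))
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

def screen(resume, scores):
    """the 'model decision': sum of the learned word scores."""
    return sum(scores.get(w, 0) for w in resume.split())

scores = word_scores(hired, rejected)
print(scores["women's"])  # negative: the model learned the prejudice, not the skill
```

manipulating the data or the algorithm afterwards is hard precisely because the bias is not a bug in the code, it is the signal the training data actually contains.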
so i really want to be incredibly
careful
that maybe part of this problem not the
total solution but maybe part of this
problem
is actually training people on how to
solve problems
rather than how to build models we're
almost out of time i wanted to get to
the um
there's a question i think we've covered
from nandita who
asked what impact could the growth of
artificial intelligence have on society
i think we've almost answered that
in every question um so
so but maybe
let's say positive or negative let's do
that just
what impact do you think you don't have
to answer maybe i'm putting words into
your mouth
one sentence what impact do you think it
would have i think it'll impact
the world positively dramatically in
almost all spheres of our
daily lives if we get
if we as a society get it right we pay
attention to it and we get involved
actively as stewards i think the
potential is huge and positive
if we all insist ai is used
to make better people and i believe
it could actually have a positive
influence in the world or i wouldn't do
this work
right now i'm not so heartened okay and
the final i can merge the final two
questions actually one from minhaj and
one from chris because they're very
similar
one is do the panel think that we will
ever achieve
a general ai or will it remain a fiction
and a related question i think which
follows on from that if the answer is
yes
is do you think we will always have
control over the ais in our society
even if they begin to demonstrate higher
intelligence than humans
so there's two related questions there
this is an exciting one yeah so since i
get to study brains
and machines uh i fundamentally
think we are a phenomenally complicated
computing system
uh nothing like any of the ais that
anyone's ever put together
on an actual computer but in that sense
theoretically there's no reason why we
shouldn't be able to do this
practically speaking is a different
question we'll need a whole new set of
models
so i i got to do this one really fun
thing once
at a conference i got to debate this guy
named ray kurzweil
on stage about artificial intelligence
so he wrote a book called the
singularity is near
about the emergence of
superintelligences he thought it'd be
great like they'll make so much better
decisions they'd never
vote for brexit sorry um and
uh or trump or what have you
all right clearly not everyone agrees
with that um but
the uh but you know
we had this sort of debate about um
super intelligences and so forth and one
of the things a lot of people don't even
think about
is if we invented something like that
would it even
care about us would in fact even have a
conversation with us
where we might need such a
super intelligence is not to drive a car
because we already have dumb
intelligence that can do that namely us
and the existing systems um maybe to
manage the entire transportation network
of britain
something that has to manage this
massive distributed system and optimize
all the pieces it doesn't have two eyes
it has
millions and millions of them and ears
and bodies all spread over what the hell
would it ever even have to say to us
why would it wouldn't it possibly be so
alien
that it understands we're intelligent
and we can infer that it is
but it manages transportation and we
manage our lives
and there's nothing really to exchange
there we want to think that
it'll desperately want to talk to us
it'll be just like we are
but i think that might be a mistake it
certainly divorces itself
from a lot of research about what's
called embodied cognition and how much
of who we are
is the very body that we inhabit well
that's a wildly different body
i i i think we we shouldn't think that
to echo vivian's point we shouldn't
think that technologies will kind of
be like us in any sense in the in the
early days when people wanted to fly
what people did and i'm sure you've read
about it or even seen
paintings of it people kind of stuck
feathers onto their arms and flapped
their arms quite a lot
and weren't very successful at the
flying thing
and so eventually we came up with a
technical solution for flying which is
actually really different
in many crucial respects from the way it
happens in nature and it's very very
likely to be the same
with artificial intelligence both
generally and
specifically the systems that are there
now do things differently from the way
we do it
and and in terms of general intelligence
will it ever happen i think
it would be brave to rule it out but
happily it's a long way off
i think we should try to understand what
agi is i bet you if we interview 10
experts in the field they will give you
very different answers for what is
artificial general intelligence in
general they'll all say it's much
smarter than whatever it is we have now
so i think part of the challenge is it's
this
as soon as we start to understand
something we call it ai
and whatever we don't understand that's
agi so
it's kind of a strange thing to describe
so i think we have algorithms we'll
continue to build algorithms to solve
problems
um i don't know i don't think there's
any
evidence or imagined evidence of a
version where we see
this superhuman or supernatural
algorithm that can exhibit behavior
where you know
often we're humanizing it right as in
the go player
it's very easy to understand what it's
doing it's search based but we go back and
we're putting
uh human-like qualities in it to see ah
it's thinking it's stepping back it's
coming back it's trying to trick you
when all it's really doing is searching
through board configurations to figure
out what's the right thing to do
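the "just searching through board configurations" point can be illustrated with a minimal minimax search over a toy take-1-or-2 nim game (a sketch of exhaustive game-tree search, far simpler than the monte carlo tree search a real go engine uses):

```python
def minimax(stones, maximizing):
    """exhaustive search of a toy take-1-or-2 nim game (last stone wins).
    no psychology, no trickery: every configuration is visited and scored,
    +1 if the maximizing player ends up winning, -1 otherwise."""
    if stones == 0:
        # the previous mover took the last stone, so the side to move has lost
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in (1, 2) if m <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """pick the take whose searched outcome is best for the side to move."""
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, False))

print(best_move(4), best_move(5))  # prints 1 2: always leave a multiple of three
```

an observer might say the player is "luring" its opponent onto multiples of three, but nothing is happening beyond scoring and comparing board states.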
what seems to me interesting is
whether you'd have to choose
to build one
i mean as you suggested if you have a
system that's running the
global transport network let's say then
i suppose what many people's fear is
this science fiction fear that the thing
is
is so intelligent running the transport
network it decides to run everything
else as well
of its own accord kind of thing so but
presumably if we talk about an agi
if we if it is possible to build one i
suppose the question is
would we have to build it with the
intention of building it
or could it somehow emerge from a lower
level
complex system so i will say this and
it's actually a debate
very literally that's going on in our
field right now which is
are the current technologies
specifically deep neural networks
enough to someday create agi or do we
need to invent something completely new
i happen to be
in the category that thinks we need
something dramatically different
um but i will and and part of my belief
there is
uh just maybe hopefully to put some
concerns to rest
you can add more processors to our
existing system
you can teach it more things you can
show it more newspapers you can
play it uh the bbc on a
constant feed all the time everything in
the world
it will never wake up it will never have
an opinion
about the issues of the day it will
never
approve of trump it's supposed to be
smarter than us right
um it just that sort of thing is not
going to
in some sense magically emerge your
toaster
is never going to wake up uh
and threaten our very existence uh
but could we invent something new might
that
lead to this again i see no theoretical
reason why that's not possible right
uh it's just i don't know what form it
takes and i don't know what
infrastructure it takes
let me give you an example there like
right now we have computer systems that
control
uh you know the basic public utility
system like water electricity
um um and what if we had a
different you know algorithm or computer
come in and
interfere and take down a node by you
know clogging it with traffic
and such or hacking the system now would
you call that agi
that particular example i'm describing
is very possible today
basically by building a system that has
very uh you know
where points of failure are very
concentrated you can go attack those
points of failure and the system can go
down and you could now easily
attribute it to agi but i think
all it is is
uh you know computers that are optimized
to maximize an objective function
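the concentrated-points-of-failure idea can be sketched with a toy reachability check on a hub-and-spoke network (a hypothetical grid, and nothing here requires agi, just graph traversal):

```python
def reachable(adj, start):
    """set of nodes reachable from start by walking the adjacency lists."""
    seen, stack = {start}, [start]
    while stack:
        for n in adj[stack.pop()]:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return seen

# a hypothetical utility grid where everything routes through one hub, node 0
hub_grid = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

# knock out the single concentrated point of failure and the grid fragments
degraded = {n: [m for m in ns if m != 0] for n, ns in hub_grid.items() if n != 0}

print(len(reachable(hub_grid, 1)), len(reachable(degraded, 1)))  # prints 5 1
```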
and often we're programming them to do
that but i do love one of my favorite
examples though
of a system waking up uh is
and i'm blanking on his name so if
anyone's going to get the reference pull
it up a scottish writer
who wrote a murder mystery and it's just
it starts off purely as a murder mystery
and then it turns out what's happening
the thing that all these people have in
common
is that they run spam they're all a
bunch of spammers
a spam bot trained to be more and
more complex in the constant cat and
mouse game against spammers
reaches sentience and realizes the way
to stop the spam is to kill the people
producing it in the first place
and that's what the police eventually
figure out in the end and
i you know again i don't think that
anything like that is
in the works that was a fiction piece
right
yes uh it was pure fiction um
but there are some things out there that
are genuinely exciting
that still don't require us to reach agi
and some of them that are kind of
fearful it doesn't require artificial
general intelligence
to build autonomous weapons and program
a little drone with a bunch of c4 packed
into it to literally recognize my face
and just zoom in as fast as it can and
make a smart bullet for lack of a better
description
it doesn't require artificial
general intelligence
to build technologies that keep
autocracies in power
uh the very first system
i ever worked on 20 years ago
my introduction to machine learning was
building real-time lie detection systems
for the cia um now that is very morally
gray
uh needless to say it was incredibly
cool it just read people's facial
expressions
um and by the way we were able to later
use that same system or or i used those
same algorithms
one case to build a system for google
glass that could read people's facial
expressions
for autistic kids to learn how to read
facial expressions in another case
to reunite orphan refugees with their
extended family in refugee camps around
the world
so it's not always so clear what's a
good
and a bad technology but i will say some
of those algorithms we developed 20
years ago
one they're all in your iphone x i
mean literally that lab got bought
as a startup by apple so we power all of
your face
stuff um so if you are um
have you ever done the animojis so you
can sort of smile and talk into your
phone and it animates a cat
that cat is 50 million dollars worth of
cia funding
all innovation in the end animates cats
on phones
but at the same time those algorithms
are being used they do toasters as well
i think someday we must surely have a
smart toaster
that's maybe just a little too smart
so but the last is that those same
algorithms are now being used in western
china in ways that i
fundamentally disagree with but you know
this was just academic work we published
our algorithms
and now they're out there and you
can't always
necessarily control these things that's
why setting norms is so
important and just finally just to go
right back to the start i mentioned the
turing test at the start
so what is an agi right
how do you define it is that
something that passes the
turing test is that the definition
really we perceive
the thing to be intelligent but if not
what is the definition
so let's quickly go with i i think it'd
be really worthwhile for an ever so
slightly
deeper definition of the turing test so
the turing test is
that there are sort of two black boxes
and i'm asking each of them questions
and i'm getting answers out of them
one's a person one is an ai and they're
both trying to make me
believe that they're a person and then
if i run this test over and over again
i am at chance at getting it right a lot
of people
throw out in the news hey such and such
just beat the turing test because you
know someone got on a phone
and they heard a voice that sounded like
a person and so it beats
no no i have to be actively trying to
detect the trick that was
turing's actual setup and i'm not saying
it's a magical test that's right or
wrong
just it has a lot more nuance than
people put to it
um so if someone could violate the
turing test
where you're actively trying to figure
it out and over time there's
no difference i'm not saying that's the
magic ingredient
for artificial general intelligence but
you have to admit
uh you might just as well have a
conversation
with one of them versus another one and
there probably is some point at which
the differences
for specific tasks maybe don't matter
that much
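the "at chance" criterion described here can be sketched as a simulation: a judge repeatedly questions a shuffled human/machine pair and guesses which is the human (illustrative only; the responders and judge are hypothetical stand-ins):

```python
import random

def run_trials(judge, human, machine, trials=10_000):
    """imitation-game sketch: each trial the judge questions a shuffled
    human/machine pair and guesses which answer came from the human."""
    correct = 0
    for _ in range(trials):
        boxes = [("human", human), ("machine", machine)]
        random.shuffle(boxes)
        answers = [box("what is 2+2?") for _, box in boxes]
        guess = judge(answers)  # index the judge believes is the human
        correct += boxes[guess][0] == "human"
    return correct / trials

human = lambda q: "four"
perfect_machine = lambda q: "four"   # indistinguishable answers
naive_machine = lambda q: "4"        # a detectable tell

# an active judge who knows the tell, per the "actively trying" condition
tell_judge = lambda answers: 0 if answers[0] == "four" else 1
```

against the machine with a tell the active judge is right essentially every time; against the indistinguishable one the long-run accuracy settles around 0.5, which is what being "at chance" means.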
yeah you're in agreement
i just want to say one thing you sit me
next to an extraordinary woman
who sorts out one weekend her son's
diabetes and gets all the data
who unites orphans with their families
in far-flung parts of the world
who helps autistic kids and i agree with
her from time to time and you give me
grief for agreeing with her but
here we are i agree
[Applause]
maybe we should stop there should we
stop there well you can have the last
word if you'd like
i think in that case of turing test that
you described if somebody told me here
are the six
things or seven or eight or nine or ten
things we're going to
test the turing test on i could set up
10 deepminds each one's going to work
very very hard on one of those
and then i'm going to cycle between them
so that the
human on the other side is kind of
working against a very competent machine
i could easily see a scenario where you
know we passed the turing test
so i still don't think that's
qualitatively very different from where
we are right now
in terms of so i'm not sure that
definition of the turing test is very
relevant in defining agi i don't think
we have a good one
i think that's what i started my premise
with which is i have a hard time
understanding what human intelligence is
so it's really hard to think about what
is agi
i think that's a very good place to end
so um thank you to everyone that thanks
for sending questions and we've not been
able to ask them all tonight but um
we do encourage you to carry on the
conversation uh
keep asking questions the royal society
website's got a lot of background
information
on this area if you're interested but
for now i'd just like to say could we
thank this superb
[Music]
panel
thank you very much thank you good night