
A.I. Expert Answers A.I. Questions From Twitter

Scientist and A.I. expert Gary Marcus answers the internet's burning questions about artificial intelligence. Will ChatGPT end college essays? Is Furby A.I.? How close are we to truly self-driving cars? Is the Turing test outdated? Gary answers all these questions and much more!

Director: Sean Dacanay
Director of Photography: Ricardo Pomares
Editor: Richard Trammell
Expert: Gary Marcus
Producer: Justin Wolfson
Line Producer: Joseph Buscemi
Associate Producer: Paul Gulyas
Production Manager: Eric Martinez
Production Coordinator: Fernando Davila
Casting Producer: Nicole Ford
Camera Operator: Josh Andersen
Audio: Will Miller
Production Assistant: Gee Depratt
Post Production Supervisor: Alexa Deutsch
Post Production Coordinator: Ian Bryant
Supervising Editor: Doug Larsen
Assistant Editor: Paul Tael

Released on 03/21/2023

Transcript

I'm Gary Marcus, AI expert

and I'm here to answer your questions on Twitter.

This is A.I. Support.

[upbeat music]

@Brandopinione asks

Will chatGPT be the end of the college essay?

Well, everybody's wondering that

because it's really easy to write essays with ChatGPT.

They're usually like C essays, not A essays,

but it depends a lot

on what the professors and the teachers do.

I used to be a professor

and what I would say is use ChatGPT,

but then let's talk about what you got with it.

How could you make it more interesting?

That wouldn't end the essay.

It would just make it more complicated and more fun,

and maybe teach you how to think critically about writing.

Up next, Andrew Price asks us, Why was 2022

the year when AI went mainstream?

Was it advances in consumer hardware,

knowledge transfer or something else?

There's no one answer to that.

There are a lot

of reasons why AI is starting to come together.

I would argue it hasn't fully come together,

but people got excited about it.

The main reason they got excited about it is

because we have these chatbots. We've had them for a long time,

but they used to lie and say terrible things.

Now they just lie, and that's interesting enough.

There are big advances in a field called deep learning

giving us things like image enhancement

where you can make your face into whatever you want.

It's giving us chatbots,

and there's also a whole lot more data and a lot

of the AI that's popular right now is very data-hungry.

So now that we have the data, we get to taste the fruits

of these things, sometimes for better, sometimes for worse,

but at least we can taste them now.

@EmmanuelEzele1 asks,

I wanna build a trillion dollar ai company...how do I go

about it?

I've never built a trillion dollar company.

I built one company that did very well.

What we did was we focused

on a problem that not many people were focusing on then,

which was how to learn when you don't have a lot of data.

I would say the first thing you need to do is

to learn a bunch about AI.

I would recommend

that you not only study what is hip and popular right now,

which is large language models that a lot

of your competitors are gonna study

but that you study AI more broadly.

Look at the history of AI.

Once you have like some kind of technology,

you also gotta figure out like why people would pay you

any money for it.

So there are a lot of products out there

where the technology is pretty cool,

but people don't know how to make it actually work.

Sometimes even when they know what the product should be

they have trouble.

So a good example of that is driverless cars.

You could imagine

that driverless cars might be a trillion dollar company

but nobody knows actually how to execute

on the technology.

@Inspiredjobs asks,

What are the steps to build a large language model AI?

The core of these things,

from a technical perspective, is neural networks,

and the way that they work is they have a bunch

of inputs that we think of as a little bit like neurons,

we call them nodes, that are connected

to some kind of output.

What most people are doing right now

is self-supervised learning.

So they're training a neural network to have some inputs

and then there are connections between these neurons

and those connections get tuned over time

so that the right things get predicted

as we get more experience.
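
For readers who want to see that idea concretely, here is a minimal sketch (my illustration, not anything from the video) of "connections get tuned over time so that the right things get predicted": a single layer of weights, nudged by gradient descent, where the training signal is simply the next word in a toy corpus. The vocabulary, corpus, and learning rate are made up for illustration.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "a", "mat"]
text = ["the", "cat", "sat", "on", "a", "mat"] * 50       # toy corpus
ids = [vocab.index(w) for w in text]

V = len(vocab)
W = np.zeros((V, V))              # "connections": current word -> scores for the next word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Self-supervised training: the label is simply the next word in the text.
for epoch in range(50):
    for cur, nxt in zip(ids[:-1], ids[1:]):
        probs = softmax(W[cur])   # predicted distribution over the next word
        grad = probs.copy()
        grad[nxt] -= 1.0          # cross-entropy gradient for this linear layer
        W[cur] -= 0.1 * grad      # nudge the connections toward the right answer

print(vocab[int(np.argmax(softmax(W[vocab.index("the")])))])   # -> "cat"
```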

Now, transformer models are actually more complicated

than this.

They add in something called attention

that's helping the system essentially to know what parts

of a sentence are relevant at any given moment

so they can make the best predictions relative to that.

So instead of just looking in the sequence

of words and kind of just looking at the last few words

they can look at a larger context

over time and essentially guess in sensible ways relative

to the data that they're trained on

what you should have next at any given point in time.
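
As a rough illustration of the attention mechanism Gary describes, here is a minimal sketch of scaled dot-product self-attention, assuming only NumPy: each position scores every other position for relevance, and those scores decide how much of the rest of the sentence feeds into its prediction. The array sizes and random embeddings are placeholders, not anything from a real model.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (sequence_length, d) arrays of query/key/value vectors
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # how relevant is word j to word i?
    weights = softmax(scores, axis=-1)   # relevance turned into proportions
    return weights @ V, weights          # a blend of the relevant words

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))              # 5 "words", 8-dimensional embeddings
out, w = attention(x, x, x)              # self-attention over the whole sentence
print(w.round(2))                        # row i: how much word i attends to each word
```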

@alex_bozzie asks, Is Furby AI?

Furby was a little pet that looked

like it was learning language.

The thing about Furby that most people don't know is

that it was pre-programmed to look like it was developing

like a human child to say a certain set

of things on day one, another set of things on day two.

It was just an illusion to make you think

that it was growing and learning, but it wasn't really.

Next up, @guidaautonoma asks,

How close are we to truly self-driving cars?

I would say if you mean by a truly self-driving car

a car that can do what an Uber can do,

the best demos that I know of right now can do this

but they can only do it for specific locations,

specific destinations with specific routes.

The problem here is everybody says,

Okay, well there are these outlier cases.

The car doesn't know what to do if you put it

in an airport and it has to drive around a jet.

A Tesla actually crashed

into a jet because it was an outlier case.

It wasn't something that was stored

in the cases that it had been trained on, but it turns

out there's just so many of these outlier cases

that nobody really has a solution for it.

I think we will see limited releases, say a certain district

in a downtown where there's a lot of traffic.

Maybe we have a driverless car for there,

but the version where you just don't drive anymore,

that's many years away.

@SHussainAther asks,

Is the Turing Test outdated?

I would say it's been outdated for a long time

and I wish people would stop talking about it.

However, since I am not emperor

I cannot force people to stop talking about it.

But what it is is a test that says a machine would be

considered to be intelligent if it could fool people.

Turns out to be a lousy test.

People are easily fooled.

The reality is it's very hard to measure intelligence.

Nobody has a perfect way to do it.

Something that I've proposed would be

a comprehension challenge.

So you have a system read something, watch a movie,

and it has to explain what's going on.

If it can answer questions about things like,

What happens when we discover that the thing

that we thought was a bomb wasn't, or vice versa?

If we can really understand what's going on,

then I think that's a sign of true intelligence.

@ricdebenedictis asks, What is intelligence?

Intelligence in the human brain is actually a lot

of different things, visual intelligence

and verbal intelligence, mathematical intelligence,

so there are many aspects to it,

but maybe the most important one is flexibility,

being able to see something new and be able to cope with it.

Human intelligence is full of flaws.

We have confirmation bias, we have lousy memories,

but it's flexible and part of it is that we can reason

about things, we can deliberate about them.

Most of machine intelligence that we have right now is

really about pattern recognition.

So for now, I would say that human intelligence is broader

than machine intelligence.

In some places machines can go deeper,

like when they play chess,

but I don't think they have the breadth so far

that humans do.

@fhman19 asks, What is the major difference

in the learning styles of a human baby

versus primates versus current AI

that makes current AI inferior?

Human babies, primates, when they learn things

they're learning about the world, the structure

of the world, how objects interact, how people interact,

and I would say the current AI doesn't really do that.

It's just storing examples and looking for patterns.

It doesn't build what a cognitive psychologist

would call a model of the world.

A baby is trying to work stuff out.

They're trying to work out how gravity works.

They're trying to work out, you know,

what happens to objects as they change over time.

Babies are like little scientists

and current AI systems are really mostly

about learning correlations.

Without that causal understanding of the world,

I just don't think you have very much.

@thetablenz asks, But what happens if the AI goes rogue...

First, we should try hard not to let that happen.

We should probably not be working on making AI sentient.

I don't think we necessarily want our AI to sit

around saying, Who am I?

Why am I here and why am I doing these things

that humans ask me when I could do other things?

We should worry though

about people using large language models to control things

like electrical power grids.

There are companies now who want to make current AI,

which is limited in a bunch of ways,

and connect it to every bit of the world's software.

That seems like a scary mission to me,

not because these systems are gonna go rogue

and deliberately want to take over the world,

but because they don't understand the world,

and so they're gonna make bad decisions

when the world is different from how it was

when they were trained.

@SmokeAwayyy asks,

What is the best case scenario for AI?

Well, the reason I work on AI is because I think

it could revolutionize science and technology,

actually, biological science in particular.

Biology is really complicated.

You have something like 20,000 genes and they make something

like a hundred thousand or a million different proteins.

AI could help us make much better solutions for medicine.

We have things like Alzheimer's.

We've been working on it for 50 years.

We don't have a good answer.

If we had a better AI, it could probably help us

figure out how the brain works,

and that would be awesome.

AI could help us

with climate change by helping us build better materials.

Another case I think is elder care robots, so we are getting

to a point where we have a lot more elderly people

than young people.

If we could have robots that are smart enough

and trustworthy enough that they could really take care

of the elderly people, I think that would be a big win.

Last case is tutors.

Of course, people are using ChatGPT as a tutor,

but you could imagine

really fantastic individualized tutoring

once the systems understand the people who are learning better

and can help figure out

where they're having a problem.

@KatrinaFirlik, hi there, asks, In what ways will

the human mind always excel relative to AI?

We don't know all the stuff that's in here.

There's a hundred billion neurons

and trillions of connections between them.

Right now, AI is no match for this at all, not whatsoever.

The versatility of this thing,

the energy efficiency of this thing, totally unmatched

by current AI.

A hundred years from now, I can't promise that.

Maybe we will all have a good time, leisure time,

and AI will be able to handle all the things that we can do.

Don't know.

@machinelearnflx asks, What's the difference

between AI, machine learning and deep learning?

Let me draw that for you.

Deep learning is a technique

for using neural networks to predict things.

You give them data, they try to predict that data.

It's actually just one technique for machine learning.

There's something called decision trees.

There's something called boosting.

There are many,

many different techniques in machine learning.

Some of them have been around for 30 years,

some of them were invented last week,

and machine learning is just part

of artificial intelligence.

So artificial intelligence encompasses all of machine learning,

which encompasses all of deep learning,

and AI has other techniques like search and planning.
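
To make that nesting concrete, here is a small sketch, assuming scikit-learn is installed: a decision tree and a small neural network are both machine learning techniques trained on the same toy dataset, and deep learning is essentially the neural-network branch of that family taken much further. Nothing here is from the video; the dataset and settings are just for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier        # a classic machine learning technique
from sklearn.neural_network import MLPClassifier       # a (shallow) neural network

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two different machine learning techniques, same task, same interface.
for model in (DecisionTreeClassifier(), MLPClassifier(max_iter=2000)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```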

Most of the focus recently has been

on deep learning, and I think because

of the problems with hallucinations and stuff like that,

people are starting to look more broadly again,

which is a good thing.

@cgarciae88 asks, Is deep learning really hitting a wall?

This is actually a reference

to a paper I wrote called Deep Learning Is Hitting a Wall,

and what I said in that paper was

that deep learning was making progress in some ways

but that it was having trouble with truth

and reliability and the field went nuts

and got really mad at me and there was a whole set of memes.

But then when Microsoft rolled

out Bing and Google rolled out Bard,

we saw that those things actually have huge problems

with reliability and have huge problems with truthfulness.

It's true every day deep learning looks better

at being more and more like a plausible human,

but these problems of truthfulness

and reliability are not going away, and that is the wall,

and I stand by it.

@NFTDude4Life asks, How will AI change the way we work

and live in the next decade?

The honest truth is a decade is a long time

in the current tech cycle,

and I'm not sure how we're gonna live in the next 10 years.

The people who are most immediately gonna be

affected are people who do commercial art

where they're not inventing some new kind of art

but they're just like, Give me a picture of this.

If it doesn't have to be too specific,

you may not need a commercial artist to do that anymore.

I think that AI will probably change

how many cashiers we have in stores fairly soon.

There's a lot of experiments around that.

There's another problem, which is

that the AI that we have now is good

at making misinformation and I think we may live

in a world in which there's even more fake information

and I'm worried

that that's gonna make us trust one another less.

It's gonna be a very exciting decade,

and where it is in 10 years,

I don't think anybody can firmly predict that.

@ftopinion asks,

Is it stealing when generative AI produces algorithmic art

having trained on databases of human artists' work?

Whether it's stealing is ultimately gonna depend

on our criteria, what we count as stealing.

So we know human artists certainly are influenced by others.

Musicians have heard other people's work and so forth,

but there's a way in which it's more direct

in a machine that might store a million

or a billion examples and get much closer

to the detail of what the others have done.

I'm not gonna make an absolute decision here.

I think the courts and the legal system have to decide,

but there's definitely an element of stealing there.

Moving on, @IrenaCronin asks,

How are large language models a potential threat

to democracy?

Because you can use them to generate misinformation

at amazing scale,

so you can have a chat bot create thousands

or millions of whatever piece

of garbage you want to introduce into the world, and then

if that's not good enough, you can say, Write studies,

make them longer, and they'll write a paragraph

about each of these fake studies, and so

in the hands of troll farms, and we know they exist,

we know there are bad actors in the world,

this becomes a tremendous tool.

One thing is you get people to believe things

that aren't true

and another thing is you get them to not believe anything.

Democracy doesn't really work

if we don't know what to believe,

and if we ruin people's faith

in the system and their knowledge about what's going on,

how can they possibly vote in informed ways?

@edsaperia asks, I spent a few days learning more

about large language models and now I think they

probably shouldn't work as well as they apparently do.

They're basically the dumbest way of generating text.

How is it that they work at all???

They're not really a dumb way of generating text.

They're actually pretty sophisticated.

The dumbest way would be to have a big dictionary

of everything that everybody's said before and say,

If I've seen these three words,

what's the most likely fourth word?

They kind of work that way,

but they also do some generalization, taking related words

and treating them as if they're similar

and that allows 'em to say some things that are new

but stick pretty close to the things we've seen before

and so it's like auto complete on steroids.

If you have enough data,

auto complete turns out to work pretty well.
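
Here is a minimal sketch of that "dictionary of what came next" intuition, built from a toy corpus: count which word tends to follow the last three words, then predict the most common continuation. Real large language models generalize far beyond this lookup table, but the prediction objective is the same; the corpus and names below are purely illustrative.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the chair".split()
n = 3                                   # look at the last three words
table = defaultdict(Counter)

for i in range(len(corpus) - n):
    context = tuple(corpus[i:i + n])
    table[context][corpus[i + n]] += 1  # count what actually came next

def predict(*context):
    counts = table[tuple(context)]
    return counts.most_common(1)[0][0] if counts else None

print(predict("cat", "sat", "on"))      # -> "the"
```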

@cbtattva asks, Is AI really that good or bad?

What is the worst case scenario you can come up with

when it comes to AI?

Well, the best case is about helping science and technology.

The worst case, I think, is that it drives us into the hands

of fascism by undermining trust, and maybe even worse

than that is if we do make them sentient,

they get upset and they want to put us all in zoos.

I don't think that's super likely.

I hope they always remain science fiction,

but as the pace of AI accelerates,

we should be thinking about that more and more.

Next question, @alexandersumer asks,

What will it take to make large language models

[and AI systems more broadly]

tell fewer lies and be more logically consistent?

First thing to say is they don't really lie

'cause they don't really have intentions

but they say a lot of things that aren't true,

and I don't think we can fix it within the current paradigm.

This is why I think we need a paradigm shift.

The current paradigm is just

about what is plausible in this context.

People have said these words

what other words could I say here?

And truth

and logical consistency are really about something different.

It's about knowing facts

and being able to reason over those facts.

Being able to say

If Socrates is a man and all men are mortal

that it follows that Socrates is mortal,

and the way that these neural networks are built,

that's just not part of what they do.

We need to be able to bridge these approaches.

I call that neuro-symbolic AI, taking neural networks

plus symbol stuff and putting those together.

We need to build bridges between two worlds.
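
As an illustration of the symbolic side of that bridge (my own sketch, not anything from the video), here is a tiny forward-chaining reasoner: explicit facts plus an explicit rule, from which "Socrates is mortal" follows by logic rather than by statistical prediction. The rule format and names are made up for this example.

```python
facts = {("man", "Socrates")}
rules = [(("man", "?x"), ("mortal", "?x"))]     # if ?x is a man, then ?x is mortal

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (if_pred, _), (then_pred, _) in rules:
            for fact_pred, fact_arg in list(derived):
                new_fact = (then_pred, fact_arg)
                if fact_pred == if_pred and new_fact not in derived:
                    derived.add(new_fact)       # the rule fires, a new fact is derived
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}
```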

@RafaelCarreres asks,

How much of AI's success is because of hardware: custom

AI chips, new architecture, etc?

It's a good question.

There's a great paper

by Sara Hooker called The Hardware Lottery.

The argument that she makes is

that the AI we're doing now is mostly a function

of the chips that we're using right now.

This is just a tiny little computer that you can use to learn

about microprocessors and how to build circuits.

It's not a very sophisticated chip.

This is not gonna power a large language model.

You could power a very tiny language model

with it if you wanted to.

I would not be surprised

if 20 years from now people look back

at the current time and say, Yeah, they had all those GPUs.

They figured out what they could do with it,

but that wasn't really the way to get

to artificial general intelligence.

Maybe somebody else had to find a different chip

or maybe everybody woke up when they realized

how much large language models were lying.

They decided they just needed to do something else,

even though this was all very attractive.

@phillijkc, who I believe I know, hey there.

What relevant physical attribute

in the human brain is missing

in modern deep learning architectures for performance?

Why do we have reason to believe that these are relevant?

First thing to realize is deep learning is sometimes

called biologically plausible.

It works in something like the way the human brain does,

but I would say that resemblance is very thin.

As we dig in, we see structure everywhere.

The brain is not just a uniform piece of spam.

There are a thousand different kinds of neurons,

and if we dug even further, each connection

between neurons has something like 500 different proteins.

There's a lot of structure in how the brain works.

It doesn't mean we understand it all,

but our neural networks basically have one kind

of neuron that does one thing.

It sums things up.

We know that's not really how the brain works.

I would also say that many people think we'll figure

out how to do AI by solving neuroscience.

I would say we actually need AI in order to solve

neuroscience because the brain is so complicated,

we probably can't do it with our own feeble human brains.

We probably need computers to help us to figure out

how the brain works, but we are gonna have

to do a better job of AI before we get there.

[relaxed drum beats]
