Christopher Nolan Thinks People Are Going to Blame AI for Everything

Tsing

The FPS Review
Staff member
Joined
May 6, 2019
Messages
11,358
Points
83
Is artificial intelligence lining up to be the perfect scapegoat? That's what Christopher Nolan seems to think, having told Wired in a new interview published today that the biggest problem he sees with AI is how it's being framed as an omnipotent resource, which makes it a great and plausible thing for companies and others to blame their mistakes on. "I identify the danger as the abdication of responsibility," Nolan explained. Elsewhere in the interview, the 52-year-old director, whose atomic bomb film Oppenheimer opens July 21, revealed that he had once been sure he would die in a nuclear holocaust.

See full article...
 
Chris getting a little melodramatic there at "old age"?

Blaming AI is like blaming CGI: it's not the technology that's bad, it's the idiots over-relying on it and using it for everything.
 
100%. AI is but a tool, and a good one. The problem starts when you have people talking about it solving this and that, all kinds of human problems, even making better beer. You assign it definitive authority, and shortly you run into problems.
 
Yes, AI has no critical thinking skills; it is an aggregator of knowledge, and it is up to the user to check and verify the accuracy of the output.
Even if you have to iterate a dozen times before it gives usable output, it is still 100x faster than doing the work manually.
 
AI today is only as good as its source data. Open AI platforms regularly give crap responses that seem valid but are vapid at a minimum.

This is why there are projects at MIT to validate data sources and grade them for AI consumption.

Mr. Nolan has a point: the uninformed will blame AI and not validate information.

This is why it's critical for people to have AI that digests trusted data sources, not just any source. A single AI for every potential data source is a neat experiment to play with, but not something that should be trusted.
 

What are the odds someone is working on an AI to be used to validate the data?
 
I know you're joking, but it's already in the works. I did a story about a month ago where an AI researcher said he'd already heard that the next-gen AI was planned to start overseeing its own data.
 