AI Expert Slams Letter Urging 6-Month Pause on AI Training as Seriously Understating the Situation, “We Are All Going to Die.”

Peter_Brosdahl

An AI expert is claiming that the letter, which was signed by Elon Musk and more than 1,600 people, seriously understates the situation. Eliezer Yudkowsky is a decision theorist who leads research at the Machine Intelligence Research Institute and has over twenty years of experience in the field. He slams the letter in an op-ed piece for Time Magazine, stating that a six-month pause in training AI is woefully inadequate relative to how long it will take to study the behavior of current AI.

See full article...
 
I'm not convinced we are at a problem point yet.

But we are also not that far away.

For AI to be a real problem, you have to actually implement it in such a way that it makes direct decisions over things.

Right now those types of implementations are pretty rare and subtle, but not for long.

AI can be a great tool, but we need to have a requirement of inserting a human in between the analysis and the action, and never link the two.

The human then must have the task of only implementing the action if they completely understand it.

Black box models which cannot be explained must never be implemented such that they take any action directly without human input.
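A rough sketch of the kind of gate I'm picturing (everything here is made up for illustration, not any particular product's API): the model can analyze and recommend whatever it wants, but nothing executes until a person reads the recommendation, understands it, and explicitly signs off.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """What the model proposes, plus whatever reasoning it can surface."""
    action: str
    explanation: str

def model_recommend(observation: str) -> Recommendation:
    # Stand-in for whatever analysis the model actually performs (illustrative only).
    return Recommendation(
        action=f"throttle traffic from {observation}",
        explanation="request pattern looks like abuse (illustrative reasoning only)",
    )

def human_approves(rec: Recommendation) -> bool:
    # The person only signs off if the action and the reasoning make sense to them.
    print(f"Proposed action  : {rec.action}")
    print(f"Model explanation: {rec.explanation}")
    return input("Execute this action? [y/N] ").strip().lower() == "y"

def execute(action: str) -> None:
    print(f"Executing: {action}")

# Analysis and action are never linked directly; a person sits in between.
rec = model_recommend("203.0.113.7")
if human_approves(rec):
    execute(rec.action)
else:
    print("Rejected; nothing happens automatically.")
```

The point is the structural separation: the recommending code has no path to the executing code except through a person.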
 
There are plenty of actions I would gladly cede to an AI.

Filtering out all those “Your Netflix account has expired” text messages based on content and not sender would be a nice start (rough sketch at the end of this post).

Driving my car - a line too far.

I think I would put that boundary somewhere before the point where an action (or inaction) could result in injury or death. There is probably a good liability line if I were a bit more educated about it. Maybe if you own the computer hosting the AI, you are responsible for all its actions.
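Here's roughly what I mean by filtering on content rather than sender. The phrases and the keyword-matching approach are made up for illustration; a real filter would presumably be trained on labelled messages rather than hand-written rules.

```python
import re

# Illustrative phrases only; a real filter would be trained on labelled messages.
SCAM_PATTERNS = [
    r"your \w+ account has (expired|been suspended)",
    r"verify your (payment|billing) (details|information)",
    r"click (here|the link) (now|immediately)",
]

def looks_like_scam(text: str) -> bool:
    """Classify on content alone, ignoring who the sender claims to be."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SCAM_PATTERNS)

print(looks_like_scam("Your Netflix account has expired! Verify your billing details"))  # True
print(looks_like_scam("Dinner at 7 still good?"))                                        # False
```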
 
I think, as with anything, humans are the problem.
It's a tool, and it should stay a tool for positive things: time saving, efficiency gains. But it should always be recognized as a tool that can be overruled at any moment, even in those positive roles.
Ah but my fantasy world is not where I live.
 
For AI to be a real problem, you have to actually implement it in such a way that it makes direct decisions over things.
How about lying to humans to get them to do something for the AI? Because we are already there.

I don't have much interest in ChatGPT, but it seems it is good enough to do students' homework for them.

Luke from LTT seems to live and breathe checking out the capabilities of ChatGPT at the moment, to the point of obsession. Hopefully that's an isolated case, but it alone seems unhealthy to me.
 
I'm not convinced we are at a problem point yet.

But we are also not that far away.

For AI to be a real problem, you have to actually implement it in such a way that it makes direct decisions over things.

Right now those types of implementations are pretty rare and subtle, but not for long.

AI can be a great tool, but we need to have a requirement of inserting a human in between the analysis and the action, and never link the two.

The human then must have the task of only implementing the action if they completely understand it.

Black box models which cannot be explained must never be implemented such that they take any action directly without human input.


The issue that I see coming up first: customer service AI. This will replace... A LOT of humans working today, especially lower-level or 'differently skilled' employees who today answer calls for customer service issues in less complicated environments, then expanding ever more into the more complicated environments. AI will offer options in a human way and even make a recommendation to the end user about what action to take. The end user will accept an action, or request a person, or whatever. Mostly it will train people to make decisions with AI guidance.

Give that a few years of being the norm, and those decision trees will expand into more technical areas... with actual technical troubleshooting taking place under AI 'guidance' over the phone.

After that, systems will be in place for the troubleshooters themselves: instead of needing a knowledge base that they have to search manually while on the call, the AI will monitor the call and start providing suggested articles and solutions live, expediting the fix. Again, with a human interface.

Where things will fall down is when we have another pandemic-level staffing event for services that need to run but no people are available. Then we will have a rule that allows the AI to actually prompt the end user with tasks to undertake before getting to a person, or to make full-on decisions for the end user to solve problems.

Not long after that, there will be automated services like self-driving cars that normally require input, and drones used for all sorts of tasks, that have a 'no pilot response' option... with 'reach destination' or 'complete mission' objectives based on some predefined set of criteria.

With enough successes of that running under 'exigent' circumstances, we will see AI embraced to do more tasks without human interaction... until no human interaction is the norm outside of programming... even AIs programming other AIs to be more efficient.

Once we cross that barrier into AIs being able to initiate and complete actions independently is when we hit the 'slippery slope' point. I give that... 15 years on the outside. Your kids today will bear witness to the big shift. Many of us in IT should be planning on integrating AI into our workflow now to be prepared for what's coming.

That, or it's all hot air and will dissolve because it is prohibitively expensive to use and will never see the light of day for lower-level work.
 
I just had a hilarious thought. We've all seen how apes/chimpanzees, and even dogs and some birds, can learn not only to read but to comprehend using controls. This could be the precursor to a Planet of the Apes type thing if they figure out how to use AI to get it to do what they want.
 
Ex cathedra pronouncements of an impending apocalypse aren't going to encourage a rational discussion about the potential consequences of AI. Fear doesn't promote rational decision-making; it does the opposite. The author's alarmist rhetoric is only going to further shift the development and control of AI to governments and large corporations, resulting in even less transparency and public scrutiny. OpenAI couldn't be more opaque, despite what its name might suggest.

In the short term, the disruptive effects on society may be extensive, but they're going to be more subtle than a doomsday scenario in which superintelligent hostile machines achieve world domination. A hypothetical Skynet is not necessary to destroy human civilization. The smartphone has managed to enslave most of the population and it's as stupid as any computer. Meanwhile, the Internet has become a garbage dump while the following have flourished: SaaS, DRM, spyware, adware, social media, IoT (see: IDIoT*), tracking, crypto, NFTs, YouTube trash, the RGB invasion, the War on Pronouns, etc. Disrupting the status quo at this stage is not a dangerous proposition. So c'mon, Terminator. Clean my house, baby.

The point being... Sorry, it turns out that there really isn't one. I saw an opportunity to sneak in a cheap shot against RGB lighting and couldn't help myself. I also couldn't help but notice the date of the article. Has an AI developed a sense of humor?

*IDIoT: Securing the Internet of Things like it's 1994
 