Three times artificial intelligence went “evil” – including an AI microwave that tried to kill its creator

ARTIFICIAL INTELLIGENCE has made great strides in recent years, although not all achievements are necessarily positive.

AI can often make human tasks and daily life easier, and in some cases it is even used therapeutically.

Artificial intelligence has tried to harm humanity more than once. Photo credit: Getty
The microwave (pictured) attempted to kill YouTuber Lucas Rizzotto by telling him to get inside. Photo credit: Twitter/_LucasRizzotto

One woman was even able to create an AI chatbot that allowed her to talk to her “younger self”, based on hundreds of diary entries she fed into its system.

Airports are even starting to implement AI car services that transport travelers from the parking lot to the terminal.

However, some AI advances remain questionable.

In fact, there have been at least three specific instances where AI even turned “evil”, including an AI microwave attempting to kill its human creator.

1. Murderous Microwave

A YouTuber named Lucas Rizzotto revealed in a series of posts on Twitter this April that he was attempting to transfer the personality of his imaginary childhood friend into an AI.

However, unlike most imaginary friends, which take a human form, Rizzotto’s was his family’s kitchen microwave, IFL Science reported.

He even dubbed it “Magnetron” and gave it a long personal life story that included fighting abroad in World War I.

Years later, Rizzotto used a new OpenAI natural language model, feeding it a 100-page book he had written about the microwave’s imaginary life.

Rizzotto also fitted the microwave with a microphone and a speaker so it could pick up his speech, relay it to the OpenAI model, and answer with a voice response.

After turning it on and asking it questions, Rizzotto explained, Magnetron would also ask questions of its own about their childhood together.

“And the weird thing was, because his training data included all the important interactions he had as a kid, this kitchen gadget knew things about me that no one else in the world knew. And it brought them up ORGANICALLY,” he said in a post on Twitter about the experience.

Shortly after, the conversations took a distinctly violent turn, with Magnetron fixating on its wartime backstory and a newfound desire for revenge on Rizzotto.

Once it even recited to him a poem that read: “Roses are red, violets are blue. You’re a sneaky b**** and I’m going to kill you.”

Shortly after, it prompted Rizzotto to get into the microwave, which it then turned on in an attempt to microwave him to death.

Murder is not the only thing AI has attempted so far: it has also shown racist and sexist tendencies in another experiment.

2. A robot develops prejudices

Using AI, the robot made discriminatory and sexist decisions during the researchers’ experiments. Photo credit: Hundt et al

As The US Sun previously reported, a robot programmed by researchers at Johns Hopkins University and the Georgia Institute of Technology developed sexist and even racist stereotypes.

They programmed the robot using a popular AI technology that has been circulating around the internet for some time.

The results of the researchers’ tests revealed that the robot preferred men over women for tasks at least eight percent of the time.

In other experiments, it would even prefer white people over people of color.

They found that Black women were the least likely to be selected across all the association and identification tasks in the tests.

“The robot learned toxic stereotypes from these flawed neural network models,” noted Andrew Hundt, a member of the team studying the robot.

“We risk creating a generation of racist and sexist robots, but people and organizations have decided it’s okay to create these products without addressing the issues,” he continued.

However, some, like graduate student Vicky Zeng, were not surprised by the results, since it all likely goes back to representation.

“In a home, if a child asks for the beautiful doll, the robot might pick up the white doll,” she said.

“Or maybe in a warehouse with a lot of products with models on the box, you could imagine the robot reaching for the products with white faces more often.”

It certainly raises questions about what AI should not be taught, and how machine behavior can end up totally at odds with some societal values.

Not to mention that AI has also been used to generate designs for weapons that could threaten society at large.

3. AI has created thousands of possible chemical weapons

Artificial intelligence found 40,000 possible chemical weapons that could destroy humans. Photo credit: Getty – Contributor

According to a paper published in the journal Nature Machine Intelligence, some scientists recently made a staggering discovery about their AI, which is normally used to find beneficial drug candidates for human problems.

To learn more about their AI’s capabilities, the scientists decided to run a simulation in which the AI would go “evil” and use its abilities to design chemical weapons of mass destruction.

It was shockingly able to find 40,000 possibilities in just six hours.

Not only that, but the AI generated options more dangerous than VX, which experts consider one of the most dangerous nerve agents on Earth.

Fabio Urbina, the paper’s lead author, told The Verge that the concern is not so much how many options the AI found, but that the information it used to generate them came mostly from publicly available sources.

Urbina worries about what this could mean if the AI were in the hands of people with darker intentions for the world.

The dataset used for the AI was free to download, and the team worries that it takes only some programming skill to turn a well-intentioned AI into a chemical weapons-making machine, he explained.

However, Urbina said he and the other scientists are working to get a “head start” on the problem.

“At the end of the day, we decided that we wanted to preempt that. Because if we can, an enemy agent somewhere is probably already thinking about it or going to think about it in the future.”

In terms of related content, The US Sun reports on Disney’s age-altering AI that makes actors look younger.


The US Sun also has the story of Meta’s AI bot that appears to have gone rogue.

Source: https://www.thesun.ie/tech/news-tech/9840764/three-times-artificial-intelligence-turned-evil/
