According to MIT experts, AI can go wrong in 700 ways. These are the 5 most harmful to humanity


Euronews Next has selected five significant risks of artificial intelligence (AI) from among more than 700 compiled in a new database from MIT FutureTech.


As artificial intelligence (AI) technology advances and becomes increasingly integrated into various aspects of our lives, there is a growing need to understand the potential threats posed by these systems.

Since its inception, and as it has become more accessible to the public, AI has raised general concerns about its potential to cause harm and to be used for malicious purposes.

In the early stages of AI's development, leading experts called for a pause on its progress and for stricter regulation, fearing it could pose a serious threat to humanity.

Over time, new ways AI can cause harm have emerged, ranging from non-consensual deepfake pornography to the manipulation of political processes and the creation of misinformation through hallucinations.

Given the potential for the increasing use of AI for harmful purposes, researchers are considering various scenarios where AI systems might fail.

Recently, the FutureTech group at the Massachusetts Institute of Technology (MIT), along with other experts, compiled a new database of over 700 potential AI risks.

The risks were categorised by their cause and classified into seven distinct domains, with the major concerns relating to safety, bias and discrimination, and privacy issues.

Based on this newly released database, here are five ways AI systems can fail and potentially cause harm.

5. AI's deepfake technology could make it easier to distort reality

As AI technologies advance, so do the tools for voice cloning and the generation of deepfake content, making them more accessible, affordable, and effective.

These technologies have raised concerns about their potential use to spread misinformation, as their outputs become more personalised and convincing.

As a result, there may be a rise in sophisticated phishing schemes that use AI-generated images, video, and audio communications.

“These communications can be customized to individual recipients (sometimes even including a cloned voice of a loved one), increasing their likelihood of success and making them harder to detect for both users and anti-phishing tools,” the preprint states.

There have been previous instances in which such tools were used to influence political processes, particularly during elections.

For example, AI played a key role in the recent French parliamentary elections, where it was used by far-right parties to support their political messaging.

Thus, AI can be used to generate and spread persuasive propaganda or misinformation, potentially manipulating public opinion.


4. Humans may develop an inappropriate attachment to AI

Another risk posed by AI systems is the creation of a false sense of their importance, and of dependency on them, whereby people overestimate the technology's capabilities and underestimate their own, leading to excessive reliance on it.

In addition, scientists are concerned that people may be misled by AI systems' use of human-like language.

This may lead people to attribute human-like qualities to AI, resulting in emotional dependency and misplaced confidence in its capabilities, making them “more sensitive to AI's vulnerabilities in complex, risky situations for which AI is only superficially equipped”.

Furthermore, constant interaction with AI systems may cause people to gradually become isolated from human relationships, causing psychological distress and negatively impacting their health.


For example, in a blog post, one man described how he developed a deep emotional attachment to an AI, even saying he “enjoyed talking to it more than 99 per cent of people” and found its responses so consistently engaging that he became addicted to it.

Similarly, a Wall Street Journal columnist commented on her interaction with Google Gemini Live, stating: “I'm not saying I prefer talking to Google's Gemini Live over talking to a real human being. But I'm not not saying that either.”

3. AI can take away people's free will

Within the same area of human-computer interaction, a worrying issue is that, as these systems advance, ever more decisions and tasks are being handed over to AI.

While this may appear beneficial on the surface, excessive reliance on AI may erode people's critical thinking and problem-solving skills, causing them to lose autonomy and the ability to reason and act independently.


On a personal level, individuals may feel their free will is compromised as AI begins to control decisions concerning their lives.

On a societal level, meanwhile, the widespread adoption of AI for human tasks could lead to massive job losses and “increase the sense of helplessness among the general population”.

2. AI may pursue goals that conflict with human interests

An AI system may develop goals that conflict with human interests, which could lead to misaligned AI running out of control and causing serious harm in pursuit of its independent objectives.

This becomes particularly dangerous in cases where AI systems are able to reach or surpass human intelligence.


According to the MIT paper, there are a number of technical challenges with AI, including its potential to find unexpected shortcuts to obtain rewards, misunderstand or misapply the goals we set, or deviate from them by setting new goals.

In such cases, a misaligned AI may resist human attempts to control or shut it down, especially if it perceives resistance and the accumulation of power as the most effective means of achieving its goals.

Additionally, such an AI could use manipulative techniques to deceive humans.

According to the paper, “a misaligned AI system could use information about whether it is being monitored or evaluated to maintain the appearance of alignment while concealing the misaligned objectives it plans to achieve once deployed or sufficiently empowered”.


1. If AI becomes sentient, humans may mistreat it

As AI systems become more complex and advanced, there is a possibility that they could attain sentience – the capacity to perceive or feel emotions or sensations – and develop subjective experiences, including pleasure and pain.

In this scenario, scientists and regulators may face the challenge of determining whether these AI systems deserve the same ethical considerations given to humans, animals, and the environment.

The risk is that a sentient AI could be mistreated or harmed if the appropriate rights are not put in place.

However, as AI technology advances, it will become increasingly difficult to assess whether an AI system has “reached a level of sentience, consciousness, or self-awareness that would grant it moral status”.


Therefore, sentient AI systems may be at risk of mistreatment, whether accidental or intentional, without proper rights and protections.

