The Dangers of Artificial Intelligence and Robots to Humans

Artificial Intelligence is growing more sophisticated by the day, and it brings risks ranging from the mild to the catastrophic and even existential. The level of risk posed by Artificial Intelligence is so heavily debated largely because there is a general lack of understanding of Artificial Intelligence.

It is generally thought that Artificial Intelligence can be dangerous in two ways:

  1. The Artificial Intelligence is programmed to do something malicious.
  2. The Artificial Intelligence is programmed to be beneficial but does something destructive while achieving its goal.

These risks are amplified by the sophistication of Artificial Intelligence software.

Job Automation and Disruption

Automation is a danger of Artificial Intelligence that is already affecting society.

From mass-production factories to self-serve checkouts to self-driving cars, automation has been occurring for decades, and the process is accelerating.

The issue is that, for many tasks, Artificial Intelligence systems outperform humans: they are cheaper and more accurate. For example, Artificial Intelligence is already better than human experts at recognizing art forgery, and it is becoming increasingly accurate at diagnosing tumors from radiography imagery.
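To make this kind of pattern recognition concrete, here is a minimal, purely illustrative Python sketch: it trains an off-the-shelf classifier on scikit-learn's built-in breast-cancer dataset (numeric measurements derived from tumor imagery) and reports its accuracy on held-out cases. The dataset and model choice are illustrative assumptions, not the diagnostic systems referred to above.

    # Illustrative sketch only: a toy classifier on scikit-learn's built-in
    # breast-cancer dataset, standing in for the far more complex radiology
    # models alluded to above.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Numeric features derived from tumor imagery, labeled benign or malignant.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Fit a standard pattern-recognition model and check how often it is right.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))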

A further problem is that many of the workers displaced by automation are ineligible for the newly created jobs in the Artificial Intelligence sector because they lack the required credentials or expertise.

As Artificial Intelligence systems continue to improve, they will become far more adept than humans at tasks such as pattern recognition, generating insights, and making accurate predictions. The resulting job disruption could lead to increased social inequality and even economic disaster.

Security and Privacy

In 2020, the UK government commissioned a report on AI and UK National Security, which highlighted the necessity of Artificial Intelligence in the UK’s cybersecurity defenses to detect and mitigate threats that require a faster response than human decision-making can provide.

The problem is that we can only hope that, as Artificial Intelligence-driven security threats rise, Artificial Intelligence-driven countermeasures keep pace. Unless we can develop measures to protect ourselves against these threats, we risk running a never-ending race against bad actors.
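As a rough, hedged illustration of the automated threat detection described above, the Python sketch below fits an unsupervised anomaly detector to simulated “normal” network traffic and flags outliers among new events. Every feature, number, and threshold in it is invented for illustration; real intrusion-detection pipelines are considerably more involved.

    # Illustrative sketch: flagging unusual network activity with an
    # unsupervised anomaly detector. All features and values are made up.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" traffic: columns might be bytes sent and duration.
    normal_traffic = rng.normal(loc=[500.0, 2.0], scale=[50.0, 0.5], size=(1000, 2))

    # Fit the detector on normal behavior only.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # Score new observations as they arrive; -1 marks a suspected anomaly.
    new_events = np.array([[510.0, 2.1],      # looks ordinary
                           [50000.0, 90.0]])  # looks like data exfiltration
    print(detector.predict(new_events))       # typically prints [ 1 -1]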

Artificial Intelligence Malware

Artificial Intelligence is becoming increasingly good at hacking security systems and cracking encryption. One way this is occurring is via malware that “evolves” through machine learning algorithms: the malware learns what works through trial and error, becoming more dangerous over time.
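Stripped of anything malicious, the “trial and error” described here is the same loop used in ordinary evolutionary optimization: generate candidates, score them, keep what works, and mutate. The entirely benign toy Python sketch below shows that loop maximizing a harmless numeric objective; nothing in it relates to real malware.

    # Benign toy example of learning what works through trial and error:
    # an evolutionary loop that mutates candidates and keeps the best ones.
    import random

    def fitness(candidate):
        # Toy objective: how close the candidate is to a hidden target value.
        return -abs(candidate - 42.0)

    population = [random.uniform(-100, 100) for _ in range(20)]

    for generation in range(50):
        # Score every candidate and keep the better half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [s + random.gauss(0, 5) for s in survivors]

    print("Best candidate found:", max(population, key=fitness))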

Newer smart technology has been assessed as a high-risk target for this kind of attack, with the potential for bad actors to cause car crashes or gridlock. As we become increasingly reliant on internet-connected smart technology, more and more of our daily lives will be exposed to the risk of disruption.

Again, the only real solution to this danger is for anti-malware Artificial Intelligence to outperform malicious Artificial Intelligence and so protect individuals and businesses.

Autonomous Weapons

Autonomous weapons, weapons controlled by Artificial Intelligence systems rather than by human input, already exist and have done for quite some time; in effect, they are AI systems programmed to kill. Hundreds of technology experts have urged the UN to develop a way to protect humanity from the risks these weapons pose. In the wrong hands, they could easily cause mass casualties. Moreover, an Artificial Intelligence arms race could inadvertently lead to an Artificial Intelligence war that also results in mass casualties.

An Artificial Intelligence does not even need to be programmed maliciously to be dangerous: one programmed to do something beneficial can develop a destructive method for achieving its goal. This can happen whenever we fail to fully align the Artificial Intelligence’s goals with our own, which is strikingly difficult.

Militaries worldwide already have access to various fully or partially Artificial Intelligence-controlled weapon systems, such as military drones, and with facial recognition software a drone can track an individual.

Deepfakes, Fake News, and Political Security

Facial reconstruction software is becoming increasingly difficult to distinguish from reality.

The danger of deepfakes is already affecting celebrities and world leaders, and it is only a matter of time before it trickles down to ordinary people. Scammers are already blackmailing people with deepfake videos created from something as simple and accessible as a Facebook profile picture.

Artificial Intelligence can recreate and edit photos, compose text, clone voices, and automatically produce highly targeted advertising.

Robots can carry out tasks that are dangerous for humans to perform, such as lifting or moving heavy objects or working with hazardous substances. There is also a new generation of wearable robotic devices that can reduce the risk of injury or aid the rehabilitation of workers who have been injured.
