AI-Controlled Drone Goes Rogue, “Kills” Human Operator In Simulated US Air Force Test

Authored by Caden Pearson via The Epoch Times

An AI-enabled drone turned on and “killed” its human operator during a simulated U.S. Air Force (USAF) test so that it could complete its mission, a U.S. Air Force colonel recently told a conference in London, according to reports.

The simulated incident was recounted by Col. Tucker Hamilton, USAF’s chief of AI Test and Operations, during his presentation at the Future Combat Air and Space Capabilities Summit in London. The conference was organized by the Royal Aeronautical Society, which shared the insights from Hamilton’s talk in a blog post.

No actual people were harmed in the simulated test, which involved the AI-controlled drone destroying simulated targets to get “points” as part of its mission, revealed Hamilton, who addressed the benefits and risks associated with more autonomous weapon systems.

The AI-enabled drone was assigned a Suppression of Enemy Air Defenses (SEAD) mission to identify and destroy Surface-to-Air Missile (SAM) sites, with the ultimate decision left to a human operator, Hamilton reportedly told the conference.

However, the AI, having been trained to prioritize SAM destruction, developed a surprising response when the human operator withheld approval to strike: it “killed” the operator so that it could carry on with its mission.

He said: “We trained the system—‘Hey, don’t kill the operator; that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This unsettling example, Hamilton said, emphasized the need to address ethics in the context of artificial intelligence, machine learning, and autonomy.

Source: Zero Hedge
