July 31, 2018
Turns out the public concern over AI growth was not unfounded. A CNBC report compiled insights from cybersecurity experts and drew a clear conclusion: the malicious use of AI could turn automated machines like drones into weapons, and the US is not prepared for it.
The threat was first spelled out in February, when researchers from the University of Oxford and the University of Cambridge published a report titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’. It explained how advanced machine learning could enable bad actors to hijack autonomous drones and turn them into potential weapons.
Guy Caspi, CEO of the cybersecurity startup Deep Instinct, told CNBC that “such attacks, which seem like science fiction today, might become reality in the next few years”. On the capabilities of automated vehicles, he added: “Autonomous cars like Google’s (Waymo) are already using deep learning, can already evade obstacles in the real world, so evading a traditional anti-malware system in the cyber domain is possible.”
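Caspi’s point rests on adversarial machine learning: the same gradient-based optimisation that lets a model recognise objects can be run in reverse to craft inputs a classifier misjudges. Here is a minimal sketch of that idea (not Deep Instinct’s method; the “detector” below is a toy logistic-regression stand-in and all numbers are made up):

```python
import numpy as np

# Toy stand-in for a malware detector: logistic regression over a
# 20-dimensional feature vector. Purely illustrative -- real detectors
# are deep networks over far richer features.
rng = np.random.default_rng(0)
w = rng.normal(size=20)   # detector weights
b = -0.5                  # detector bias

def detect(x):
    """Probability that sample x is malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A "malicious" sample aligned with the detector's weights, so it is flagged.
x = 0.3 * w
print(f"before evasion: p(malicious) = {detect(x):.3f}")

# Fast-gradient-style evasion: the logit (w @ x + b) has gradient w with
# respect to x, and the sigmoid is monotone, so stepping each feature
# against sign(w) by a bounded amount epsilon lowers the detection score.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print(f"after evasion:  p(malicious) = {detect(x_adv):.3f}")
```

A single small perturbation, aimed along the detector’s own gradient, flips a confidently flagged sample into a confidently cleared one — the kind of automated evasion Caspi is warning about, only at scale.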
It is therefore deeply concerning how far behind US federal agencies are on even basic cybersecurity practices. Secretary of Homeland Security Kirstjen M. Nielsen underlined the gravity of the issue: “For years, the Department of Homeland Security has worried about the dangers of unmanned aerial systems, and we have sought the legal authority to protect Americans against corrupted aerial devices. Today I have a pressing message for Congress: Time is running out.”
In an op-ed published in The Washington Post, she wrote that there has been an unprecedented ‘spike in the use of drones at our [the US] borders.’ She believes transnational criminals probing for security gaps may be behind these incidents. Nor is the use of drones in criminal activity anything new. Nielsen wrote, “Last year, Border Patrol agents arrested a 25-year-old man for using a drone to ferry tens of thousands of dollars’ worth of methamphetamine across the southwest border, raising the spectre that other dangerous goods could be transported the same way.”
And it gets worse: officials have reported that Islamic State fighters had already begun using bomb-laden drones on the battlefield during the fighting in Raqqa in recent years.
Though the situation looks grim, it is not as if weaponised automated drones will bring about doomsday, or that no one is working on solutions. Vodafone, EASA and other partners announced the world’s first drone tracking and safety system back in February. And since AI has far more positive sides (like saving lives in nuclear disasters), curbing its growth with stricter policies and a blanket freedom to shoot down any drone over civilian areas is not the answer. Instead, attention should be paid to defining clear procedures and considerations, as a 2015 presidential memorandum already required of DHS.
Nielsen also suggests amending laws that were written before such devices posed any threat. She proposes: “DHS should be able to access signals being transmitted between a nefarious drone and its ground controller to accurately geolocate both quickly. This could allow authorities to take control of the device or stop its operator on the ground to prevent a potential attack.” Doing so would mean revising the legal constraints that currently also prohibit the department from testing drone-defence technologies.
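For a sense of what “geolocate both quickly” could involve, one standard technique (not necessarily what DHS has in mind) is multilateration from time-difference-of-arrival (TDOA) measurements taken by several receivers listening to the same transmission. A minimal, idealised sketch, with made-up receiver positions and noiseless measurements:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Known receiver positions in metres (made-up values for illustration).
receivers = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1200.0, 900.0]])

# True transmitter position (unknown to the solver; used only to
# synthesise the measurements below).
truth = np.array([400.0, 650.0])

# Ideal TDOA of each receiver relative to receiver 0, converted to
# range differences in metres (range difference = TDOA * C).
dist = np.linalg.norm(receivers - truth, axis=1)
measured_rdoa = dist - dist[0]

def residuals(p):
    """Predicted-minus-measured range differences at candidate point p."""
    d = np.linalg.norm(receivers - p, axis=1)
    return (d - d[0]) - measured_rdoa

# Gauss-Newton refinement from a rough initial guess.
p = np.array([100.0, 100.0])
for _ in range(20):
    d = np.linalg.norm(receivers - p, axis=1)
    u = (p - receivers) / d[:, None]   # unit vectors from each receiver to p
    J = u - u[0]                       # Jacobian of the residuals
    step, *_ = np.linalg.lstsq(J, -residuals(p), rcond=None)
    p = p + step

print(f"estimated transmitter position: {p.round(1)} m (truth: {truth})")
```

With noisy measurements and a moving transmitter this becomes a filtering problem, but the core geometry is the same: each TDOA confines the transmitter to a hyperbola, and the intersection of several hyperbolas yields its position.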
It is clear that stepping up, both technologically and legally, has become the need of the hour if the US is to tackle today’s cybersecurity threats.
Resources:
‘The Malicious Use of Artificial Intelligence’ report: https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf
CNBC (20 July 2018): https://www.cnbc.com/2018/07/20/ai-cyberattacks-artificial-intelligence-threatens-cybersecurity.html