AI applications may up cyber risks: report

By Kgaogelo Letsebe, Portals journalist
Johannesburg, 26 Mar 2018
Vulnerability to malicious cyber-attacks or technical failure will increase as AI applications become more prevalent, says the AGCS report.

The widespread implementation of artificial intelligence (AI) applications may increase the vulnerability of businesses to cyber attacks and technical failure. This is according to a new report from global insurer Allianz Global Corporate & Specialty (AGCS).

Titled "The Rise of Artificial Intelligence: Future Outlook and Emerging Risks", the report highlights that although the widespread implementation of AI-based applications brings many advantages for businesses, such as improved efficiencies, fewer repetitive tasks and better customer experiences, it also heightens the potential for larger-scale disruptions.

AI, the report said, spans applications in almost every industry and has been predicted to increase corporate profitability in 16 industries across 12 economies by an average of 38% by 2035.

"Chatbots, autonomous vehicles, and connected machines in digital factories foreshadow what the future will look like. AI comes with potential benefits and risks in many areas: economic, political, mobility, healthcare, defence and the environment. However, in the wrong hands, the potential threats could easily counterbalance the huge benefits," says Michael Bruch, Head of Emerging Trends at AGCS.

"Vulnerability to malicious cyber-attacks or technical failure will increase, as societies and economies become increasingly interconnected. Companies will also face new liability scenarios as responsibility for decision-making shifts from human to machine and manufacturer."

According to the report, five areas will play a crucial role in identifying the threats brought on by deploying AI applications: software accessibility, safety, accountability, liability and ethics. Bruch explains that addressing each of these areas makes the responsible development and introduction of AI less hazardous for society.

"Preventive measures that reduce risks from unintended consequences are essential. In terms of safety, for example, the race for bringing AI systems to the market could lead to insufficient or negligent validation activities, which are necessary to guarantee the deployment of safe, functional and cyber-secure AI agents. This, in turn, could lead to an increase in defective products and recalls.

"With regard to liability, AI agents may take over many decisions from humans in future, but they cannot legally be held liable for those decisions. In general, the manufacturer or software programmer of AI agents is liable for defects that cause damages to users.

"However, AI decisions that are not directly related to design or manufacturing, but are taken by an AI agent because of its interpretation of reality, would have no explicit liable party, according to current law. Leaving the decisions to courts may be expensive and inefficient if the number of AI-generated damages start increasing," adds Bruch.

"A solution to the lack of legal lability would be to establish expert agencies or authorities to develop a liability framework under which designers, manufacturers or sellers of AI products would be subject to limited tort liability."

Similarly, analysis from global risk consultancy Control Risks indicates that developing techniques and tools that use artificial intelligence to enhance their capabilities is increasingly on the agenda of cyber-threat actors.

"As more organisations begin to employ machine learning and artificial intelligence as part of their defences against cyber threats, hackers are recognising the need to advance their skills to keep up with this development. Although the uptake of AI in businesses is slow - as more research is produced and AI technologies become more mature and more accessible to threat actors, this will evolve."

"AI-powered software could help to reduce cyber risk for companies by better detecting attacks, but could also increase it if malicious hackers are able to take control of systems, machines or vehicles. AI could enable more serious and more targeted cyber incidents to occur by lowering the cost of devising attacks. The same hacker attack - or programming error - could be replicated on numerous machines. It is already estimated that a major global cyber-attack has the potential to trigger losses in excess of $50 billion but even a half-day outage at a cloud service provider has the potential to generate losses around $850 million."
