Humans can no longer deal with cyber attacks single-handedly. This view is widely shared among cybersecurity professionals, and for several reasons: the volume of attacks, their constant mutations, the speed of reaction they require, and the shortage of expertise on the market. Artificial intelligence (AI), already widely used in fraud prevention, increasingly appears to be a major “game changer” in cybersecurity, in particular in defensive cyber warfare. The recent Villani report, in fact, names defence as one of the four strategic sectors for artificial intelligence. Conversely, the generalisation of artificial intelligence, including in fully automated weapon systems, will soon raise cybersecurity issues of its own, as the technology can be misused for malicious purposes.
What role for AI in cybersecurity?
In just a few years, artificial intelligence has become a marketing buzzword among cybersecurity vendors. The strategy has paid off: one third of companies state they are using AI-based security solutions. However, this figure hides very diverse realities, with some solutions relying more on engines made of sophisticated rules than on genuine AI features. Strictly speaking, to qualify as artificial intelligence a system needs: 1) a capacity to perceive its environment through training, whether supervised or not; 2) a capacity for analysis and problem solving; 3) a capacity to propose actions, or even to make decisions autonomously.
Theoretically, AI could greatly contribute to cybersecurity in terms of prevention, anticipation, detection and reaction. In practice, detection of vulnerabilities and of internal or external threats is one of the most mature uses of AI. Action is needed quickly, since existing signature-based detection systems are showing their limits: a high number of false positives; an inability to adapt to the latest threats, especially APTs; and cumbersome signature databases, which degrade performance. Various actors such as iTrust in France (with its Reveelium solution), Darktrace in the United Kingdom, or Cylance (an American company that recently opened a branch in France) specialise in developing AI-based solutions for anomaly detection and behaviour analysis. For their part, most network and endpoint security companies (Symantec, Sophos, F-Secure, SentinelOne, Fortinet, Palo Alto Networks…) have integrated more or less advanced AI building blocks into their solutions, sometimes by acquiring small specialised actors (such as Invincea, bought by Sophos, or RedOwl, bought by Forcepoint in 2017).
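The behaviour-based detection that these vendors build on can be reduced to a simple idea: instead of matching known signatures, the system learns each asset’s normal activity and flags strong deviations from it. A minimal sketch, with entirely hypothetical hosts and event counts:

```python
from statistics import mean, stdev

def detect_anomalies(history, current, threshold=3.0):
    """Flag hosts whose current event count deviates strongly
    from their own historical baseline (a simple z-score test)."""
    anomalies = []
    for host, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # no variability recorded; skip rather than divide by zero
        z = (current[host] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((host, round(z, 1)))
    return anomalies

# Hypothetical baselines: daily failed-login counts per workstation.
history = {
    "workstation-12": [40, 42, 38, 41, 39],
    "workstation-07": [55, 60, 58, 57, 61],
}
current = {"workstation-12": 410, "workstation-07": 59}  # -12 spikes tenfold
print(detect_anomalies(history, current))
```

Real products replace the z-score with far richer models (clustering, recurrent networks over event sequences), but the principle is the same: the baseline is learned, not written as a signature.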
Beyond detection, incident response is also deeply affected by this trend. The idea is to multiply the efficiency of SOCs and CSIRTs by granting ever greater intelligence to SIEMs. Splunk announced a few days ago the acquisition of Phantom Cyber, an expert in automation and orchestration of incident response. IBM, for its part, has incorporated its Watson module into QRadar and now offers a “cognitive security” option that processes structured data (logs, for instance) and unstructured data (experts’ opinions, social networks…) in a combined way.
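At its core, this orchestration maps incoming alerts to pre-approved response playbooks and reserves human judgement for the ambiguous or critical cases. A toy sketch of the idea (alert types, action names and severity levels are all hypothetical, not the API of any actual SOAR product):

```python
# Hypothetical playbooks: each alert type maps to an ordered list of
# automated response actions.
PLAYBOOKS = {
    "malware": ["isolate_host", "collect_memory_image", "open_ticket"],
    "phishing": ["quarantine_email", "block_sender", "notify_user"],
    "brute_force": ["block_ip", "force_password_reset", "open_ticket"],
}

def triage(alert):
    """Return the automated actions for an alert, escalating to a human
    analyst when no playbook matches or the severity is critical."""
    actions = PLAYBOOKS.get(alert["type"])
    if actions is None or alert.get("severity") == "critical":
        return ["escalate_to_analyst"]
    return actions

print(triage({"type": "phishing", "severity": "medium"}))
# → ['quarantine_email', 'block_sender', 'notify_user']
print(triage({"type": "zero_day", "severity": "high"}))
# → ['escalate_to_analyst']
```

Where AI comes in is in replacing the static lookup with learned classification of alerts and learned prioritisation, so that the playbook fires on patterns no one wrote down explicitly.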
To these various defensive uses we should add the possibility of using AI to authenticate users from a footprint established through the analysis of their own behaviour (see DARPA’s Active Authentication programme).
Main uses of artificial intelligence in cybersecurity
| Phase | Use | Examples |
|---|---|---|
| Prevention | Code safety | Programming support; automatic bug fixing. |
| Prevention | Cyber-resilience | Auto-adaptive systems able to reconfigure themselves automatically when attacked. |
| Anticipation | Cyber threat intelligence | Data leak prevention; analysis and characterisation of past attacks; monitoring of potential attackers. |
| Anticipation | Cognitive cybersecurity | Aggregation and processing of unstructured data (experts’ opinions, social networks…) and structured data (logs) to support security teams. |
| Detection | Vulnerability detection | Automated penetration testing; attack simulation; flaw detection in software. |
| Detection | Internal or external threat detection | Anomaly detection from behaviour analysis; anti-APT; log analysis; fraud prevention. |
| Reaction | Incident response | Automation and orchestration of incident response (incident analysis; implementation of counter-measures; content filtering; evidence collection…). |
| Reaction | Identification of attacks | Identification of the perpetrators of a cyber attack. |
Beyond the technical layers of cyberspace, artificial intelligence can finally play an ambivalent role on the semantic layer, since it allows fake news to be produced on an industrial scale – as demonstrated by the fake Barack Obama speech produced by the University of Washington – while also facilitating its detection. DARPA has just launched a media forensics programme to certify information.
Artificial intelligence will therefore progressively permeate all cybersecurity technologies and processes. A good example of this is the Cyber Grand Challenge, organised by DARPA during DEFCON 2016, where various AIs competed to detect and patch flaws in an entirely automated way.
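The simplest ancestor of such automated flaw discovery is fuzzing: throwing large volumes of random input at a program and recording what makes it crash. A toy sketch, where the “vulnerable” parser and its trigger condition are invented for illustration:

```python
import random

def fragile_parser(data: bytes):
    """Toy target: crashes on inputs starting with byte 0x7F,
    standing in for a real flaw an automated system must find."""
    if len(data) > 2 and data[0] == 0x7F:
        raise ValueError("malformed header")
    return len(data)

def fuzz(target, iterations=50_000, seed=1):
    """Feed random byte strings to the target and collect the
    inputs that make it raise an exception."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.getrandbits(8) for _ in range(rng.randint(0, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

print(len(fuzz(fragile_parser)) > 0)  # the random search does find the flaw
```

Cyber Grand Challenge systems went far beyond this – combining symbolic execution with guided fuzzing, and then synthesising patches – but random input generation remains the baseline every such system starts from.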
If using this technology for cyber defence purposes seems promising, its limitations should also be taken into account, and they are less technological than human (understanding AI) and psychological (accepting AI). Are we ready to let machines make decisions that may have severe implications? While behaviour detection or code analysis do not seem problematic, content filtering, IP address blocking, and especially the attribution of attacks are decisions that involve real commitment. Generally speaking, artificial intelligence cannot replace human intelligence; its mission is mostly to enhance it. This implies that the technology is not a black box: users must be able to follow the various stages of reasoning and understand the decision. This is the essential prerequisite for the trust they may or may not place in the system. There is also a real risk of seeing battles between AIs seeking to weaken and deceive one another. During DEFCON 2017, researchers thus demonstrated that it was possible to use the OpenAI framework to create completely undetectable malware.
What cybersecurity for AI?
The connection between AI and cybersecurity therefore has another, negative side, linked to the misuse of the technology, which can itself be hijacked and attacked. It can first take the form of data poisoning attacks, which consist in injecting biased or poor-quality data during the training phase – Tay, Microsoft’s chatbot (or conversational robot), was one of the victims. Another method: inference attacks, which consist in forcing AIs to disclose their internal workings (thresholds, rules…) by submitting various scenarios; this method is already widely used by cybercriminals to deceive fraud prevention systems. Finally, it is possible to deceive AI systems by slightly modifying their environment, as Google researchers recently demonstrated with image recognition. These weaknesses led Adi Shamir, co-inventor of the RSA algorithm, to warn that we should at all costs avoid asking an AI system how to save the internet. The risk is high that it would first recommend killing the network in order to better save it…
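The inference attack mentioned above is easy to illustrate. Suppose a fraud detection system flags any transaction above a secret threshold: by submitting probe transactions and observing only the accept/flag responses, an attacker can recover the rule by bisection. All names and values below are hypothetical:

```python
def make_fraud_detector(hidden_threshold=742.5):
    """Black-box scoring system: flags any amount above a secret threshold.
    The attacker sees only the boolean responses, never the threshold."""
    def is_flagged(amount):
        return amount > hidden_threshold
    return is_flagged

def infer_threshold(is_flagged, low=0.0, high=10_000.0, precision=0.01):
    """Recover the secret threshold by binary search over probe
    queries -- the essence of an inference attack."""
    while high - low > precision:
        mid = (low + high) / 2
        if is_flagged(mid):
            high = mid   # threshold is at or below mid
        else:
            low = mid    # threshold is above mid
    return (low + high) / 2

detector = make_fraud_detector()
estimate = infer_threshold(detector)
print(abs(estimate - 742.5) < 0.01)  # → True: the attacker knows the rule
```

Around twenty queries suffice here; once the threshold is known, the fraudster simply keeps every transaction just below it. The same probing logic, applied to richer models, is what allows adversaries to map out the decision boundaries of real detection systems.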
At the military level, these risks are all the more worrisome in that AI will soon be omnipresent in weapon systems, which some countries envision making largely autonomous in the near future. While the United States primarily conceives of AI as a means to augment human performance, both physical and cognitive, Russia is working on the full automation of some platforms, with the aim of robotising 30% of its military equipment by 2025 in order to progressively remove human beings from the frontline. In this global competition, China is not to be left behind and is now seeking to use civil technologies as a lever for its military capabilities, with the ambition of becoming a world leader by 2030.
Artificial intelligence has therefore become a major sovereignty issue. In view of the proactive approach of its Russian, American and Chinese competitors, France has a card to play, both scientifically and with regard to available data and industrial outcomes. It is therefore a matter of creating the conditions for the security of artificial intelligence and for trust in this technology. Firstly, by investing in artificial intelligence security: this is what the Villani report recommends by suggesting to entrust the ANSSI (National Cybersecurity Agency of France) with a mission on that topic. Secondly, by defining an ethical framework: how, for instance, can we reconcile the right to be forgotten and data protection with systems that gobble up and memorise billions of data points? Thirdly and finally, by focusing efforts on a few sectors in which France is a leader. Cybersecurity is definitely one of those.
What strategy for France?
During the ‘AI for Humanity’ conference held at the Collège de France, the French President presented France’s ambitions and strategies related to AI.
Four priorities have been defined:
– To strengthen the AI ecosystem to attract the best talents;
– To develop a policy for opening up data;
– To create a regulatory and financial framework in favour of the emergence of AI champions;
– To initiate a discussion on AI regulation and ethics.