Artificial intelligence (AI) has wormed its way into every sector of our society, even the most complex, such as medicine. The combination of innovative approaches to modern AI and increasingly powerful machine learning algorithms has produced some striking solutions to previously intractable problems. In the medical world, artificial intelligence helps doctors extract valuable information from medical images that is hard to spot, or even undetectable, with the naked eye. This can provide indicators that enable early detection of severe diseases such as cancer.
Stalking evil, whether biological or digital, when it is almost invisible is one of the strengths of so-called “artificial intelligence” technologies. Hence the appearance of cases in which AI is being used for cybersecurity purposes.
Both medicine and cybersecurity are obviously far more complex than a game of chess, in particular because of their interdisciplinarity. The use of AI in these areas must thus be understood with a good dose of humility and realism. As Garry Kasparov recently remarked at WebSummit in Lisbon: “IBM scientists spent years building Deep Blue and, in the end, all it can do is play chess.”
We’re obviously a very long way from little David in Spielberg’s ‘A.I.’. It would, rather, be more appropriate to speak of augmented intelligence: a set of technologies that allows computers to do one single innovative thing: learn. Using structured and unstructured data, machines are able to build up a certain level of knowledge on a given topic. They can then begin to reason and identify relationships between different elements (such as malicious files, IP addresses, medical images, etc.). This is what constitutes AI – nothing more, nothing less.
Thanks to augmented intelligence, security will become cognitive
Cyber threats are constantly growing in number, with the global cost of cybercrime expected to reach 2 trillion dollars in 2019. Some 2.5 quintillion bytes of data are generated every day worldwide, 80% of which are unstructured. Humans alone simply do not have the capacity to secure all of this.
After the era of “fortress” approaches – namely the deployment of static defences to protect a given perimeter – we are now seeing a growing number of “security intelligence” or “threat intelligence” concepts emerge. The idea is to harness the power of computers to collect and analyse masses of data in real time. With new AI capabilities that learn from, interpret and further process the output of SIEM and other analysis systems, humans are able to focus on what they do best: making critical decisions. Cybersecurity analysts and experts can thus raise their level of performance. This can only be good news, given forecasts of a shortage of almost a million experts by 2020.
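The machine-assisted triage described here can be sketched very simply: score each alert coming out of a SIEM by a handful of weighted signals and surface only the riskiest to the human analyst. The alert fields, weights and data below are illustrative assumptions, not any real product's schema, and a weighted sum merely stands in for the learned models such tools actually use.

```python
# Minimal sketch of machine-assisted alert triage: score SIEM alerts
# so analysts see the riskiest first. Field names, weights and data
# are illustrative assumptions, not a real SIEM schema.

ALERTS = [
    {"id": 1, "severity": 3, "asset_critical": False, "repeat_count": 2},
    {"id": 2, "severity": 5, "asset_critical": True,  "repeat_count": 7},
    {"id": 3, "severity": 2, "asset_critical": True,  "repeat_count": 1},
]

def risk_score(alert):
    """Weighted sum of simple signals; a stand-in for a learned model."""
    score = alert["severity"] * 2.0                    # analyst-assigned severity
    score += 5.0 if alert["asset_critical"] else 0.0   # critical asset involved?
    score += min(alert["repeat_count"], 10) * 0.5      # repetition, capped
    return score

def triage(alerts, top_n=2):
    """Return the top_n alerts, highest risk score first."""
    return sorted(alerts, key=risk_score, reverse=True)[:top_n]

for alert in triage(ALERTS):
    print(alert["id"], risk_score(alert))
```

The point is not the arithmetic but the division of labour: the machine ranks thousands of events, and the human judges only the few that surface at the top.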
Welcome to the era of cognitive security…
Where artificial intelligence systems work hand in hand with humans to strengthen the prevention and anticipation of, and the response to, increasingly sophisticated attacks. In 2016, IBM released a study titled “Cybersecurity in the Cognitive Era: Priming Your Digital Immune System,” showing that complexity and speed were set to become the major cybersecurity challenges, requiring a daring paradigm shift.
For the moment, AI is showing its greatest potential in detecting vulnerabilities and threats. Traditional detection systems have well-known limitations: high numbers of false positives, difficulty with novel threats, cumbersome signature maintenance… Many start-ups, as well as several of the best-known software companies, have thus started to develop solutions based on augmented intelligence.
In the near future, we hope to be able to benefit from AI’s strengths in areas such as:
- software development: programming assistance, bug fixing
- cyber resilience: self-adaptive systems
- data leak prevention
- attack attribution: identifying the perpetrators of a cyberattack
- attack simulation to train operational teams
- anomaly detection, in particular to combat fraud
- incident response: automation and orchestration for quick response times
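One item on this list, anomaly detection to combat fraud, lends itself to a minimal statistical sketch: flag transactions whose amount deviates strongly from a customer's historical spending. Real systems use far richer features and learned models; the data and the standard-deviation threshold below are purely illustrative assumptions.

```python
import statistics

# Minimal anomaly-detection sketch for fraud: flag a new transaction
# whose amount lies more than `threshold` standard deviations from the
# mean of the customer's history. Data and threshold are illustrative.

def is_anomalous(amount, history, threshold=3.0):
    """True if `amount` deviates from the historical mean by more than
    `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) > threshold * stdev

history = [42.0, 39.5, 41.2, 40.8, 38.9, 43.1, 40.0]  # past card spend

print(is_anomalous(41.0, history))   # typical purchase
print(is_anomalous(950.0, history))  # suspicious spike
```

A production fraud engine would combine many such signals (merchant, geography, timing) and learn its thresholds from labelled data, but the underlying idea is the same: machines flag the statistical outliers, and humans investigate them.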
The Ponemon Institute conducted a comprehensive study on the subject, published in July 2018. Some 600 companies that have deployed artificial intelligence in their cybersecurity strategy or plan to deploy it in the near future responded, making the study relatively representative.
It demonstrated that, in 69% of cases, AI increases the speed of threat analysis and, in 64% of cases, helps contain damage faster.
The study also revealed that the companies surveyed believe that AI-based technologies allow for a greater level of security than humans are able to provide alone, as well as significantly reducing the workload of IT teams. An average saving of 39,500 hours a year! This equates to almost 2.5 million dollars… So, should we fear for our jobs? It’s hard to say, but the majority of respondents believe that technologies will never completely replace human judgement.
For all its capabilities, artificial intelligence does require some preparation before being deployed. The study reveals the most common barriers to its use. The problems of interoperability, the lack of maturity or stability of certain technologies, along with errors or inaccuracies, make up the top 3. Indeed, the key difference between a “smart” machine and a conventional machine is that the former can get things wrong.
The dark side of the force
When implemented correctly, AI is thus very promising in terms of cybersecurity, especially when it comes to coping with exponential amounts of data containing hidden threats, the forms of which are constantly evolving… And which are also endowed with intelligence, whether natural or artificial. It would, however, be too good to be true to believe that cybercriminals haven't thought of artificial intelligence themselves.
According to a report by the University of Cambridge’s “Centre for the Study of Existential Risk”, large-scale spear-phishing targeting prominent political figures was one of the first products of the malicious use of artificial intelligence. Any resemblance to actual persons or events is purely coincidental…
“It’s official, AI has arrived in cybersecurity” was the opening line of the presentation made by security data scientist Brian Wallace at the 2017 BlackHat conference. “Hackers have been using artificial intelligence as a weapon for quite some time. It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves,” he continued. How, then, can we harness the power of AI for positive ends, avoiding its dark side?
Have we escaped the sword of Damocles once again?
In the 1950s, John McCarthy and Alan Turing, the fathers of AI, believed that artificial intelligence would save humanity. Since then, efforts to make machines more and more similar to humans have probably allowed for rapprochement between the two worlds, but not always in the way those who tried to humanise machines had hoped. Those who, nowadays, have the task of managing teams of specialised computer scientists will understand what I am talking about. 😉
I thus agree with the famous astrophysicist Stephen Hawking who, in 2014, warned us against the risk that the “robots of the future” could exceed human intelligence. Elon Musk and many of his peers also share this analysis and these fears, which are now more relevant than ever. Let us therefore focus on what humans can do best: exploring, experimenting, sometimes failing and, above all, interacting, since our strength lies in collaborative intelligence, thinking creatively and outside the traditional frameworks. Today, technologies enable human beings to strengthen areas in which they are weak, and to support the development of creativity and collective intelligence. Whether smart or not, what would appear clear is that the future will be augmented. As Ray Kurzweil, futurologist and author of the book “The Singularity is Near”, foresees: “All different forms of human expression, art, science, are going to become expanded, by expanding our intelligence.”
— Pascal Steichen
Pascal Steichen is a member of the FIC Advisory Board