Cyber Security:

AI's role in fighting Malware

Malware is everywhere in cyberspace. Malevolent code is passed around widely, infecting and disrupting systems.

The battle against the evil code out there is taken on by 'malware busters': protective tools developed by security firms and passed on to the public to guard systems against malevolent code.

Artificial intelligence (AI) cannot automatically detect and resolve every potential malware or cyber-threat incident. However, because it can be used to model both bad and good behaviour, it can be used as a weapon against even the most advanced malware.

Traditionally, bad-behaviour modelling has been used to create 'signatures' of malware that can then be used to detect and remove it.

Products that use good-behaviour modelling will detect many forms of malware that a signature-based tool will miss.

Instead of the reactive security of the past, today's AI-based malware prevention solutions focus on delivering proactive security.

Their approach is built around AI models with inbuilt machine-learning capabilities. These models are trained to identify malware before it executes, without the use of signatures, frequent updates, or a cloud connection. Through machine learning they improve their understanding of the threat landscape, becoming increasingly capable of calculating the risk that executable code will do damage, and then deciding whether a file is safe to execute or needs to be quarantined.
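
To make this concrete, here is a minimal Python sketch of such a pre-execution verdict. It assumes files have already been reduced to numeric static features; the feature names, the toy training data, and the 0.5 risk threshold are all illustrative, and scikit-learn's random forest simply stands in for whatever model a real product uses.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features per file: [entropy, file size in KB, import count].
# Labels: 1 = known malware, 0 = known benign (toy training data).
X_train = np.array([
    [7.9, 210, 3],   # high entropy, few imports: typical of packed malware
    [7.6, 150, 2],
    [4.2, 820, 45],  # ordinary application profile
    [3.8, 640, 60],
])
y_train = np.array([1, 1, 0, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def decide(features):
    """Return a verdict for one file's feature vector, before it runs."""
    risk = model.predict_proba([features])[0][1]  # estimated P(malware)
    return "quarantine" if risk >= 0.5 else "execute"

print(decide([7.8, 180, 2]))   # resembles the malware rows -> 'quarantine'
print(decide([4.0, 700, 50]))  # resembles a normal application -> 'execute'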

The move from trapping malware to hunting it

Threat Trapping

For decades malware has been dealt with by 'threat trapping'.

Models of malware's bad behaviour were studied, defined, and their 'essence' was then encapsulated in 'signatures' that were programmed into software to identify known malware operating on a system.

If a malware signature was found in an object, it was deemed malicious, isolated (trapped) and the user was notified.
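
As a flavour of how this works, here is a minimal Python sketch of signature matching, assuming a 'signature' is simply a known-bad byte pattern. The first pattern and its detection name are made up; the second is a prefix of the industry-standard EICAR anti-virus test string.

# Hypothetical signature database: byte pattern -> detection name.
SIGNATURES = {
    b"\x4d\x5a\x90\x00EVIL": "Example.Trojan.A",  # made-up byte signature
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR": "EICAR-Test-File",
}

def scan(data):
    """Return the name of the first matching signature, or None if clean."""
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return name
    return None

sample = b"some bytes ... X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR ... more bytes"
verdict = scan(sample)
print(f"Malicious: {verdict}" if verdict else "No known signature matched.")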

This system required malware first to be 'discovered' (usually after it had done enough damage to become a real problem); its signature then had to be written, programmed, and uploaded into a software package that could be deployed to protect a computer system.

Without 'signatures', conventional detection tools that depend on bad-behaviour models are entirely ineffective. They therefore cannot detect advanced malware written specifically for a particular target's system: no signature, no detection!

Threat Hunting

Threat hunting uses 'good-behaviour models' to proactively search out anomalous and malicious activities that don't match the model. So instead of looking for specific, already-known 'bad guys', you are looking for 'odd happenings' that indicate some malevolent force is at work.

The idea is that by modelling 'what is good', you don't have to model 'what is bad'. Anything different from good must therefore be bad... or at least highly suspect! So you do not need to know the villains' names before you can catch them.
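
In its simplest form, a good-behaviour model can be pictured as an allowlist of what has been observed during normal operation, with anything outside it flagged as suspect. The Python sketch below illustrates the idea; the baseline entries and the event being assessed are purely illustrative.

# Hypothetical baseline learned during normal operation.
KNOWN_GOOD = {
    "processes": {"explorer.exe", "chrome.exe", "winword.exe"},
    "hosts": {"updates.example.com", "mail.example.com"},
}

def assess(event):
    """Flag anything that falls outside the good-behaviour baseline."""
    findings = []
    if event["process"] not in KNOWN_GOOD["processes"]:
        findings.append(f"unknown process: {event['process']}")
    if event["dest_host"] not in KNOWN_GOOD["hosts"]:
        findings.append(f"unseen destination: {event['dest_host']}")
    return findings or ["matches the good-behaviour model"]

# An unfamiliar program talking to an unfamiliar host: no named 'villain'
# required, the event is suspect simply because it does not match 'good'.
print(assess({"process": "cc.exe", "dest_host": "198.51.100.7"}))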

This shift from trapping to hunting, or from bad-behaviour to good-behaviour modelling, is necessary because advanced malware is so sophisticated that it can easily evade security solutions that rely on bad-behaviour models such as signatures. Malware authors are quite adept at creating single-use or limited-use malware that will never be seen by signature-creating vendors.

Malware-hunting tools detect anomalies such as the following (a simple example check is sketched after the list):

An excessive use of certain resources (such as CPU or memory)

Connections to hosts with which the target of an infection never communicated in the past

An unusual amount of data transferred to an external host

The invocation of programs, such as compilers or network exploration tools, that have never been used before

Logins at unusual times
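
As a flavour of how one of these checks might work, here is a Python sketch that flags an unusual amount of data transferred to an external host, using a simple z-score against a per-host baseline. The traffic figures and the three-standard-deviation threshold are illustrative.

import statistics

# Hypothetical baseline: daily megabytes sent to one external host last week.
baseline_mb = [12.1, 9.8, 11.4, 10.2, 12.9, 11.0, 10.5]
mean = statistics.mean(baseline_mb)
stdev = statistics.stdev(baseline_mb)

def is_anomalous(observed_mb, threshold=3.0):
    """True if today's transfer sits more than `threshold` std devs above normal."""
    return (observed_mb - mean) / stdev > threshold

print(is_anomalous(11.5))   # within the normal range -> False
print(is_anomalous(480.0))  # a possible exfiltration -> True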

This new approach is promising, but a lot of work has to be done before it can replace the old 'trap' method. Currently it is being used tentatively in conjunction with traditional forms of malware defence.

Writing in April 2021, Peter Rawlinson from Bristol University said that 'we have made ground-breaking in-roads into the detection and blocking of cyber-attacks in real-time, seeing ransomware detected within four seconds, and reducing file encryption by over 80%'.

A lot of work is currently being done in universities to explore how AI can be used more effectively to detect attacks and to combat them when they occur. In the UK, Bristol University is leading the pack on this work, with Pete Burnap, Professor of Data Science & Cybersecurity, at the forefront of research in this area.