First Line of Defense for Cybersecurity: AI

The year 2017 was not a great year for cybersecurity: we saw a large number of high-profile cyberattacks, including those on Uber, Deloitte and Equifax, as well as the now-infamous WannaCry ransomware attack, and 2018 started with a bang too, with the hacking of the Winter Olympics. The frightening truth about the rise in cyberattacks is that most businesses, and the cybersecurity industry itself, are not prepared. Despite the constant flow of security updates and patches, the number of attacks continues to rise.

Beyond the lack of preparedness at the business level, the cybersecurity workforce itself is also having an incredibly hard time keeping up with demand. By 2021, there are estimated to be an astounding 3.5 million unfilled cybersecurity positions worldwide, and the current workforce is overworked, averaging 52 hours a week, which is not an ideal situation for keeping up with non-stop threats.

New generations of malware and cyberattacks can be difficult to detect with conventional cybersecurity protocols: they evolve over time, so more dynamic approaches are necessary. New AI algorithms use machine learning (ML) to adapt over time as well, making it easier to respond to these evolving cybersecurity risks.


Given the state of cybersecurity today, the implementation of AI systems into the mix can serve as a real turning point. / Image: pixabay


Another great benefit of AI systems in cybersecurity is that they will free up an enormous amount of time for tech employees. AI systems can also help by categorizing attacks based on threat level. While there is still a fair amount of work to be done here, when machine learning principles are incorporated into your systems, they can actually adapt over time, giving you a dynamic edge over cybercriminals.

Unfortunately, there will always be limits to AI, and human-machine teams will be the key to solving increasingly complex cybersecurity challenges. As our models become effective at detecting threats, bad actors will look for ways to confuse them. This is a field called adversarial machine learning, or adversarial AI.

Bad actors will study how the underlying models work and try either to confuse them (what experts call poisoning the models, or machine learning poisoning (MLP)) or to use a wide range of evasion techniques, essentially looking for ways to circumvent the models.
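To make the evasion idea concrete, here is a toy sketch in Python. It uses a naive keyword-based detector (purely illustrative; real adversarial ML targets a learned model's decision boundary rather than a word list), and shows how a trivially perturbed input can carry the same payload past the check:

```python
# Toy illustration of an evasion attack against a naive detector.
# The token list and messages are hypothetical examples.
SUSPICIOUS_TOKENS = {"invoice.exe", "password", "wire transfer"}

def naive_detector(message):
    """Flag a message if it contains any known suspicious token."""
    return any(token in message.lower() for token in SUSPICIOUS_TOKENS)

original = "Urgent: confirm the wire transfer today"
evaded = "Urgent: confirm the w1re tran5fer today"  # attacker perturbs the input

print(naive_detector(original))  # True: caught by the token list
print(naive_detector(evaded))    # False: the same payload slips past
```

The point of the sketch is the asymmetry: the defender's rule stays fixed while the attacker only needs one perturbation the rule does not cover, which is why static defenses lose ground to adaptive adversaries.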

Four Fundamental Security Practices



With all the hype surrounding AI, we tend to overlook a very important fact: the best defense against a potential AI cyberattack is rooted in maintaining a fundamental security posture that incorporates continuous monitoring, user education, diligent patch management and basic configuration controls to address vulnerabilities. Each is explained below.

1. Identifying the Patterns

AI is all about patterns. Hackers, for example, look for patterns in server and firewall configurations, use of outdated operating systems, user actions and response tactics and more. These patterns give them information about network vulnerabilities they can exploit.

Network administrators also look for patterns. In addition to scanning for patterns in the way hackers attempt intrusions, they are trying to identify potential anomalies like spikes in network traffic, irregular types of network traffic, unauthorized user logins and other red flags.

By collecting data and monitoring the state of their network under normal operating conditions, administrators can set up their systems to automatically detect when something unusual takes place — a suspicious network login, for example, or access through a known bad IP. This fundamental security approach has worked extraordinarily well in preventing more traditional types of attacks, such as malware or phishing. It can also be used very effectively in deterring AI-enabled threats.
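The baseline-plus-flag approach described above can be sketched in a few lines of Python. The traffic samples, IP addresses and z-score threshold are all illustrative assumptions, not values from the article:

```python
# Minimal sketch of baseline-driven anomaly detection: flag a traffic
# spike relative to normal operating conditions, or a known bad IP.
from statistics import mean, stdev

BASELINE_REQUESTS = [120, 130, 125, 118, 140, 122, 135, 128]  # normal per-hour counts
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}              # example (TEST-NET) addresses

def is_anomalous(request_count, source_ip, z_threshold=3.0):
    """Flag a statistical outlier in traffic volume or a known bad source IP."""
    mu, sigma = mean(BASELINE_REQUESTS), stdev(BASELINE_REQUESTS)
    z = abs(request_count - mu) / sigma
    return z > z_threshold or source_ip in KNOWN_BAD_IPS

print(is_anomalous(127, "192.0.2.10"))    # typical traffic, unknown IP -> False
print(is_anomalous(900, "192.0.2.10"))    # sudden spike -> True
print(is_anomalous(127, "203.0.113.7"))   # known bad IP -> True
```

Real monitoring systems learn far richer baselines (per-user, per-service, time-of-day), but the principle is the same: model "normal" first, then alert on deviations.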

2. Educating the Users

An organization could have the best monitoring systems in the world, but the work they do can all be undermined by a single employee clicking on the wrong email. Social engineering continues to be a large security challenge for businesses because workers can easily be tricked into clicking on suspicious attachments, emails and links.


Employees are considered by many to be the weakest link in the security chain, as evidenced by a recent survey that found that careless and untrained insiders represented the top source of security threats. / Image: pixabay

Educating users on what not to do is just as important as putting security safeguards in place. Experts agree that routine user testing reinforces training. Agencies must also develop plans that require all employees to understand their individual roles in the battle for better security. And don’t forget a response and recovery plan, so everyone knows what to do and expect when a breach occurs. Test these plans for effectiveness. Don’t wait for an exploit to find a hole in the process.

3. Patching the Holes

Hackers know when a patch is released, and in addition to trying to find a way around that patch, they will not hesitate to test whether an agency has implemented the fix. Not applying patches opens the door to potential attacks; if the hacker is using AI, those attacks can come much faster and be even more insidious.
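A patch-management program ultimately reduces to a comparison between what is installed and what is known to be fixed. The sketch below assumes we can query installed package versions as tuples; the package names and version numbers are hypothetical, not real advisories:

```python
# Minimal sketch of a patch-level check: report packages whose installed
# version is below the minimum patched version. All versions are illustrative.
PATCHED_VERSIONS = {"openssl": (3, 0, 12), "sudo": (1, 9, 15)}

def unpatched(installed):
    """Return packages whose installed version is below the patched minimum."""
    return [pkg for pkg, ver in installed.items()
            if pkg in PATCHED_VERSIONS and ver < PATCHED_VERSIONS[pkg]]

print(unpatched({"openssl": (3, 0, 8), "sudo": (1, 9, 15)}))  # ['openssl']
```

Attackers run exactly this kind of comparison in reverse, scanning for hosts still below the patched version, which is why the window between patch release and deployment is so dangerous.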

Challenges of Using AI in Cybersecurity

Challenges of using AI in cybersecurity / Image: author

4. Checking Off the Controls

The Center for Internet Security (CIS) has issued a set of controls designed to provide agencies with a checklist for better security implementations. While there are 20 actions in total, implementing at least the top five (device inventories, software tracking, security configurations, vulnerability assessments and control of administrative privileges) can eliminate roughly 85 percent of an organization's vulnerabilities.

All of these practices (monitoring, user education, patch management and adherence to CIS controls) can help agencies fortify themselves against even the most sophisticated AI attacks.
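A checklist like this lends itself to a simple self-assessment. The sketch below tracks the top five controls named above; the pass/fail status values are illustrative placeholders an agency would fill in from its own audit:

```python
# Minimal sketch of a self-assessment against the top five CIS Controls.
# The True/False statuses are hypothetical audit results, not recommendations.
TOP_FIVE_CONTROLS = {
    "Inventory of hardware devices": True,
    "Inventory and tracking of software": True,
    "Secure configurations": False,
    "Continuous vulnerability assessment": True,
    "Controlled use of administrative privileges": False,
}

def coverage_report(controls):
    """Return the implemented controls and the fraction covered."""
    done = [name for name, ok in controls.items() if ok]
    return done, len(done) / len(controls)

implemented, ratio = coverage_report(TOP_FIVE_CONTROLS)
print(f"{len(implemented)}/5 controls in place ({ratio:.0%})")
```

Even a crude tally like this makes gaps visible, which is the first step toward closing them before an attacker, AI-assisted or otherwise, finds them first.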