According to Norton, the average cost of recovering from a data breach is estimated at $3.86 million. The same Norton research found that it can take companies, on average, 196 days to identify a data breach. This is where Artificial Intelligence (AI) comes in. AI provides insights that help companies understand threats. These insights can help reduce response times and help companies comply with security best practices.
What Is Machine Learning and How Is it Used in Cybersecurity?
Machine learning (ML) is AI's brain: a type of algorithm that enables computers to analyze data, learn from past experiences, and make decisions, all in a way that resembles human behavior.
Machine learning algorithms in cybersecurity can automatically detect and analyze security incidents. Some can even automatically respond to threats. Many modern security tools, like threat intelligence, already utilize machine learning.
There are many machine learning algorithms, but most of them perform one of the following tasks:
Regression—detects correlations between different datasets and understands how they are related to each other. For example, you can use regression to predict operating system calls, and then identify anomalies by comparing the prediction to the actual call.
Clustering—identifies similarities between datasets and groups them based on their common features. Clustering works directly on new data without considering previous examples.
Classification—learns from previous observations and applies what it has learned to new, unseen data. Classification involves taking artifacts and assigning them to one of several labels. For example, a classifier might label a binary file as legitimate software, adware, ransomware, or spyware.
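As a toy illustration of the classification task described above, the sketch below labels a binary file using a k-nearest-neighbour vote over two hypothetical features (byte entropy and fraction of packed sections). The feature values and labels are invented for illustration; real systems train on far richer feature sets.

```python
import math

# Toy training set: (features, label). Features are hypothetical
# per-file measurements: [byte entropy, fraction of packed sections].
TRAINING = [
    ([2.1, 0.0], "legitimate"),
    ([2.5, 0.1], "legitimate"),
    ([7.6, 0.9], "ransomware"),
    ([7.9, 0.8], "ransomware"),
]

def classify(features, k=3):
    """k-nearest-neighbour classification: majority label of the
    k training examples closest to the input."""
    dists = sorted(
        (math.dist(features, f), label) for f, label in TRAINING
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# A high-entropy, mostly packed file lands near the ransomware examples.
print(classify([7.7, 0.85]))  # ransomware
```

The same structure scales to real feature vectors; only the training data and the distance metric change.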
The Impact of AI on Cybersecurity
While artificial intelligence can improve security, the same technology can give cybercriminals access to systems with no human intervention. The list below explains the good news about AI's impact on cybersecurity.
Vulnerability management
Organizations struggle to manage and prioritize the large number of new vulnerabilities they discover daily. Conventional vulnerability management techniques respond to incidents only after hackers have already exploited a vulnerability.
AI and machine learning techniques can improve the vulnerability management capabilities of vulnerability databases. Additionally, tools like user and event behavior analytics (UEBA), when powered by AI, can analyze user behavior on servers and endpoints, and then detect anomalies that might indicate an unknown attack. This can help protect organizations even before vulnerabilities are officially reported and patched.
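A minimal sketch of the anomaly-detection idea behind UEBA-style tools: flag an observation that deviates sharply from an account's historical baseline. The baseline numbers, metric (failed logins per hour), and threshold below are invented for illustration; production tools learn far more nuanced behavioral models.

```python
import statistics

# Hypothetical baseline: failed-login counts per hour observed
# for one account over recent days.
baseline = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2]

def is_anomalous(observed, history, threshold=3.0):
    """Flag an observation whose z-score against the historical
    mean and standard deviation exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (observed - mean) / stdev
    return z > threshold

print(is_anomalous(40, baseline))  # burst of failed logins -> True
print(is_anomalous(3, baseline))   # within normal range -> False
```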
Threat hunting
Conventional security tools use signatures or attack indicators to identify threats. This technique can easily identify previously discovered threats. However, signature-based tools cannot detect threats that have not yet been discovered. In fact, they can identify only about 90 percent of threats.
AI can increase the detection rate of traditional techniques to up to 95 percent. The problem is that you then get multiple false positives. The best option is a combination of AI and traditional methods. This merger of the conventional and the innovative can increase detection rates by up to 100 percent while minimizing false positives.
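The hybrid approach can be sketched as follows: a signature set catches known threats on the conventional path, while a scoring function covers what signatures miss. The signature hash, feature names, and the trivial weighted heuristic standing in for a trained model are all invented for illustration.

```python
# Hypothetical signature database (hashes of known-bad files).
KNOWN_SIGNATURES = {"9f86d081884c7d659a2feaa0c55ad015"}

def heuristic_score(features):
    """Stand-in for a trained model: weight suspicious features.
    Weights here are arbitrary, for illustration only."""
    return 0.6 * features["entropy_high"] + 0.4 * features["calls_network"]

def detect(file_hash, features, threshold=0.5):
    """Hybrid detection: signature match first, model score second."""
    if file_hash in KNOWN_SIGNATURES:           # conventional path
        return "known-threat"
    if heuristic_score(features) >= threshold:  # AI-assisted path
        return "suspected-threat"
    return "clean"

# An unknown hash with suspicious features is still caught.
print(detect("aaaa", {"entropy_high": 1, "calls_network": 1}))
```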
AI can also improve threat hunting by integrating behavior analysis. For instance, you can develop profiles of every application inside your organization’s network by analyzing data from endpoints.
Network security
Conventional network security techniques focus on two main aspects: creating security policies and understanding the network environment. Here are some aspects to consider:
Policies—security policies can help you distinguish between legitimate and malicious network connections. Policies can also enforce a zero-trust model. However, creating and maintaining policies for a large number of networks can be challenging.
Environment—most organizations don’t have precise naming conventions for applications and workloads. As a result, security teams have to spend a lot of time determining which set of workloads belongs to a given application.
AI can enhance network security by learning the patterns of network traffic and recommending both security policies and functional workload grouping.
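One way to picture the workload-grouping idea is as clustering on traffic profiles: workloads with similar traffic patterns likely belong to the same application. The sketch below uses a greedy distance-threshold grouping over two hypothetical per-workload metrics; the names, numbers, and threshold are invented for illustration.

```python
import math

# Hypothetical per-workload traffic profiles:
# (average packets/sec, average bytes/packet).
flows = {
    "web-1": (120.0, 900.0),
    "web-2": (130.0, 880.0),
    "db-1":  (15.0, 4000.0),
    "db-2":  (18.0, 4100.0),
}

def group_workloads(profiles, max_distance=200.0):
    """Greedy single-linkage grouping: a workload joins a group if it
    is within max_distance of any member; otherwise it starts a new
    group."""
    groups = []
    for name, vec in profiles.items():
        for group in groups:
            if any(math.dist(vec, profiles[m]) <= max_distance for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group_workloads(flows))  # [['web-1', 'web-2'], ['db-1', 'db-2']]
```

The recovered groups can then seed per-application security policies instead of hand-maintained naming conventions.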
Data centers
AI can monitor and optimize critical data center processes like power consumption, backup power, internal temperatures, bandwidth usage, and cooling filters. AI provides insights into which values can improve the security and effectiveness of data center infrastructure.
You can also use AI to reduce maintenance costs. AI can trigger alerts that tell you when you need to attend to a hardware failure, enabling you to fix equipment before further damage occurs. Google reported a 15 percent reduction in power consumption and a 40 percent reduction in cooling costs in its data centers after implementing AI technology back in 2016.
AI Applications in Cybersecurity: Real-Life Examples
Machine learning can quickly scan large amounts of data and analyze it using statistics. Modern organizations generate huge amounts of data, so it's no wonder the technology is such a useful tool.
Security screening
Security screening by immigration officers and customs agents can detect people who are lying about their intentions. However, human-based screening is prone to error, because people get tired and are easily distracted.
The United States Department of Homeland Security has developed a system called AVATAR that screens body gestures and facial expressions of people. AVATAR leverages AI and Big Data to pick up small variations of facial expressions and body gestures that may raise suspicion.
The system presents a screen with a virtual face that asks travelers questions. It monitors changes in their answers as well as differences in their voice tone. The collected data is compared against indicators that suggest someone might be lying, and passengers considered suspicious are flagged for further inspection.
Security and crime prevention
The Computer Statistics (CompStat) AI system has been in use by the New York City Police Department since 1995. CompStat is an early form of AI that encompasses organizational management and philosophy while relying on a range of software tools. The system was the first tool used for “predictive policing,” and many police departments across the U.S. have been using CompStat to investigate crimes ever since.
AI-based crime analysis tools like California-based Armorway use AI and game theory to predict terrorist threats. The Coast Guard also uses Armorway for port security in Los Angeles, Boston, and New York.
Analyzing mobile endpoints
Google is using AI to analyze mobile endpoint threats. Organizations can use this analysis to protect the growing number of personal mobile devices.
Zimperium and MobileIron announced a collaboration to help organizations adopt mobile anti-malware solutions incorporating artificial intelligence. The integration of Zimperium’s AI-based threat detection with MobileIron’s compliance and security engine can address challenges like network, device, and application threats.
Other vendors that offer mobile security solutions include Skycure, Lookout, and Wandera. Each vendor uses its own AI algorithm to detect potential threats.
AI-powered threat detection
Commodities trader ED&F Man Holdings experienced a security incident several years ago. An independent assessment indicated that the company needed to improve its cybersecurity processes and tools.
The company looked to Cognito, Vectra's AI-based threat detection and response platform. Cognito collects and stores network metadata and enriches it with unique security insights. It uses this metadata along with machine learning techniques to detect and prioritize attacks in real time.
Cognito helped ED&F Man Holdings detect and block multiple man-in-the-middle attacks and halt a cryptomining scheme in Asia. Moreover, Cognito found command-and-control malware that had been hiding for several years.
Detection of sophisticated cyber-attacks
Energy Saving Trust is an organization striving to reduce carbon emissions in the U.K. by 80 percent by 2050. The organization was looking for innovative cybersecurity technology to strengthen its overall cyber defense strategy, including defending its critical assets, such as intellectual property and sensitive client data, from sophisticated cyber-attacks.
After careful evaluation, the company decided to focus on Darktrace’s Enterprise Immune System. Darktrace’s platform is based on machine learning technology. The platform models the behaviors of every device, user, and network to learn specific patterns. Darktrace automatically identifies any anomalous behavior and alerts the company in real time.
Energy Saving Trust was able to detect numerous anomalous activities as soon as they occurred and alert the security team to carry out further investigations, mitigating any risk before real damage was done.
Reducing Threat Response Time
A global bank faced sophisticated cyber threats and advanced attacks. The bank needed to improve its threat detection and response. The existing solution could not effectively detect and mitigate new generations of threats.
The bank’s security team deployed Paladon's AI-based Managed Detection and Response (MDR) service. Paladon’s threat hunting service is based on data science and machine learning capabilities.
The service enhanced the bank’s ability to detect and respond to advanced attacks, including data exfiltration, advanced targeted attacks, ransomware, malware, zero-day attacks, social engineering, and encrypted attacks.
Drawbacks and Limitations of Using AI for Cybersecurity
AI technology presents some limitations that prevent it from becoming a mainstream security tool.
Resources—building and maintaining an AI system requires lots of resources, including data, memory, and computing power.
Data sets—security companies need many different data sets of anomalies and malware samples to train an AI system. Getting accurate data sets can require resources and time that some companies cannot afford.
Hackers also use AI—to improve and enhance their malware. AI-based malware can be extremely dangerous because it can develop more advanced attacks by learning from existing AI tools.
Neural fuzzing—is used to detect software vulnerabilities by testing large amounts of random input data. A threat actor can combine neural fuzzing with neural networks to gather information about a target software or system and learn its weaknesses.
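Plain (non-neural) fuzzing can be sketched in a few lines: feed mutated inputs to a parser and record the ones that trigger unexpected crashes. Neural fuzzing replaces the blind mutation strategy with a model that learns which inputs are most likely to expose weaknesses. The parser below is a hypothetical toy with a deliberate off-by-one bug.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser under test: expects a 4-byte magic then a length
    byte. Truncated input raises IndexError -- the 'crash'."""
    if data[:4] != b"FUZZ":
        raise ValueError("bad magic")
    return data[4]

def fuzz(iterations=1000, seed=0):
    """Classic random fuzzing: mutate a valid sample (here, by
    truncation) and record inputs that crash the parser."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        sample = bytearray(b"FUZZ\x10")
        sample = sample[: rng.randint(0, len(sample))]  # random truncation
        try:
            parse_header(bytes(sample))
        except ValueError:
            pass  # expected, handled error
        except IndexError as exc:
            crashes.append((bytes(sample), exc))  # unexpected crash
    return crashes

print(len(fuzz()) > 0)  # truncated inputs crash the parser -> True
```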
Conclusion
The impact of AI on our lives will continue to grow as more technology is integrated into everyday life. Some experts believe that AI has a negative effect on technology, but others claim that AI can greatly improve our lives. For cybersecurity, the main benefits focus around faster analysis and mitigation of threats. Concerns focus on the ability of hackers to deploy more sophisticated cyber and technology-based attacks.
Eddie Segal is an electronics engineer with a Master’s Degree from Be’er Sheva University, a big data and web analytics specialist, and also a technology writer. In his writing, Eddie covers subjects ranging from cloud computing to agile development to cybersecurity and deep learning.