Large Language Models and Artificial Intelligence in Cyber Security
Large language models are being combined with other artificial intelligence techniques in cybersecurity to detect threats more effectively. This article explores the role and advantages of these models in cybersecurity.
Large Language Models and Cybersecurity
Large language models (LLMs) have pushed artificial intelligence onto boardroom agendas around the world, even though the technology has been used effectively in various forms for many years. ESET first used artificial intelligence a quarter of a century ago to improve the detection of macro viruses. Today, security teams need AI-based tools more than ever, for three main reasons:
- Skills Shortages: There is an estimated global shortfall of around four million cybersecurity professionals, including roughly 348,000 in Europe and 522,000 in North America. Operating around the clock, AI can detect patterns that security experts might overlook.
- Agile Threat Actors: While cybersecurity teams struggle to find talent, their adversaries are rapidly strengthening. The cybercrime economy is projected to cost the world $10.5 trillion annually by 2025. Threat actors can easily obtain everything they need to launch attacks through “as-a-service” offerings and toolsets.
- Increased Risks: As digital investment grows, organizations rely ever more heavily on IT systems for sustainable growth and competitive advantage. Network defenders know that if they cannot rapidly detect and contain cyber threats, they may face significant financial and reputational losses. The average cost of a data breach today is $4.45 million, and a serious ransomware breach can cost much more. For example, financial institutions alone have lost a total of $32 billion due to service disruptions since 2018.
Future Applications of Artificial Intelligence
How can security teams use artificial intelligence in the future?
- Threat Intelligence: LLM-powered GenAI assistants can simplify complex information by analyzing intricate technical reports for analysts, summarizing key points and actionable insights in plain language.
- AI Assistants: AI “co-pilots” integrated into IT systems can help organizations eliminate dangerous misconfigurations. This applies to security tools that require updates for complex settings like firewalls, as well as general IT systems like cloud platforms.
- Increasing SOC Productivity: Today’s Security Operations Centers (SOCs) are under immense pressure to quickly detect, respond to, and contain incoming threats. The expanding attack surface and the abundance of alert-generating tools can overwhelm analysts, who end up spending much of their time on false positives while real threats go unnoticed. AI can alleviate this burden by contextualizing and prioritizing alerts.
- New Discoveries: Threat actors continuously evolve their tactics, techniques, and procedures (TTPs). However, AI tools have the capability to scan for the latest threats by combining indicators of compromise (IoCs) with publicly available information and threat reports.
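As a toy illustration of the alert prioritization and IoC matching described above (not any specific product's implementation; the IoC values and alert fields are hypothetical), a minimal sketch might look like:

```python
# Toy sketch: prioritize SOC alerts by matching them against a set of
# known indicators of compromise (IoCs). Data is hypothetical.

KNOWN_IOCS = {
    "198.51.100.7",         # example C2 server IP (documentation range)
    "evil-update.example",  # example phishing domain
}

def prioritize(alerts):
    """Sort alerts so those touching known IoCs rank first."""
    def score(alert):
        base = {"low": 1, "medium": 2, "high": 3}[alert["severity"]]
        # Context boost: a match against threat intelligence outweighs
        # raw severity alone, pushing the alert to the top of the queue.
        if alert["indicator"] in KNOWN_IOCS:
            base += 10
        return base
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"id": 1, "severity": "high", "indicator": "10.0.0.5"},
    {"id": 2, "severity": "low", "indicator": "198.51.100.7"},
]
print([a["id"] for a in prioritize(alerts)])  # IoC match ranks first: [2, 1]
```

A real system would enrich alerts from live threat feeds and weigh many more signals, but the core idea is the same: context from threat intelligence, not severity alone, drives the triage order.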
Applications of Artificial Intelligence in Attacks
How is AI used in cyber attacks?
- Social Engineering: One of the most notable applications of GenAI is helping threat actors create large-scale, highly convincing, and grammatically perfect phishing campaigns.
- Business Email Compromise (BEC): GenAI technology can deceive victims by mimicking the writing style of a specific individual or corporate identity, leading to money transfers or the delivery of sensitive data. Deepfake voice and video technologies can also be used for this purpose.
- Disinformation: GenAI can significantly lighten the load of the content creation process for influence operations. A recent report has warned that Russia is already using such tactics, and it is noted that if these tactics succeed, they could be widely repeated.
Limitations of Artificial Intelligence
Artificial intelligence has both positive and negative aspects, and it currently has real limitations. It can yield high false positive rates and requires high-quality training data to be effective. Human oversight is often necessary to verify outputs and to support model training. All of this shows that AI is not a magical solution for either attackers or defenders.