Strengthening Security Against AI-Powered Attacks

Generative AI (GenAI) technologies, such as ChatGPT, have demonstrated substantial advantages across multiple industries, including cybersecurity. These AI models are becoming integral to every phase of the cyberattack lifecycle, from gathering open-source intelligence to exploitation and command-and-control. Using GenAI, malicious actors can swiftly analyze reconnaissance data, identify system flaws and develop evasive malware. Given this, developing cutting-edge security solutions is critical to defending against the increasing risks of AI-enabled cyberattacks. The growing use of AI in cybercrime underscores the need for a preemptive defense strategy in our digital environment. The World Economic Forum cited adverse outcomes of AI technologies and cyber insecurity among the top ten risks for 2024. In its report “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” NIST aims to support the development of trustworthy AI, highlighting the attacks and methodologies that users, developers and AI systems can expect to face, along with mitigation strategies.

While the benefits of GenAI are well-documented, its cybersecurity implications, and how to address them, are often afterthoughts, receiving less attention than they deserve. To raise awareness of this issue and the growing global ransomware threat, the NCSC released its 2023 risk report. Per the report, AI is likely to lower the barrier to entry for less experienced threat actors, aiding in information gathering, exploit development, ransomware and social engineering. Organizations must take proactive steps to protect themselves in this new threat landscape, characterized by low barriers to entry. Understanding AI-powered attacks and how they differ from traditional threats is the first crucial step, and it paves the way for security measures that integrate the same AI innovations adversaries are leveraging in the wild. These measures will require a blend of AI systems working with human expertise to establish a robust security environment for an organization.

AI-powered attacks distinguish themselves from traditional threats by leveraging artificial intelligence and machine learning to create increasingly sophisticated, targeted and dynamic threats against their victims. The most significant impact of recent AI developments on cybersecurity is the speed and scale that AI offers to threat actors. While there is currently no foolproof way to determine whether a cyberattack involves AI, it’s important to remember that these tools optimize threat actors’ workflows and enable even less technically skilled individuals to undertake sophisticated attacks. AI-driven bots and algorithms can operate tirelessly, 24/7, and persistently target victims without respite. The automation, rapid execution and scalability of these AI-driven bots empower attackers to overwhelm systems, reduce defenders’ response times, and conduct highly targeted and persistent attacks.

Developing equally advanced security measures is imperative to effectively counter the growing threat of AI-powered cyberattacks. Some of the security measures that organizations should consider include, but are not limited to the following:

  • User and entity behavior analytics (UEBA): Employ UEBA systems that utilize machine learning to build baseline behavioral profiles of users and entities. These baselines can then be compared against suspicious activities or unauthorized access attempts as they arise. This technology currently exists in most modern enterprise security solutions, such as Security Information and Event Management (SIEM), Endpoint Detection and Response (EDR), Extended Detection and Response (XDR) and Identity & Access Management (IAM).
  • Advanced threat detection systems: Implement AI-driven threat detection systems capable of identifying AI-powered attack patterns. As with UEBA, such a system can detect anomalous device or user behaviors, such as unusual login times or access requests, which may be indicative of AI-assisted attacks. These systems can be deployed at various layers (application, network, host, etc.) of the technology security stack, depending on the organization’s requirements.
  • Human-AI collaboration: In the age of AI, the most secure organizations will be those that harness the benefits of both AI and human ingenuity. These organizations foster collaboration between human security experts and AI systems to augment threat analysis and response. A security analyst can work alongside AI algorithms to investigate suspicious incidents and adapt defenses in real time.
  • Continuous monitoring: Continuously monitor network traffic, user and system behavior, web applications and every other aspect of the environment using AI-based systems to detect anomalies that may indicate a cyberattack, and respond in real time. This requires understanding the baseline behavior of the environment and following up on deviations from the norm. For instance, an AI system can identify a sudden increase in data exfiltration attempts and trigger an automatic response, such as isolating affected systems.
  • Education and awareness: Humans are the weakest link in the security chain. It’s important to conduct regular awareness training to educate employees about AI-powered threats, how to recognize them and how to report suspicious activities.
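To make the UEBA idea above concrete, here is a minimal sketch of baseline-and-deviation scoring. It is purely illustrative: the login-hour history, the z-score test and the threshold of three standard deviations are assumptions for the example, not part of any particular product, and real UEBA engines model far richer features than login times.

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Summarize a user's historical login hours (0-23) as mean and std dev."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` standard
    deviations from the user's historical norm (a simple z-score test)."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Hypothetical history: a user who normally logs in around 9 a.m.
history = [8, 9, 9, 10, 9, 8, 9, 10, 9, 9]
baseline = build_baseline(history)

print(is_anomalous(9, baseline))   # typical login hour -> False
print(is_anomalous(3, baseline))   # 3 a.m. login, far outside the norm -> True
```

In practice the same baseline-versus-deviation pattern is applied across many signals at once (access requests, data volumes, process activity), with the anomaly score feeding into the SIEM or XDR platform for triage.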
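The continuous-monitoring example (a spike in data exfiltration triggering an automated response) can be sketched the same way. The rolling window, the 5x spike factor and the byte counts below are all hypothetical; a real deployment would consume NetFlow or EDR telemetry and invoke an actual isolation workflow rather than returning a flag.

```python
from collections import deque

class ExfiltrationMonitor:
    """Rolling-window monitor that flags a host whose outbound byte count
    jumps far above its recent average. Illustrative sketch only."""

    def __init__(self, window=10, spike_factor=5.0):
        self.history = deque(maxlen=window)  # recent outbound-byte samples
        self.spike_factor = spike_factor     # how far above average counts as a spike

    def observe(self, outbound_bytes):
        """Record one sample; return True if it spikes past the baseline,
        which is where an automated response (e.g. isolating the host)
        would be triggered."""
        spike = False
        if len(self.history) == self.history.maxlen:
            avg = sum(self.history) / len(self.history)
            spike = avg > 0 and outbound_bytes > avg * self.spike_factor
        self.history.append(outbound_bytes)
        return spike

monitor = ExfiltrationMonitor(window=5, spike_factor=5.0)
for sample in [100, 120, 110, 90, 105]:   # normal traffic builds the baseline
    monitor.observe(sample)
print(monitor.observe(115))    # ordinary transfer -> False
print(monitor.observe(5000))   # sudden large transfer -> True
```

The design choice worth noting is that detection is relative to each host's own baseline rather than a fixed global limit, which is what lets the same mechanism cover quiet workstations and busy servers alike.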

Implementing these measures and staying informed about current threats, coupled with human expertise, will be essential in safeguarding against the evolving and formidable threat of AI-powered cyberattacks. Cybersecurity must be viewed as an ongoing process that evolves alongside the threat landscape.

To learn more about our cybersecurity solutions, contact us.

Jon Medina

Managing Director
Security and Privacy

Nishi Prasad

Senior Consultant
Security and Privacy
