Strengthening Security Against AI-Powered Attacks

Generative AI (GenAI) technologies, such as ChatGPT, have demonstrated substantial advantages across multiple industries, including the field of cybersecurity. These AI models are becoming integral to all phases of the cyberattack process, from gathering open-source intelligence to exploitation and command-and-control mechanisms. Using GenAI, malicious actors can swiftly analyze reconnaissance data, identify system flaws and develop evasive malware. Considering this, creating cutting-edge security solutions becomes critical to defend effectively against the increasing risks of AI-enabled cyberattacks. The growing use of AI in cybercrime underscores the need for a preemptive defense strategy in our digital environment. The World Economic Forum cited adverse outcomes of AI technologies and cyber insecurity among the top ten risks for 2024. In its report “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” NIST aims to support the development of trustworthy AI. The report also highlights attacks and methodologies that users, developers and AI systems can expect to encounter, along with approaches for mitigating them.

While the benefits of GenAI are well-documented, its impact on cybersecurity and how to address it are often afterthoughts, receiving less attention than they deserve. To raise awareness of this and the growing global ransomware threat, the NCSC released its risk report for 2023. Per the report, AI is likely to lower the barrier to entry for less experienced threat actors, aiding in information gathering, exploit development, ransomware and social engineering. Organizations must take proactive steps to protect themselves in this new threat landscape, characterized by low barriers to entry. Understanding AI-powered attacks and how they differ from traditional threats is the first crucial step, paving the way for security measures that integrate the same AI innovations adversaries are leveraging in the wild. These measures will require a blend of AI systems working with human expertise to establish a robust security environment for an organization.

AI-powered attacks distinguish themselves from traditional threats by leveraging artificial intelligence and machine learning to create increasingly sophisticated, targeted and dynamic attacks against their victims. The most significant impact of recent AI developments on cybersecurity is the speed and scale that AI offers to threat actors. While there is currently no foolproof way to determine whether a cyberattack involves AI, it’s important to remember that these tools optimize threat actors’ workflows and enable even less technically skilled individuals to undertake sophisticated attacks. AI-driven bots and algorithms can operate tirelessly, 24/7, persistently targeting victims without respite. Their automation, rapid execution and scalability empower attackers to overwhelm systems, shrink defenders’ response windows, and conduct highly targeted and persistent attacks.

Developing equally advanced security measures is imperative to effectively counter the growing threat of AI-powered cyberattacks. The security measures that organizations should consider include, but are not limited to, the following:

  • User and entity behavior analytics (UEBA): Employ UEBA systems that use machine learning to build a behavioral baseline profile for each user and entity, against which suspicious activity or unauthorized access can be compared (a minimal sketch of this approach follows this list). This technology already exists in most modern enterprise security solutions, such as Security Information and Event Management (SIEM), Endpoint Detection and Response (EDR), Extended Detection and Response (XDR) and Identity & Access Management (IAM).
  • Advanced threat detection systems: Implement AI-driven threat detection systems capable of identifying AI-powered attack patterns. As with UEBA, an AI system can detect anomalous device or user behaviors, such as unusual login times or access requests, which may indicate AI-assisted attacks. These systems can be deployed at various layers (application, network, host, etc.) of the technology security stack, depending on the organization’s requirements.
  • Human-AI collaboration: In the age of AI, the most secure organizations will be those that harness both AI and human ingenuity. These organizations foster collaboration between human security experts and AI systems to augment threat analysis and response. A security analyst can work alongside AI algorithms to investigate suspicious incidents and adapt defenses in real time.
  • Continuous monitoring: Continuously monitor network traffic, user and system behavior, web applications and any other aspect of the environment using AI-based systems to detect anomalies that may indicate a cyberattack and respond in real time. This requires understanding the baseline behavior of the environment and following up on deviations from the norm. For instance, an AI system can identify a sudden increase in data exfiltration attempts and trigger an automatic response, such as isolating affected systems (a simple sketch of this baseline-and-respond loop also follows this list).
  • Education and awareness: Humans remain the weakest link in the security chain, so conduct regular awareness training that educates employees about AI-powered threats, how to recognize them and how to report suspicious activity.
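
Both the UEBA and advanced threat detection items above rest on the same idea: learn what normal behavior looks like, then flag deviations. The following minimal sketch illustrates that idea in Python with scikit-learn’s IsolationForest; the features (login hour, session length, data transferred) and the sample values are illustrative assumptions, not the output of any particular SIEM, EDR or XDR product.

    # Minimal UEBA-style sketch (illustrative only): learn a behavioral
    # baseline from historical activity, then flag deviations that may
    # indicate account abuse or AI-assisted attacks.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical historical events: [login_hour, session_minutes, MB_transferred]
    baseline_events = np.array([
        [9, 45, 120], [10, 60, 80], [9, 30, 95], [11, 50, 110],
        [8, 40, 70], [10, 55, 130], [9, 35, 90], [11, 65, 100],
    ])

    # Fit an anomaly detector on the baseline behavior.
    detector = IsolationForest(contamination=0.05, random_state=42)
    detector.fit(baseline_events)

    # New activity: a 3 a.m. login with an unusually large transfer.
    new_event = np.array([[3, 200, 5000]])
    if detector.predict(new_event)[0] == -1:
        print("Anomalous behavior detected - escalate for SOC review")

In practice, commercial UEBA features perform this kind of profiling continuously and per entity; the point here is simply that a learned baseline, not a static rule, is what lets the system catch novel, AI-assisted behavior.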
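
The continuous monitoring item above describes baselining normal activity and responding automatically when behavior deviates sharply, such as a spike in outbound data. The sketch below shows one simple way that loop could work; the isolate_host() function, window size and threshold are hypothetical placeholders for whatever EDR, network access control or SOAR integration an organization actually uses.

    # Minimal continuous-monitoring sketch (illustrative only): compare
    # outbound traffic per host against a rolling baseline and trigger
    # containment when a sample far exceeds normal levels.
    from collections import deque
    from statistics import mean, stdev

    WINDOW = 30          # number of recent samples kept as the baseline
    THRESHOLD_SIGMA = 4  # how far above the baseline counts as a spike

    history = deque(maxlen=WINDOW)

    def isolate_host(host: str) -> None:
        # Placeholder for a real containment action (EDR/NAC/SOAR API call).
        print(f"[response] isolating {host} and opening an incident ticket")

    def check_outbound(host: str, megabytes_sent: float) -> None:
        if len(history) >= 10:  # need enough samples for a meaningful baseline
            baseline, spread = mean(history), stdev(history)
            if megabytes_sent > baseline + THRESHOLD_SIGMA * max(spread, 1.0):
                isolate_host(host)
        history.append(megabytes_sent)

    # Example: steady traffic, then a sudden large transfer triggers a response.
    for sample in [12, 15, 11, 14, 13, 12, 16, 15, 14, 13, 900]:
        check_outbound("workstation-42", sample)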

Implementing these measures, staying informed about current threats and pairing them with human expertise will be essential to safeguarding against the evolving and formidable threat of AI-powered cyberattacks. Cybersecurity must be viewed as an ongoing process that evolves alongside the threat landscape.

To learn more about our cybersecurity solutions, contact us.

Jon Medina

Managing Director
Security and Privacy

Nishi Prasad

Senior Consultant
Security and Privacy
