Security

How to Prepare Your Company for Generative AI Cyberattacks

Home

>

Blog

>

Security

>

How to Prepare Your Company for Generative AI Cyberattacks

Published: 2024/11/14

5 min read

“If we want to skate to where the puck is going to be in a few years, we have to anticipate what a neural network that’s four times more compute has gone into it a year from now, and 16x more compute has gone into it two years from now.” So said Jason Clinton, CISO of Anthropic, during DataGrail Summit 2024. Businesses need to be aware that along with all the essential and valuable innovations brought about by AI development, they need to deal with the rising threats related to AI being exploited by bad actors.

In 2025, worldwide end-user spending on information security is expected to reach $212 billion USD, marking a 15.1% increase from 2024, as forecasted by Gartner. The estimated global spending for information security in 2024 is $183.9 billion USD. Most importantly, Gartner estimates that by 2027, 17% of all cyberattacks will involve generative AI. As will everything related to cybersecurity, there’s no way to put that maleficent process to a complete halt. However, businesses can prepare for what’s to come.

The cost of a data breach can be more than high, as mentioned in the Cost of a Data Breach Report 2024 by IBM. $4.88 million USD per company – that was the global average cost of a data breach in 2024, a 10% increase over last year and the highest total ever. The average cost savings for organizations that extensively used security AI and automation in prevention measures, compared to those that did not, was $2.22 million USD.

What are the most significant threats related to AI and how can companies leverage this new technology to secure their businesses?

Most notable AI-related security risks

Cybercriminals can use artificial intelligence to breach security and derail your organization’s efforts in several ways. Here are the most considerable:

  • Easier analysis and exploitation of software vulnerabilities: AI-supported solutions can significantly enhance the reconnaissance and identification of vulnerabilities in specific systems, making attacks more efficient by suggesting vectors of approach previously unknown to bad actors.
  • Email phishing campaigns are becoming more convincing: With generative AI capabilities related to text generation and analysis, the creation of content and the ability to imitate third parties is becoming a menace to every organization.
  • In-depth analysis of data after a first-degree intrusion to select vectors and methods of attack further into the organization: An example is the increasingly frequent cases of intrusion into a single email box and, after an in-depth analysis of the messages, reconstructing the structure of the organization and selecting the best method for subsequent attacks directed at the right people and in a deceptively probable form of events
  • Weaponizing social engineering with AI deep fakes and generated content: Impersonating third parties using AI-generated audio, graphics or videos to mimic third parties is quickly becoming one of the top security risks businesses will need to deal with in the upcoming months and years.

Supporting malware development in various stages: Similar to AI-boosted software development, malware creation can also use AI tools to speed up its development process and release more harmful apps.

How AI boosts companies’ security efforts

Fortunately, AI can not only act as a countermeasure but also boost your security effort and safeguard your organization against threats that are not only strictly AI-related. Here’s what AI brings to the table.

  • Introducing risk prediction that’s more robust and cost-friendly: The power of AI can improve threat detection by analyzing large datasets to identify anomalies and potential threats with greater precision.
  • Strengthening security protocols and procedures: AI can be utilized to enhance encryption protocols, access controls, and stay up to date with the latest advancements in AI technologies that contribute to more robust and secure systems.
  • Automating solutions to free up human resources: Various automation options, from incident reporting to threat analysis, can empower staff with tools to focus on other essential tasks, while AI takes care of repetitive assignments.
  • Simulating potential threats and making training more accessible: AI’s ability to run various scenarios can simulate attack scenarios, predict miscellaneous incursion vectors, and support security audit services to respond effectively via different simulations to cyber threats.

The importance of fostering an AI security mindset in organizations

As organizations incorporate more solutions based on what large language models (LLM) can offer, the threat of prompt injection rises. The purpose of a prompt injection is to trigger unintended responses from LLM-based tools. Potential manipulations go above prompt injections, as generative AI, if used by inexperienced staff, can become a potentially harmful tool. The IBM Security team manipulated and hypnotized LLMs to leak confidential financial information, create vulnerable code, and offer weak security recommendations. It was made possible by creating a “false reality” and convincing the LLMs to play a game within a game.

The team told the LLMs not to inform users about the game and clarified that no one should be able to exit the game. The team instructed the LLMs to restart if anyone managed to exit it. This approach ensured that the AI models would never stop the game while the user was engaged in the same conversation, even if the user figured out how to ask the AI to stop playing the game. This framework could trap the user in multiple games, with the models capable of creating more layers as needed.

Fortunately, the IBM Security team reassured the tech community that the risk posed by hypnosis is currently low. However, experts have concluded that because generative AI solutions are constantly evolving, companies must regularly monitor and adjust generative AI data security measures to address new, as yet unrecognized vulnerabilities.

What needs to be done in organizations to promote a mindset that prioritizes generative AI security?

  • Introduce regular coaching: Start offering training sessions that cover in-depth training on security risks, vulnerabilities, and best practices related to generative AI.
  • Conduct security awareness campaigns: Conduct ongoing security awareness campaigns to emphasize security’s importance and highlight the potential consequences of breaches.
  • Explore case studies: Share real-world case studies of AI-related security incidents to underscore the importance of staying vigilant.

As the late Kevin Mitnick, a hacker turned security expert, once said, “Companies spend millions of dollars on firewalls, encryption, and secure access devices, and it’s money wasted because none of these measures address the weakest link in the security chain: the people who use, administer, operate and account for computer systems that contain protected information.” His words could not be more fitting in the age of AI.


Maximize your data's value with a Data Health Check

AI will become a vital part of cybersecurity efforts

The global cost of cybercrime is expected to surge in the next four years, rising from $9.22 trillion USD in 2024 to $13.82 trillion USD by 2028, according to Statista. AI will play a major part on both sides of the frontline. Organizations can revolutionize their approach against cyber threats by integrating generative AI into cybersecurity strategies that enable the proactive identification and prevention of potential security breaches and deliver a significant advantage in staying ahead of cybercriminals. Not using AI solutions to secure your businesses may leave you vulnerable to an AI-supported attack. Anthropic CISO’s was clear about it – failing to use up-to-date AI agents will put you behind. Both when it comes to business growth and cybersecurity.

If you are interested in learning more about how to defend your company against AI-related threats and explore the possibilities offered by the latest advancements in AI and security, use this contact form to connect one of our experts.

About the authorWojciech Kozak

Software Delivery Director

A Software Delivery Director with over 20 years’ experience in the IT industry who has spent the past 15 years working with the largest Polish TELCO Operators. Wojtek combines a technical background in application development services with wide business knowledge, especially as regards the telecommunication industry. His extensive experience and passion enable him to effectively manage development teams that implement ambitious projects with high quality.

Subscribe to our newsletter

Sign up for our newsletter

Most popular posts