Artificial Intelligence

How to Secure the Best Possible Generative AI Data for your Project?

Home

>

Blog

>

Artificial Intelligence

>

How to Secure the Best Possible Generative AI Data for your Project?

Published: 2024/08/14

5 min read

According to a Gartner survey, 62% of CFOs and 58% of CEOs believe that AI will significantly impact their industries in the next three years. The adoption of Generative AI (GenAI) is an ongoing trend, and companies should carefully consider and address all the challenges associated with adopting it. Despite 80% of AI decision-makers expressing concerns about data privacy and security, only one in ten organizations has a reliable system for assessing bias and privacy risk in large language models (LLMs). To address these issues early and meet compliance requirements, businesses must implement the right frameworks and design their AI solutions with data security in mind. Read on to learn how.

Tips on overcoming LLM security challenges

Unfortunately, several security challenges come with providing high-quality data for large language AI models. What are the most crucial AI security challenges you might encounter and how can you deal with them?

Filtering content for LLMs

Both the prompt and the response provided by LLMs need to be filtered. When taking advantage of cloud providers’ services, you can use a built-in content filter to evaluate content according to specific criteria. For example, in Azure, content filters can filter out specific content categories: hate, sexual content, self-harm and violence.

Examples of tools used for content filtering:

  • Azure AI Content Safety
  • AWS Content Moderation
  • GCP Natural Language

Processing sensitive data in AI models

If you want to avoid a scenario like the one described at the beginning of this guide, you have to pay extra attention to how your AI solution processes sensitive data. Take a look at the example below, based on a Retrieval Augmented Generation (RAG) solution. RAG is a powerful technique that provides AI models with new knowledge through augmented queries (prompts).

In a typical RAG solution, context remains crucial. After you input a prompt, the generated answer is based on the existing data stored in the database. If the data is unsecured, the answer will be unsatisfactory. That’s why it’s important to take care of what’s inside the database that builds the context for the AI model.

RAG solution

Examples of tools used for sensitive data processing:

  • Azure PII detection
  • Amazon Comprehend
  • GCP Sensitive Data Protection
  • Scrubadub – an open-source tool used for replacing data

How to stop generative AI prompt injections?

The purpose of prompt injection is to trigger unintended responses from LLM-based tools. Older LLMs were more vulnerable to prompt injection and modern models have evolved into more sophisticated entities, so while less of an issue, prompt injection is still a challenge that needs to be addressed. Custom prompt engineering remains the first line of defense against such practices.

If you want to try hacking AI chatbots, check out this online game.

Useful tools to stop prompt injection:

Prompt Shields (previously Jailbreak risk detection) – a unified API that analyzes LLM inputs, detects user prompt attacks and documents attacks (when attackers embed hidden instructions in external materials to gain unauthorized control over an LLM session).

Rebuff – an open-source tool that filters out dangerous prompts and stores them for later analysis. It constantly learns to improve prompt injection prevention.

Certain models specialize in prompt injection detection, for example: deberta-v3-base-prompt-injection-v2.

A proper way to evaluate LLMs

Assessing LLMs is essential when transitioning between different models. This approach enables you to score and analyze a model to determine whether switching from one to another will have a positive or negative impact. The following LLM evaluation tools support the transition process and provide prompts for evaluation. If a model “passes” the test, it means that it will behave as expected after the change.

Examples of tools used for LLM evaluation:

  • Azure AI Studio evaluation
  • Amazon Bedrock evaluation
  • LangSmith evaluation

Free Online Conference: Data governance trends

How to prevent hallucination in AI chatbots?

What are LLM hallucinations? They are incorrect or misleading results that AI models generate – which can be a recurring issue for some models. Avoiding them remains an obvious priority for companies that want to make the most out of LLMs. Here are some tips for preventing AI hallucinations:

  • Do not use ChatGPT – Avoid using this app, as it usually tries to answer a question even if it lacks sufficient data.
  • Focus on prompt engineering – As mentioned before, pay attention to system prompts.
  • Set the right temperature – Some models have a so-called temperature setting, which affects their answers. The setting usually uses a scale from 0 to 1 (sometimes 0 to 2, depending on the model). “0” means the model will give you rigorous answers. The closer you set the temperature to “1”, the more creative the model gets with its responses.
  • When using RAG, check the distance – RAG always tries to find the relevance between two documents. It’s a good idea to ensure documents are relevant to a topic and avoid junk or insecure data input as a model will try to find connections even between two thematically unconnected documents.
  • Use the Chain of Verification technique – Draft an initial response and create additional verification questions to cross-check an answer. Then answer these questions independently and receive a verified response based on the answers. LLMs prefer narrow tasks, and the broader the question, the more hallucinatory the answer can be.

Implement DPSy to get better prompts and avoid hallucinations

Since prompt engineering can lead to inconsistencies in LLMs, it’s worth considering DPSy, a framework that assists users with creating prompts. How does it work? You input an initial prompt, and then DSPy interacts with it, based on the training data, and modifies it to generate a final prompt that is presented to you. By using the DPSy framework, you don’t need to create accurate prompts yourself and you can minimize the risk of hallucinations.

DPSy

Source: DSPy – Does It Live Up To The Hype? By Skanda Vivek

Keeping your generative AI data secure

In the most recent McKinsey Global Survey on AI, 65 percent of respondents answered that their organizations regularly use AI, nearly double the percentage from a previous survey conducted by the company just a few months ago. Respondents’ expectations for AI’s impact remain high, with three-quarters predicting that AI will lead to significant or disruptive change in their industries in the years ahead.

Achieving progress through AI will only be possible with the right data corpus. Contrary to some beliefs, AI is not a magic solution that can do everything on its own. High-quality data is the fuel that powers AI, and without it, AI becomes useless. The Generative AI Security Guide – How to Protect Your Data provides insights that can assist you and your company in ensuring data security in AI-driven products. It introduces best practices for developing safe generative AI applications and explores often overlooked aspects of data privacy in GenAI. Download it for free to find out how to mitigate potential security risks when using LLM.

About the authorPiotr Kalinowski

Head of Data & AI

A cloud data engineer with extensive experience with architectures, software development and building advanced data processing systems, Piotr has worked with the largest institutions in the financial sector. Along with developing AI/ ML solutions in AWS, Microsoft Azure, Oracle and Alibaba Cloud, he is an avid cloud blogger.

Subscribe to our newsletter

Sign up for our newsletter

Most popular posts