
Prompting Productivity: Strategies for Securing Generative AI

Published 28 June 2024

With 7 million people in the UK having used generative AI at work this year, there is a clear and growing need for organisations to implement strategies to secure the use of these tools. While some of the benefits of generative AI are well understood, such as large-scale operational efficiencies and significant cost savings, there are potential risks that must be addressed as part of an organisation's information security and data privacy strategy.

In this blog, we’ll look at the risks posed by using generative AI in the workplace and strategies you can implement to ensure information security and data privacy, including some basic dos and don’ts. 

Are There Cyber Security and Data Privacy Risks with Generative AI?

Yes. Over-reliance on the information provided by generative AI could lead to operational, legal or reputational issues if inaccurate, unreliable or biased responses are used. For example, untested software code could introduce bugs or security flaws, and unverified or biased information could lead to legal disputes or brand damage.

Generative AI also poses risks when it comes to data privacy. As Stephen Almond, Executive Director at the ICO, warns, “businesses are right to see the opportunity that generative AI offers ... but they must not be blind to the privacy risks”. To start, company proprietary information or personal data (also referred to as Personally Identifiable Information (PII)) could be shared in ways that fall outside the requirements of data privacy laws and regulations or of company policies. For more on this topic, we recommend watching our Data Privacy and AI webinar, or reading our blog covering the main data privacy risks posed by AI.

How Can I Reduce the Risks Posed by Generative AI?

To address the potential risks introduced by generative AI, organisations should consider implementing the following strategies.

1. Governing the Use of Generative AI

Generative AI is a third-party service that should undergo the same scrutiny during supplier selection, evaluation, and monitoring as any other third-party product or service. If your organisation has a supplier assurance framework, this should be applied to generative AI tools. 

Organisations should ensure that an agreed level of information security and data privacy is in place before approving the technology’s use and that this level is maintained by the solution provider. Some organisations may benefit from developing a specific policy on the use of generative AI.

Unauthorised generative AI products should be treated as “shadow IT” and their use restricted where possible. Access can be restricted using technical measures such as web filtering or cloud access security brokers. Limiting access to generative AI gives organisations valuable time to select and evaluate generative AI solution providers in line with their needs and use cases.
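As a simple illustration of how such technical restrictions might work, the sketch below shows the kind of deny-list check a web filter or egress proxy could apply to outbound requests. The domain names and function are hypothetical placeholders, not a recommendation of specific tools or services.

```python
# Illustrative sketch only: a deny-list check of the kind a web filter or
# cloud access security broker might apply to outbound requests.
# The domains listed are hypothetical examples, not a recommended block list.
from urllib.parse import urlparse

# Hypothetical deny list of unapproved generative AI services
UNAPPROVED_GENAI_DOMAINS = {
    "chat.example-genai.com",
    "api.example-genai.com",
}

def is_request_allowed(url: str) -> bool:
    """Return False if the request targets an unapproved generative AI domain."""
    host = urlparse(url).hostname or ""
    return not any(
        host == domain or host.endswith("." + domain)
        for domain in UNAPPROVED_GENAI_DOMAINS
    )

# Example: a proxy or egress gateway could run this check before forwarding traffic
print(is_request_allowed("https://chat.example-genai.com/new"))  # False - blocked
print(is_request_allowed("https://intranet.example.com/docs"))   # True - allowed
```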

2. Minimise Data Leakage Risks

User prompts are often stored by large language model (LLM) providers and used to train their models, and this data may be retained indefinitely. Depending on the provider, the information could be returned in a response to another user, potentially infringing intellectual property rights, resulting in a data breach, or violating data privacy laws or regulations.

Organisations can take steps to combat data leakage by restricting the information transferred. They can achieve this by using a segregated instance of generative AI, such as running it in an isolated sandbox environment, or by disabling data capture settings. Both approaches reduce the information that is shared with and retained by the provider.

Internet-connected instance with data capture enabled

Risks / disadvantages:
  • Data may be used to train LLMs
  • Data may be retained indefinitely
  • Data may leave the environment boundary or jurisdiction
  • Potential for data breaches or intellectual property rights infringement
  • Potential breach of company policy, or of privacy laws or regulations

Benefits:
  • Readily available to users
  • No need to set up or configure

Internet-connected instance with data capture disabled

Risks / disadvantages:
  • Confidential information is transmitted across the internet to a third party, increasing the attack surface
  • Relies on users disabling data capture and requires trust in the solution provider

Benefits:
  • Provides users with quick and easy access to generative AI, whilst reducing the likelihood of data leakage or data privacy violations

Isolated local instance

Risks / disadvantages:
  • Requires effort to configure and to communicate to employees

Benefits:
  • Provides users with all the benefits of generative AI, whilst minimising the security and data privacy risks
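Whichever configuration is chosen, organisations can further restrict what is transferred by screening prompts before they leave the corporate environment. The sketch below is a minimal, illustrative Python example that flags obvious personal data using simple pattern matching; the patterns, function name and example prompt are assumptions for illustration, and a production deployment would rely on dedicated data loss prevention tooling.

```python
# Minimal, illustrative sketch of screening prompts for obvious personal data
# before they are sent to an external generative AI provider. Real deployments
# would need far more robust detection than these example patterns.
import re

# Example patterns only: email addresses and UK-style National Insurance numbers
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a list of the personal data types detected in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise the complaint from jane.doe@example.com, NI number QQ123456C."
findings = check_prompt(prompt)
if findings:
    # Block or redact the prompt rather than sending it to the provider
    print(f"Prompt blocked: contains {', '.join(findings)}")
```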
3. User Awareness and Guidance

It is vital that business leaders communicate both the risks and the opportunities of using generative AI within their organisations in order to realise the greatest level of business transformation. This should include:

  • Promoting the use of approved generative AI solutions and prohibiting the use of unapproved ones. Organisations may benefit from updating their acceptable use policy to include rules for generative AI.
  • Providing guidance on how to use generative AI, including verifying and/or testing the responses generated (see the sketch after this list). For example, software code should undergo the necessary checks and testing before being released, and information should be fact-checked against other reputable sources.
  • Asking employees to notify data privacy or information security teams of any changes to the terms of service or data privacy notices that could lead to unlawful processing of personal data or to information security risks.
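To make the verification point concrete for software, the sketch below shows a hypothetical AI-generated helper function being accepted only after it passes a unit test. The function, values and tests are illustrative assumptions rather than a prescribed workflow.

```python
# Illustrative only: AI-generated code should be reviewed and tested like any
# other code before release. Here, a hypothetical AI-suggested function is
# checked with unit tests rather than being trusted as-is.
import unittest

def calculate_vat(net_amount: float, rate: float = 0.20) -> float:
    """Hypothetical AI-generated helper: add VAT to a net amount."""
    return round(net_amount * (1 + rate), 2)

class TestCalculateVat(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(calculate_vat(100.0), 120.0)

    def test_zero_amount(self):
        self.assertEqual(calculate_vat(0.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```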

Basic Dos and Don'ts

Beyond these strategies, below are some basic practices that you can communicate to your organisation that will help minimise risk.

Do

  • Review and configure the security of generative AI to meet the needs and risk appetite of your business.
  • Make it clear to employees what is acceptable and what is unacceptable use of generative AI.
  • Educate employees on the risks and benefits of generative AI.
  • Verify, test or fact-check information produced by generative AI.

Do Not

  • Input data into generative AI tools without first evaluating the risks, especially company confidential information or personal data.
  • Use these tools for employee monitoring or automated decision-making without having performed a Data Protection Impact Assessment.
  • Use information produced by generative AI without human oversight.

Generative AI is a new technology that can bring enormous benefits and is already being widely adopted by many organisations. For help in ensuring it is used securely and in compliance with data privacy laws and regulations, get in touch with our team or attend one of our webinars.