With 7 million people in the UK having used generative AI at work this year, there is a clear and growing need for organisations to implement strategies to secure the use of these tools. While some of the benefits of generative AI are well understood, such as large-scale operational efficiencies and significant cost savings, there are potential risks that must be addressed as part of an organisation's information security and data privacy strategy.
In this blog, we’ll look at the risks posed by using generative AI in the workplace and strategies you can implement to ensure information security and data privacy, including some basic dos and don’ts.
Are There Cyber Security and Data Privacy Risks with Generative AI?
Yes. Over-reliance on information provided by generative AI could lead to operational, legal or reputational issues if inaccurate, unreliable or biased responses are used. For example, untested software code could introduce bugs or security flaws, while unverified or biased information could lead to legal disputes or brand damage.
Generative AI also poses risks when it comes to data privacy. As Stephen Almond, Executive Director at the ICO, warns, “businesses are right to see the opportunity that generative AI offers ... but they must not be blind to the privacy risks”. For example, company proprietary information or personal data (also referred to as Personally Identifiable Information, or PII) could be shared in ways that breach data privacy laws, regulations or company policies. For more on this topic, we recommend watching our Data Privacy and AI webinar, or reading our blog covering the main data privacy risks posed by AI.
How Can I Reduce the Risks Posed by Generative AI?
To address the potential risks introduced by generative AI, organisations should consider implementing the following strategies.
1. Governing the Use of Generative AI
Generative AI is a third-party service that should undergo the same scrutiny during supplier selection, evaluation, and monitoring as any other third-party product or service. If your organisation has a supplier assurance framework, this should be applied to generative AI tools.
Organisations should ensure that an agreed level of information security and data privacy is in place before approving the technology’s use and that this level is maintained by the solution provider. Some organisations may benefit from developing a specific policy on the use of generative AI.
2. Minimise Data Leakage Risks
User prompts are often stored by large language model (LLM) providers and may be used to train the underlying models; depending on the provider's terms, this retention can be indefinite. Information submitted in a prompt could then be returned in a response to another user, potentially infringing intellectual property rights, resulting in a data breach, or violating data privacy laws or regulations. The table below compares common deployment configurations, and a minimal prompt-redaction sketch follows it.
Configuration | Risks / Disadvantages | Benefits
---|---|---
Internet-connected instance with data capture enabled | Prompts (and any data they contain) may be stored by the provider and used to train its models, so confidential or personal data could later surface in responses to other users | Lowest cost and effort to adopt; immediate access to the provider's latest models and features
Internet-connected instance with data capture disabled | Data still leaves the organisation's environment, and protection relies on the provider honouring its contractual and configuration commitments | Significantly reduces the risk of prompts being retained or reused for model training
Isolated local instance | Higher cost and technical effort to deploy, maintain and keep up to date | Data does not leave the organisation's environment, giving the greatest control over security and privacy
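To make this practical, below is a minimal, hypothetical sketch (in Python) of a redaction step that strips common PII patterns from a prompt before it is sent to an externally hosted generative AI service. The patterns and function names are illustrative assumptions, not a substitute for a vetted data loss prevention tool.

```python
import re

# Illustrative PII patterns only; a real deployment should use a vetted
# DLP/redaction tool and patterns agreed with the data privacy team.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance number
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise the complaint from jane.doe@example.com, NI number QQ123456C."
    print(redact(raw))
    # -> Summarise the complaint from [REDACTED_EMAIL], NI number [REDACTED_NI_NUMBER].
```

A step like this sits naturally in whatever gateway or wrapper the organisation uses to reach an approved generative AI tool, so that unredacted prompts never leave the corporate environment.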
3. Communicating Risks and Opportunities

It is vital that business leaders communicate both the risks and opportunities of using generative AI within their organisations in order to realise the greatest level of business transformation. This should include:
- Promoting the use of approved generative AI solutions and prohibiting the use of unapproved ones. Organisations may benefit from updating their acceptable use policy with rules covering generative AI.
- Providing guidance on how to use generative AI, including verifying and/or testing the responses generated. For example, software code should undergo the necessary checks and testing before being released (see the sketch after this list), and information should be fact-checked against other reputable sources.
- Asking employees to notify the data privacy or information security teams of any changes to the terms of service or data privacy notices that could lead to unlawful processing of personal data or information security risks.
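As an illustration of the "verify and test" point above, here is a hedged sketch of the kind of unit test a developer might write before releasing AI-generated code. The validate_postcode() helper is a hypothetical stand-in for output produced by a generative AI tool; the human-written test is what provides the assurance.

```python
import re
import pytest

def validate_postcode(postcode: str) -> bool:
    """Hypothetical AI-generated helper: checks a simplified UK postcode format."""
    return bool(re.fullmatch(r"[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}", postcode.strip().upper()))

# Human-written tests that must pass before the generated code is released.
@pytest.mark.parametrize("value, expected", [
    ("SW1A 1AA", True),   # well-formed postcode with a space
    ("EC1A1BB", True),    # well-formed postcode without a space
    ("12345", False),     # not a UK postcode format
    ("", False),          # empty input
])
def test_validate_postcode(value, expected):
    assert validate_postcode(value) == expected
```

The same principle applies to non-code output: treat anything a generative AI tool produces as a draft that must pass the organisation's normal review, testing and fact-checking gates.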
Basic Dos and Don'ts
Beyond these strategies, below are some basic practices you can communicate across your organisation to help minimise risk.
Do
- Review and configure the security settings of generative AI tools to meet the needs and risk appetite of your business.
- Make clear to employees what constitutes acceptable and unacceptable use of generative AI.
- Educate employees on the risks and benefits of generative AI.
- Verify, test or fact-check information produced by generative AI.
Do Not
- Input data into generative AI tools without first evaluating the risks, especially company confidential information or personal data.
- Use these tools for employee monitoring or automated decision-making without first performing a Data Protection Impact Assessment (DPIA).
- Act on information produced by generative AI without human oversight.