AI Acceptable Use Policy: Where to Start?

Sophie Stalla-Bourdillon on November 14, 2023
Last edited: November 4, 2024

Generative artificial intelligence (AI) refers to prediction algorithms that can create virtually any type of content, be it text, code, images, audio, or video – think ChatGPT, Bing Chat, Bard, Stable Diffusion, Midjourney, and DALL-E, for example. With the emergence of generative AI-as-a-Service – which has lowered barriers to entry – generative AI is spreading to most business units: marketing and sales, customer operations, software engineering, and R&D.

As a result, 54% of respondents in the AI Security and Governance Report say their organization already leverages at least four AI systems, and 79% have increased their budgets for AI systems, applications, and development over the past year.

Still, we’re faced with a range of generative AI risks, from security threats to misinformation, deception, discrimination, and, more broadly, noncompliance. Without caution, organizations risk violating confidentiality obligations, intellectual property rights such as trade secrets and copyrights, or privacy and data protection obligations.

The concerns around genAI are warranted. According to the 2024 State of Data Security Report, which surveyed 700 data professionals across industries and geographies, 88% said employees at their organization are using AI, regardless of whether the company has officially adopted it and put a policy in place to dictate its use. Yet, just 50% say their data security strategy is keeping up with the pace of AI evolution, which introduces significant risk of sensitive information being exposed or misused.

Just like any third-party or in-house tool that processes protected information, genAI should be governed by internal policies. But this requires going beyond traditional security obligations – you need to set rules for human oversight and review, as well as for transparency when using genAI to produce content for public consumption or decision making.

What Is an AI Acceptable Use Policy?

An Acceptable Use Policy (AUP) sets rules for using an organization’s resources, and in particular IT-related resources, including hardware, software, and networks. It is one policy within a broader information security program, along with data classification, data retention, and incident response policies, among others.

An acceptable use policy defines authorized behavior and prohibits unwanted behavior, with the aim of protecting an organization’s assets while enabling effectiveness and productivity. Typically, the organization will require its employees to acknowledge the acceptable use policy before being granted access to IT resources, and violations will lead to disciplinary action. Acceptable use policies are thus the natural home for governing generative AI usage.

Just like any other policy, adoption of acceptable use policies will require education and training to make sure employees are familiar with the newly-adopted rules, and understand how to comply with them in practice.

A Warning About AI Regulations

It is crucial to track new developments in AI regulation, as lawsuits have been brought against a variety of providers. In response, lawmakers are moving from a reactive to a proactive stance.

Lawsuits against genAI providers make it clear that the risk surface is complex and multifaceted, involving multiple legal pitfalls such as intellectual property rights violations, as well as privacy and data protection violations. For example, the lawsuit against GitHub, Microsoft, and OpenAI centers on GitHub Copilot’s ability to transform commands written in plain English into computer code, while the lawsuit against Stability AI, whose Stable Diffusion model generates images from users’ text prompts, raises similar issues; both turn on copyright. Lawsuits (see also here, here, and here) against OpenAI concerning ChatGPT, and a lawsuit against Google related to Bard’s data scraping practices, also raise sensitive data protection and privacy issues.

Regulators are also in the process of regulating – or have already started to regulate – some of these practices. The recently adopted US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, for example, mandates the issuance of a series of guidelines and recommendations for the Federal Government and its agencies, with a view to advancing the responsible and secure use of generative AI. In particular, it discourages general bans:

“Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights.”

Key Steps for Building an AI Acceptable Use Policy

There are five key steps you should follow to build an acceptable use policy targeting genAI usage.

1. Define the AI Acceptable Use Policy Scope

An acceptable use policy is, in principle, applicable to all of an organization’s employees. There is no reason why the section on generative AI should be treated differently.

It is essential to precisely define the genAI tooling that is in scope, ideally with examples, so that employees understand that the rules apply to them. This means explicitly stating how genAI tooling can be used in a variety of contexts, such as by engineering and research teams to produce code, by marketing teams to produce content, and by all teams to generate meeting notes.

2. Set Clear Acceptable Terms of Use

Set clear acceptable use terms so that there is no ambiguity when employees find themselves in questionable scenarios. Acceptable terms of use should cover:

  1. The list of legitimate business purposes for which generative AI tooling can be used.
  2. A process for vetting AI tools to verify legitimate business purposes, involving relevant stakeholders like security and privacy teams. As part of their due diligence, these teams should consider the purposes for which the AI-generated content will be used; the types of data used within prompts; who can access or reuse prompts and results, and why; intellectual property clearance affecting the training, deployment, and improvement phases; legal bases for those same phases when personal data is at stake; the operationalization of data subject rights; and the impact of the practice supported by the tool upon fundamental rights. One challenge with the genAI tooling currently available on the market is that providers are struggling to identify a valid legal basis under data protection laws, which jeopardizes these tools’ lawfulness.
  3. A requirement to create and maintain an inventory of vetted genAI tools (an illustrative sketch of such an inventory follows this list).
  4. Rules for protecting the authentication credentials used to access genAI tooling, and for the retention and disclosure of prompt history and individual prompts.
  5. The list of categories and classes of organizational data that can be entered into genAI tooling for each use case. The acceptable use policy should align with the internal data classification policy on this point.
  6. Rules setting forth the oversight and review process, in particular when genAI tooling is used to produce content that is shared externally, or is used to support decision making that may impact individuals. This should involve a variety of teams, including privacy and ethics.
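
To make items 3 and 5 above more concrete, here is a minimal, purely illustrative sketch of how a vetted-tool inventory and its data classification allow-lists could be written down so that both people and tooling can check them. The tool names, purposes, classification labels, and contact addresses are hypothetical examples, not recommendations, and should be aligned with your own data classification policy.

```python
# Purely illustrative sketch: a hypothetical inventory of vetted genAI tools.
# Tool names, purposes, and classification labels are examples only and should
# mirror your organization's own data classification policy.

VETTED_GENAI_TOOLS = {
    "code-assistant": {
        "approved_purposes": ["code generation", "code review suggestions"],
        "allowed_data_classes": ["Public", "Internal"],  # never "Strictly Confidential"
        "requires_human_review": True,                   # oversight before external release
        "owner": "it-governance@example.com",            # hypothetical contact
    },
    "meeting-notes-service": {
        "approved_purposes": ["meeting summarization"],
        "allowed_data_classes": ["Internal"],
        "requires_human_review": True,
        "owner": "it-governance@example.com",
    },
}

def is_use_permitted(tool: str, purpose: str, data_class: str) -> bool:
    """Return True only if the tool is vetted, the purpose is approved,
    and the data classification is on the tool's allow-list."""
    entry = VETTED_GENAI_TOOLS.get(tool)
    if entry is None:
        return False  # unvetted tools are prohibited by default
    return purpose in entry["approved_purposes"] and data_class in entry["allowed_data_classes"]

# Example: this request would be denied because of the data classification.
print(is_use_permitted("code-assistant", "code generation", "Strictly Confidential"))  # False
```

Whether a check like this lives in a proxy, a browser extension, or simply in the vetting team’s documentation is an organizational choice; the point is that the inventory, approved purposes, and allowed data classes are recorded in one place that both employees and tooling can consult.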


3. Blacklist Prohibited Practices

It may seem excessive, but undesirable and unauthorized practices should be listed as such for the sake of clarity. By way of example:

  1. Content produced with genAI tooling should never be publicly released without adding a specific acknowledgement.
  2. Prompts and query results should not be disclosed to third parties, except for authorized service providers.
  3. Information classified as ‘Strictly Confidential’ should never be inserted within prompts.
  4. GenAI tooling should not be accessed with personal emails.

4. Allocate Responsibilities and Acknowledge Rights

Allocate responsibilities according to each employee’s role. This may look like the following:

  1. All employees must comply with a predetermined list of standards governing the use of genAI systems in all cases: professionalism, reflective thinking, respect for others, and compliance with applicable laws, including equality, privacy, and data protection laws.
  2. All employees are encouraged to work collaboratively and discuss concerns/questions related to the use of genAI tooling as soon as they arise.
  3. All employees have a duty to report violations of this policy.
  4. Line managers should be responsible for ensuring that their teams are aware of and comply with the acceptable use policy.
  5. The IT department is responsible for managing, consolidating, and/or overseeing the approved list of genAI systems to ensure that only authorized tools are used within the organization.
  6. The compliance team, with the assistance of the legal team, is responsible for handling complaints related to violations of the acceptable use policy.
  7. The compliance team, with the assistance of the legal team, is responsible for adapting/amending the acceptable use policy to meet new applicable legal/ethical requirements.
  8. Employees should be informed about their rights, in particular privacy and data protection rights such as the rights to object, access, correction, and deletion, when genAI tooling is deployed within the organization. One major problem with B2B generative AI-as-a-Service offerings is that they don’t always allow users to exercise their rights at the individual level. Even when they do, they are often limited in their ability to fulfill data subject requests, such as deletion requests.
  9. Employees should be informed about the implications of not adhering to the acceptable use policy, and about changes made to the policy.
  10. A timeframe for reviewing the policy should be established, e.g., on an annual basis, and employees should be given an opportunity to provide feedback about the policy to ensure they buy into it.

5. Set an Incident Reporting Process

To detect potential misuse of generative AI tooling as early as possible, employees should be informed about the Incident Reporting Process, which may be its own policy. By way of example, the policy may address the following considerations:

  1. Employees should be encouraged to discuss concerns/questions with the competent teams (IT, privacy, ethics) at any point in time, following a defined process.
  2. Employees must report violations of this policy to the compliance department, following a defined process.
  3. All reports of suspected violations or incidents will be investigated promptly and thoroughly, within a predefined time frame.

Getting Started with Your AI Acceptable Use Policy

With these five tactics, the process of building an acceptable use policy that addresses generative AI becomes clearer, but also more urgent. GenAI tools and LLMs are evolving quickly, as demonstrated by the half of data professionals who said their data security strategy is failing to keep up. To ensure that the use of genAI tools does not become unmanageable and thus highly risky, an acceptable use policy should be an early priority.

Learn More About AI Security

Check out our other AI blogs.
