Six Risks from ChatGPT that Internal Audit Should Know About

Risks from ChatGPT

Artificial intelligence applications like ChatGPT are becoming common tools in the workplace to do everything from generating job descriptions, writing and editing reports, and to managing schedules (See related article, “How Employees Are Using ChatGPT on the Job“). But the apps aren’t perfect. In fact, they can be error prone and can even create new risks that companies must assess and manage.

Legal, internal audit, and compliance leaders should address their organization’s exposure to six specific ChatGPT risks, identified by consulting and research firm, Gartner. They must also consider what guardrails to recommend to management to ensure responsible enterprise use of generative AI tools, according to Gartner.

Wolters Kluwer Buyer’s Guide

“The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks,” said Ron Friedmann, senior director analyst at Gartner’s Legal & Compliance Practice. “Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties. Failure to do so could expose enterprises to legal, reputational, and financial consequences.”

The six risks from ChatGPT (and other AI apps) that legal, internal audit, and compliance leaders should evaluate include:

Risk 1: Fabricated and Inaccurate Answers

Perhaps the most common issue with ChatGPT and other LLM tools is a tendency to provide incorrect – although superficially plausible – information.

“ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann. “Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness, and actual usefulness before being accepted.”

Risk 2: Data Privacy and Confidentiality

Internal audit leaders should be aware that any information entered into ChatGPT, if chat history is not disabled, may become a part of its training dataset.

“Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” said Friedmann. “Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools.”

Risk 3: Model and Output Bias

Despite OpenAI’s efforts to minimize bias and discrimination in ChatGPT, known cases of these issues have already occurred, and are likely to persist despite ongoing, active efforts by OpenAI and others to minimize these risks.

“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant,” said Friedmann. “This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls.”

Risk 4: Intellectual Property (IP) and Copyright risks

ChatGPT in particular is trained on a large amount of internet data that likely includes copyrighted material. Therefore, it’s outputs have the potential to violate copyright or IP protections.

“ChatGPT does not offer source references or explanations as to how its output is generated,” said Friedmann. “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

Risk 5: Cyber Fraud Risks

Bad actors are already misusing ChatGPT to generate false information at scale, such as fake reviews and falsified video and audio impersonations. Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn’t intended for such as writing malware codes or developing phishing sites that resemble well-known sites.

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”

Risk 6: Consumer Protection Risks

Businesses that fail to disclose ChatGPT usage to consumers (for example, using it to create a customer support chatbot) run the risk of losing their customers’ trust and being charged with unfair practices under various laws. For instance, the California chatbot law mandates that in certain consumer interactions, organizations must clearly and conspicuously disclose that a consumer is communicating with a bot.

“Legal and compliance leaders need to ensure their organization’s ChatGPT use complies with all relevant regulations and laws, and appropriate disclosures have been made to customers,” said Friedmann.

The use of AI in the workplace is just getting started and is likely to balloon in the coming years. As these apps evolve and employees begin to use them in new and surprising ways, new risks are certain to emerge. Legal, risk, audit, and compliance professionals need to stay on top of these emerging risks and assess them to ensure they don’t cause negative consequences to the organization.  Internal audit end slug


Joseph McCafferty is editor & publisher of Compliance Chief 360°

One Reply to “Six Risks from ChatGPT that Internal Audit Should Know About”

  1. Thank you for a great overview of risks of using ChatGPT. Benefits are promising so many teams may dive right in and begin using ChatGPT without proper preparation.. Internal auditors should ensure their organizations establish a program that addresses these risks before they begin using the tool..

Leave a Reply

Your email address will not be published. Required fields are marked *