The Information Commissioner’s Office (ICO), the United Kingdom’s independent authority established to uphold information rights in the public interest, has released beta guidance on managing artificial intelligence and data protection risks. The guidance, which the ICO calls a “toolkit,” contains risk statements to help organizations using AI to process personal data understand the risks to individuals’ information rights. It also suggests best practices and technical measures for managing those risks and demonstrating compliance with data protection laws.
The toolkit responds to calls for help from industry leaders dating back to 2019, when the use of AI began to rise along with the potential pitfalls of the new technology.
The toolkit builds on the alpha version the regulator launched in March 2021. Auditors can use the framework to help audit AI applications and ensure they comply with data protection legislation. During the beta stage, the ICO will test the toolkit on live examples of AI systems that process personal data.
“The toolkit reflects the auditing framework developed by our internal assurance and investigation teams,” Alister Pearson, senior policy officer for the ICO’s Technology and Innovation Service, said in a statement. “This framework gives us a clear methodology to audit AI applications and ensure they process personal data in compliance with the law. If your organization is using AI to process personal data, then by following this toolkit, you can have high assurance that you are complying with data protection legislation.”
The final version of the toolkit will be released in December 2021.