
Italy suspends ChatGPT; UK issues reminder of privacy requirements

The Information Commissioner’s Office (ICO) has issued a press release reminding organisations using or developing generative AI or large language models (LLMs) of the core principles of data protection legislation with which they will need to comply if they are to use these techniques lawfully.

This is a much softer approach than that of the Italian supervisory authority, which has called for a temporary suspension of ChatGPT whilst the regulator considers the privacy risks and harms. Many academics feel the same way as the Italian regulator, having published an open letter (which now has 5,000 signatories) calling for a moratorium on AI until its full consequences are understood and appropriate protections are put in place.

The ICO’s softly-softly approach is in keeping both with its historic approach (compared with its European counterparts) and with the UK government’s approach to regulating AI (or not regulating it, as the latest white paper concludes that the UK will rely on a set of non-statutory principles), in contrast to the proposed European legislation. Potentially the ICO recognises that AI is multi-faceted and that no single regulator should be slamming on the brakes alone; instead, the ICO is looking to groups such as the Digital Regulation Cooperation Forum (DRCF) to provide more holistic guidance. The DRCF has recently published its report on ‘Transparency in the procurement of algorithmic systems’, and the use of AI and LLM systems would be ideal for this kind of joint regulatory scrutiny and analysis.

Jocelyn is a technology and data lawyer, interested in anything connected to those two topics in non-contentious matters. Her areas of expertise cover IT agreements, data protection, data centres and telecommunications.
