2025 Data law trends

5. Tougher enforcement is reshaping data and privacy compliance

By Rachael Annear, Robert Barton, Richard Bird, Davide Borelli, Mark Egeler, Nina Frant, Daniel Gold, Cat Greenwood-Smith, Tim Howard, Sam Kloosterboer, Joseph Mason, Giles Pratt     

IN BRIEF

The spotlight on AI risks is intensifying, and with it comes a surge in data-related regulatory enforcement worldwide. Regulators are not only using existing laws but are also advocating for greater powers to oversee AI development and deployment. In some regions, this includes calls for restrictions on AI-related processing. For organizations developing AI, it’s important to integrate compliance and risk management measures throughout the process. At the same time, attention should remain on existing enforcement risks around cyber issues, privacy practices, and consumer and competition laws.

As AI becomes ever more ubiquitous and powerful, regulators are rushing to manage and mitigate this potentially high-risk technology. Typically, this means relying on privacy laws, including the EU and UK GDPR, where the data processing includes personal data, but consumer protection legislation and antitrust laws are also being used to put guardrails around AI.

Examples of regulatory action in 2024 include:

  • Some regulators, including several EU data protection authorities (DPAs), are actively investigating AI companies for alleged breaches of the EU GDPR. The Italian DPA issued OpenAI with a formal notice for violations of provisions of the EU GDPR, having originally banned the use of ChatGPT in Italy until OpenAI complied with a set of interim measures.
  • In the UK, the Information Commissioner’s Office (ICO) has been investigating Snap’s ‘My AI’ chatbot but, in July 2024, agreed to close its investigation on the basis that Snap appropriately remedied its alleged breaches of the UK GDPR. However, the ICO noted that its investigation had led to Snap conducting a more thorough review of potential risks posed by the chatbot.
  • Some regulators, such as the South Korean Personal Information Protection Commission (PIPC) and the UK’s ICO, are aiming to mitigate AI risks through updating existing guidance and regulatory innovation. The ICO launched a consultation series in the first half of 2024 on the intersection of data protection and generative AI, focused on topics such as purpose limitation in the generative AI lifecycle, the accuracy of training data and model outputs and the allocation of controllership. We expect to see updates to ICO guidance in 2025 as a result. South Korea’s PIPC has emphasized regulatory sandboxes and introduced a ‘Prior Adequacy Review Mechanism,’ where it will work together with startups developing innovative AI models or services to ensure that sufficient privacy and data protection measures are embedded in the design of AI systems.

Data privacy regulators across the world are focused on AI. Businesses need to ensure that they are developing and deploying AI systems compliantly, including, where appropriate, by engaging closely with regulators as they do so.

Giles Pratt, Partner

  • In the US, the Federal Trade Commission (FTC) has increasingly brought investigations and enforcement actions related to AI. In July 2023, the FTC issued a civil investigative demand (CID) to OpenAI covering a range of topics, including public disclosures about AI products, the data used to train its models and measures taken to mitigate potential risks, including false statements about individuals. This follows a settlement with Rite Aid related to the company’s use of AI-based facial recognition technology. In addition, the agency recently announced a sweep of enforcement actions concerning AI-related misrepresentations. More CIDs can be expected in AI investigations, given the FTC’s November 2023 approval of a resolution making it easier for officials to issue CIDs.

The Italian DPA’s bold stance against OpenAI reflects the global shift toward stricter AI regulation. AI growth must be matched by strong commitments to data protection and regulatory engagement.

Davide Borelli, Counsel

Looking ahead to 2025, we expect privacy regulators to continue their focus on AI.

In the US, the FTC can be expected to ramp up its scrutiny of AI products and businesses. The FTC has publicly stated its interest in enforcement relating to advertising claims, AI product misuse to perpetrate fraud and scams, competition concerns, and copyright/IP and data privacy concerns with regard to training AI models. The FTC’s interest in investigating competition concerns has already resulted in the issuance of orders to five companies requiring them to provide information about recent investments and partnerships involving generative AI companies and cloud service providers. The agency has also announced an investigation into ‘surveillance pricing,’ the practice of using AI technology to categorize individuals based on their personal information in order to set pricing targets for goods or services.

As many companies increasingly become AI companies, they will need to ensure that they are developing and deploying AI systems safely and effectively.

Joseph Mason, Associate

In the UK and EU, we expect ongoing focus on AI products and services, particularly those deemed to be higher risk, and companies should expect a robust approach from regulators where infringements of the EU or UK GDPR are suspected. It remains to be seen how the plans to reform UK data laws announced by the newly elected UK government will impact data protection regulation as it relates to AI.

Working out how to approach AI enforcement is fast becoming a global priority, reflecting a collective commitment to harnessing the power of AI responsibly.

Rachael Annear, Partner

In the EU, there is increased regulatory focus on consistent enforcement of GDPR by DPAs in cross-border cases. Following its 2024-27 strategy, the European Data Protection Board (EDPB) aims to ‘reinforce a common enforcement culture and effective cooperation.’ This partly reflects the realization that data processing is an increasingly cross-border activity, and that greater collaboration between DPAs is therefore necessary. The EU is taking the following steps to improve data regulation across the EU:

  • Updates to the one-stop-shop (OSS) mechanism:
    • Despite being a cornerstone of the EU GDPR, the OSS mechanism has not fully met expectations, with delays in enforcement arising when the lead DPA was unable to reach a consensus with other DPAs. The European Commission has proposed a Regulation containing new procedural rules that aim to further harmonize enforcement and improve the efficiency of cross-border cases; the Regulation remains in the legislative pipeline. The EDPB and the European Data Protection Supervisor (EDPS) jointly issued an Opinion on this proposal, welcoming many aspects aimed at improving the handling of cross-border claims.
    • In a recent Opinion, the EDPB clarified that, in relation to the OSS:
      • a controller’s central administration can only be considered its ‘main establishment’ if it makes and implements the decisions on the purposes and means of the processing of personal data; and
      • the OSS mechanism is applicable only if one of the controller’s EU establishments makes and implements those decisions; without such an establishment, the OSS cannot be applied.
  • There is increased use of the ‘regulatory toolbox’ by EU DPAs and a rise in both the number and size of fines (following implementation of the EDPB Guidelines on the calculation of fines). In 2023 alone, DPAs collectively imposed fines totaling over €1.97bn across 1,690 cases. This trend is continuing in 2024 (eg a recent €290m fine imposed on Uber by the Dutch DPA), while regulators are increasingly using other regulatory powers such as enforcement orders.
  • Specific focus areas of EU DPAs include the use of tracking cookies (and ePrivacy in general), data trading (brokers), shadow banning and similar techniques, and the use of biometric data, including facial recognition.

Similarly, US regulators have interpreted their existing investigative authority in novel ways to address new data privacy issues.

  • The US Department of Justice (DOJ) continues to bring actions under its Civil Cyber-Fraud Initiative against federal contractors that fail to implement appropriate security controls required by government contracts, including one recent settlement of over $10m against consulting companies associated with New York State’s implementation of federal COVID-19 Emergency Rental Assistance programs.
  • The US Securities and Exchange Commission (SEC) has had mixed success in attempting to broaden an existing rule that requires companies to maintain sufficient accounting controls to apply in the data privacy and cybersecurity context. The agency recently secured a settlement of over $2m in part on the basis of this broader interpretation of the rule. But just one month later, a court dismissed similar claims in a separate lawsuit, holding that the rule did not provide the SEC with authority to regulate data privacy and security.
  • The FTC continues to investigate and (in coordination with the DOJ) sue for alleged infractions of federal law protecting children’s digital privacy. In August 2024, following an investigation, the DOJ sued TikTok and affiliates for allegedly failing to obtain parental consent before collecting children’s personal information, in violation of a federal statute.

While the UK’s ICO is continuing to take regulatory action for alleged data privacy infringements, it has suffered several recent adverse decisions.

  • In October 2023, Clearview AI successfully appealed against the ICO’s £7.55m fine and processing ban, with the court holding that the processing of UK data subjects’ photos by non-UK/EU criminal law enforcement and national security agencies was outside the material scope of both the EU and UK GDPRs.
  • In April 2024, a second instance court dismissed the ICO’s appeal against the first instance court’s 2023 judgment, which largely overturned the ICO’s 2020 enforcement action against Experian regarding its processing of user data for its marketing services.

In August 2024, the UK Government announced a proposed uplift to the annual data protection fees by 37 percent, in what could be seen as a recognition that the ICO may need additional resources to take as much regulatory action as it might wish.

In addition to regulatory enforcement in the EU, there is an increase in ‘private enforcement’ through class action litigation as EU case law on material and non-material damages further develops.

In the UK, opt-out mass claims alleging infringements of the UK GDPR have become much harder to bring since the Supreme Court’s 2021 judgment in Lloyd v Google. However, case law in this area is still embryonic and several funders and plaintiffs are testing this, including by using alternative collective redress mechanisms, such as the opt-in Group Litigation Order and the antitrust-specific ‘Collective Proceedings’ model.

Plaintiffs in the US continue to bring class action claims arising from data breaches. Questions remain about whether such claims give rise to standing to sue in federal court under recent US Supreme Court jurisprudence, but companies may face pressure to settle such claims rather than prolong litigation by disputing plaintiffs’ alleged injuries or damages. Earlier this year, Cash App and its parent company reached a $15m class settlement arising from data breaches that took place in 2021 and 2023, exposing customers’ personal information.

Looking ahead

As we look to 2025 and beyond, companies should brace for an intensified regulatory focus on data enforcement, particularly concerning the development and deployment of AI systems. Regulators have shown a readiness to take strong actions against suspected privacy law violations, including halting the launch of AI solutions or pausing ongoing AI development.

However, these regulatory measures also serve as valuable guidance for safe and effective AI deployment. To navigate this landscape, companies should:

  • Ensure they maintain comprehensive documentation, including detailed data protection impact assessments for high-risk processing.
  • Stay informed about the latest guidance from DPAs, such as the UK’s ICO and the EU’s EDPB.
  • Prioritize the integration of privacy protections into their AI systems from the outset of the development process.

Beyond AI, changes to the EU GDPR’s OSS mechanism are likely to facilitate more enforcement of cross-border processing within the EU. We also anticipate an uptick in global enforcement actions related to alleged breaches of privacy, cybersecurity, and consumer protection laws.
