Regulators Are Coming for AI. Here Is What the Fines Tell Us.

When Italy’s data protection authority, the Garante, levied a €15 million fine against OpenAI in December 2024, the reaction in most boardrooms was roughly what you’d expect. Risk matrices were updated. Compliance teams scheduled meetings. And at startups across Asia, founders largely moved on with their week.
That instinct is understandable. OpenAI is one of the most scrutinised companies in the world, and the penalty came after a prolonged investigation. What does that have to do with a 40-person SaaS company in Singapore?
Quite a lot, as it turns out.
The Garante’s case against OpenAI was not primarily about scale. It was about a specific and replicable failure: processing users’ personal data without adequate legal basis and without proper transparency when training AI models. Those are not enterprise-grade compliance problems. They are foundational ones, and they appear in products of every size.
Since GDPR came into force in 2018, European regulators have issued cumulative fines of approximately €5.88 billion. The 2024 total alone reached €1.2 billion. Regulators across the United States, China, and Southeast Asia have pursued parallel actions under their own frameworks. None of these cases required an AI-specific law to proceed. The tools already existed. Regulators have simply started using them more deliberately.
Enforcement at a Glance
The following cases represent some of the most significant AI-related enforcement actions issued to date, all under pre-existing law.
• Amazon: €746 million. Luxembourg regulators found its ad algorithm processed personal data without proper consent.
• Meta: €390 million. Ireland's DPC ruled that algorithmic profiling via Facebook and Instagram lacked a valid legal basis.
• LinkedIn: €310 million. Same authority, same issue: behavioural advertising without adequate consent.
• Clearview AI: over €100 million across seven actions. Scraped billions of facial images without consent to build a biometric database.
• Didi: ~USD 1.2 billion. China's CAC penalised excessive data collection, unclear disclosures, and unauthorised behavioural analysis.
• OpenAI: €15 million. Italy's Garante found ChatGPT trained on personal data without a lawful basis or proper transparency.
• Hello Digit: USD 2.7 million. CFPB fined the fintech startup for a savings algorithm that caused the very overdrafts the company guaranteed it would prevent.
• iTutorGroup: USD 365,000. EEOC settlement after hiring software auto-rejected over 200 candidates based on age.
• Berlin bank: €300,000. Fined for refusing to explain why its algorithm rejected a credit card applicant.
What the Biggest Fines Actually Tell Us
The conventional narrative around AI compliance tends to focus on what’s coming next — the EU AI Act, Singapore’s evolving framework, the proliferating state-level rules in the United States. These are worth tracking. But the enforcement record that already exists carries a more immediately useful signal.
Look at where the largest fines have landed. In 2021, Luxembourg’s data protection authority fined Amazon €746 million. The allegation was not that Amazon built something dangerous. The targeted advertising algorithm driving its recommendations processed personal data without proper consent. Behavioural advertising, automated at scale, without a valid legal basis under GDPR — that is a product decision, not a theoretical risk.
Meta absorbed €390 million in fines from Ireland’s Data Protection Commission in January 2023, split across Facebook and Instagram. The core finding was that Meta had shifted away from consent as its legal basis for processing personal data, instead framing the arrangement as contractually necessary. Regulators ruled that requiring users to accept new terms or lose access to the platform did not constitute freely given consent for algorithmic profiling. The enforcement was specifically about what the algorithm was doing with user data and whether the legal framework beneath it held up.
LinkedIn received a €310 million fine from the same Irish authority in October 2024, also for targeted advertising conducted without adequate legal basis.
And Clearview AI, perhaps the most instructive case in this space, has been fined by data protection authorities in Italy, France, the Netherlands, Greece, and the United Kingdom, with penalties accumulating to over €100 million across seven separate enforcement actions. Each targeted the same conduct: scraping facial images from the public internet to build a biometric database, without consent, without transparency, and without a lawful basis for processing special category data. The Dutch regulator is now actively exploring personal liability for Clearview’s directors.
The pattern across all of these cases is consistent. Enforcement is not being triggered by AI systems that are obviously dangerous in a science-fiction sense. It is being triggered by systems that are opaque, that rely on data they had no clear right to use, and that make consequential decisions about people without adequately explaining how.
The Smaller Cases That Deserve More Attention
The headline fines tend to involve companies large enough to survive them, at least financially. The cases that matter more for founders are the ones involving companies most people have never heard of.
In August 2022, the US Consumer Financial Protection Bureau fined Hello Digit USD 2.7 million. Hello Digit was a fintech startup offering an automated savings product. The algorithm was designed to move money into savings without triggering overdrafts. It caused overdrafts instead, while the company simultaneously guaranteed users it wouldn’t. The CFPB’s position was straightforward: representing that your AI product does something it does not is deceptive conduct. The fine was accompanied by an order to reimburse all the overdraft charges Hello Digit had previously refused to pay back. The legal hook was the Consumer Financial Protection Act, a statute with no AI-specific provisions.
A year later, in August 2023, the US Equal Employment Opportunity Commission settled a lawsuit against iTutorGroup for USD 365,000. The company’s hiring software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. The system disqualified over 200 candidates without any human review. One rejected applicant re-submitted an otherwise identical application using a later birth date and received an interview. The EEOC’s case rested on the Age Discrimination in Employment Act, which has been law since 1967.
Both of these companies were using automation to handle routine functions — savings management and hiring — in ways that either made false promises or embedded illegal discrimination directly into decision logic. Neither required an AI-specific regulation to produce a significant legal consequence.
China, Personal Liability, and a Precedent Worth Reading Carefully
China’s enforcement action against Didi in 2022 remains the largest AI-adjacent penalty on record. The Cyberspace Administration of China fined the ride-hailing company approximately USD 1.2 billion after finding that it had collected data from users’ phone albums without clear purpose, gathered excessive facial recognition information, failed to communicate clearly about data processing practices, and analysed passenger travel behaviour without authorisation.
The scale of the fine is notable. But the element that deserves more attention in a global context is this: Didi’s chief executive and its president were each personally fined 1 million yuan on top of the corporate penalty. Personal accountability was written directly into the enforcement action.
That approach is spreading. The Dutch Data Protection Authority is actively investigating personal liability for Clearview AI’s directors following that company’s €30.5 million fine in September 2024. DLA Piper’s 2024 GDPR enforcement survey explicitly flagged management personal liability as an emerging priority for European regulators. In Singapore, the Personal Data Protection Act carries criminal liability provisions for individuals who knowingly or recklessly disclose personal data, gain unauthorised access to data systems, or use personal data for illegal purposes, with penalties including fines and imprisonment.
For founders, this changes the calculation in one specific and important way. The corporate structure that ordinarily separates personal exposure from regulatory risk is less reliable in data protection enforcement than in most other areas of law. Who inside the company knew about a data practice, when they knew it, and what decisions they made can become legally relevant in ways that rarely apply elsewhere.
The Berlin Bank Case and Why It Matters
One enforcement action rarely makes the headlines, but it is highly relevant for founders building AI-driven products and deserves a closer look.
In May 2023, the Berlin Commissioner for Data Protection and Freedom of Information fined a Berlin-based bank €300,000 for rejecting a customer’s credit card application through an automated system and then refusing to explain why. The customer had a good credit rating and a consistently high income. He complained to the regulator after the bank declined to tell him what factors had driven the decision, even when directly asked.
The Berlin DPA found violations of Articles 5(1)(a), 15(1)(h), and 22(3) of GDPR, covering the principles of transparent data processing, the right to access information about automated decisions, and the specific obligations that apply when an automated system makes a decision that significantly affects a person.
The fine was not for making an automated decision. It was for making one without being able to explain it to the person it affected, and without giving that person a meaningful ability to challenge the outcome. For any company using AI to score, rank, approve, or reject users — in lending, hiring, insurance, or anywhere else — this case defines the minimum transparency standard regulators expect.
What Is Happening in Southeast Asia Right Now
The cases above are useful context. But the regulatory environment in Southeast Asia is the immediate operational reality for most of the founders and companies LDU works with.
Vietnam passed the first standalone, legally binding AI law in the region. Law No. 134/2025/QH15, enacted by the National Assembly on 10 December 2025, took effect on 1 March 2026. It follows the EU AI Act’s risk-based framework, requiring conformity assessments, registration with a national AI database, and ongoing monitoring for high-risk AI systems. Maximum fines reach VND 2 billion (approximately USD 75,800) for organisations, with revenue-based penalties available for serious violations by large companies. Foreign providers whose AI systems affect Vietnamese users must appoint a local legal representative. Transition periods of 12 to 18 months are running from the effective date, with stricter deadlines for healthcare, education, and financial applications.
Singapore does not yet have a binding AI law, but the enforcement environment is tightening around existing frameworks. The Personal Data Protection Commission has grown notably more active. In May 2024 alone, the PDPC issued three enforcement decisions and accepted compliance undertakings from six additional organisations. In April 2025, a Singapore SaaS provider was fined SGD 17,500 after a breach exposed the personal data of approximately 689,000 individuals. The PDPC’s position was unambiguous: a technology company that holds personal data on behalf of its clients has independent obligations to protect that data. Pointing to clients as the data owners does not remove the platform’s compliance exposure.
Those fines are modest by European standards. What matters is the direction and the principle being established. Under Singapore’s enhanced PDPA penalty framework, which came into effect in October 2022, the PDPC can impose penalties of up to SGD 1 million or 10 percent of an organisation’s annual turnover in Singapore, whichever is higher. For a company at Series A or B, that range is not theoretical.
The Competition and Consumer Commission of Singapore announced in September 2024 that it is building tools to test AI systems for anticompetitive behaviour. These are active areas of regulatory development, not distant policy questions.
Three Compliance Gaps That Appear Repeatedly
In our work alongside startups and growth-stage companies across Asia, the same vulnerabilities surface in different products and sectors. They are worth naming directly.
• Data collected for one purpose is being used for another. GDPR calls this purpose limitation, and most modern data protection frameworks have equivalent concepts. The practical version of this problem is common: data gathered during onboarding gets fed into targeting, profiling, or recommendation systems the user was never told about. Each time an AI system applies existing data to a new function, the question of whether there is a lawful basis for that specific processing reopens. Most companies never ask it a second time.
• The algorithm makes consequential decisions and nobody can explain them. The Berlin bank case illustrates this precisely. The violation wasn’t that an algorithm rejected the application. It was that the company couldn’t explain why, even to the person directly affected. Any product that scores, ranks, or decides on users — in credit, employment, insurance, or content access — carries this exposure. Regulators in Europe and increasingly in Asia expect companies to be able to explain automated decisions to affected individuals on request.
• Special category data is being processed without full recognition of what that means. Biometric data, health information, data that can reveal racial or ethnic origin, and behavioural data from which sensitive characteristics can be inferred all carry stricter obligations under GDPR and equivalent frameworks. The Clearview cases are the obvious example, but the category is broader than most founders realise. AI tools operating in healthcare, hiring, insurance, or anywhere user demographics are relevant to model outputs may be touching special category data even when the inputs appear routine.
The Stakes Are Rising
Two years ago, regulatory enforcement around AI was largely concentrated in Europe and mostly affected large technology companies. That is no longer the case.
The EU AI Act is now in its enforcement phase. Its maximum penalty for the most serious violations reaches €35 million or 7 percent of global annual turnover, whichever is higher — exceeding GDPR’s upper thresholds. National competent authorities across every EU member state have investigative and penalty powers, coordinated through the European AI Board.
Vietnam’s AI law is live. Indonesia’s equivalent is in development. Other ASEAN members are watching closely. The regulatory floor across the region is rising, and it is doing so at a pace that has outrun the compliance timelines most early-stage companies have built into their roadmaps.
For any company that deploys AI in a customer-facing capacity, the practical question is not whether a specific AI law applies. The more useful question is whether the AI systems you are running can survive scrutiny under data protection, consumer protection, and employment frameworks that already exist. For most startups, answering that question honestly requires a conversation that has not happened yet.
Work With a Legal Team Built for This Era
LDU works with startups, scale-ups, and Web3 companies to build legal operations that combine AI-powered efficiency with experienced commercial judgment. Our team has been using legal technology and AI tools since our founding because we believe legal services should enable growth, not slow it down.
We help founders draft and review contracts, negotiate with enterprise customers and partners, structure commercial agreements, prepare for fundraising and due diligence, and build scalable legal processes that grow with the business.
If you're closing larger customers, entering strategic partnerships, preparing for investment, or simply want to understand how legal AI can help your business move faster, contact LDU for a free legal consultation. The legal industry is changing rapidly. We're here to help you take advantage of that change rather than get left behind.
👉 Book now or email us at hello@lduasia.com






