The EU AI Act bans social scoring because it can lead to discriminatory, unjust, and disproportionate treatment of individuals, often in contexts unrelated to the original purpose of the data. The ban is rooted in concerns about fundamental rights, privacy, fairness, and the risk of creating China‑style systems of social control.
Two key sources explain this in detail:
- Holla Legal & Tax’s breakdown of Article 5(1)(c)
- Osservatorio Labour’s analysis of Recital 31 and Article 5(1)(c)
Below is a structured explanation synthesizing those insights.
🚫 1. What the EU Means by “Social Scoring”
According to the EU AI Act, social scoring refers to AI systems that profile, evaluate, or classify individuals based on:
- Social behavior (e.g., payment habits, social media activity, workplace behavior)
- Personal characteristics (e.g., education level, financial situation, personality traits)
These systems generate a “score” that can influence access to services, opportunities, or rights.
Public debate around the Act frequently cites China's social credit system as the kind of scoring the ban seeks to prevent, although the Act itself does not name China.
⚖️ 2. Why Social Scoring Is Considered Harmful
- It Can Lead to Discriminatory or Unfair Treatment
Social scoring can result in people being treated worse than others based on opaque or biased algorithms.
This includes:
- Denial of housing
- Restricted access to public services
- Employment disadvantages
- Insurance or financial penalties
The EU considers this fundamentally incompatible with the right to equal treatment.
- It Violates Privacy and Data Protection Principles
Social scoring often relies on large‑scale data collection, including sensitive personal data.
The EU warns that this can create:
- Constant surveillance
- Profiling without consent
- Intrusive monitoring of everyday behavior
- It Enables Social Control
Social scoring can entrench government or corporate social control; China's social credit system is the most commonly cited example of the model the EU wants to keep out.
- It Can Punish People in Unrelated Contexts
A core reason for the ban is that social scoring can lead to unfavourable treatment in contexts unrelated to where the data was originally generated.
For example:
- A person’s social media behavior affecting their access to education
- A late bill payment affecting eligibility for public services
This is considered unjust and disproportionate.
📜 3. What the Law Actually Prohibits (Article 5(1)(c))
The EU AI Act bans AI systems that:
- Profile or classify individuals based on social behavior or personal traits, AND
- Lead to detrimental or unfavourable treatment that is:
- Unrelated to the original context of the data, OR
- Unjustified or disproportionate to the behavior in question.
Osservatorio Labour’s analysis confirms that both conditions must be met for the practice to be prohibited.
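The two-part test above can be sketched as a simple boolean check. This is an illustration only, not a legal tool: the dataclass fields and function name are invented here, and a real Article 5(1)(c) assessment involves far more nuance than four flags.

```python
from dataclasses import dataclass

@dataclass
class AIPractice:
    """Hypothetical description of an AI system's scoring practice."""
    profiles_behavior_or_traits: bool   # evaluates/classifies people (condition 1)
    causes_detrimental_treatment: bool  # leads to unfavourable treatment (condition 2)
    unrelated_context: bool             # treatment context differs from data context
    disproportionate: bool              # unjustified relative to the behaviour

def prohibited_under_art_5_1_c(p: AIPractice) -> bool:
    """Prohibited only if the scoring condition AND a harm condition both hold."""
    scoring = p.profiles_behavior_or_traits
    harmful = p.causes_detrimental_treatment and (
        p.unrelated_context or p.disproportionate
    )
    return scoring and harmful

# A cross-context "citizen score" that denies public services: caught by the ban.
citizen_score = AIPractice(True, True, True, False)
# A narrow, in-context performance review with no detrimental treatment: not caught.
performance_review = AIPractice(True, False, False, False)
```

The conjunction is the key design point: profiling alone is not enough, and neither is unfavourable treatment alone; the ban applies only when both conditions are satisfied together.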
🧩 4. Are All Forms of Social Scoring Banned?
Not entirely.
According to the EU Commission’s Guidelines on prohibited AI practices (C(2025) 884), only social scoring practices that meet the harmful criteria in Article 5(1)(c) are banned.
This means:
- Narrow, context‑specific evaluations (e.g., employee performance reviews) may be allowed.
- Broad, cross‑context scoring systems that affect rights or opportunities are prohibited.
The EU draws a line between legitimate assessment and systemic, discriminatory scoring.
🛡️ 5. Enforcement and Compliance
The Act’s prohibitions, including the social scoring ban, apply from 2 February 2025; Member States must designate their national supervisory authorities by 2 August 2025, when the Act’s governance and penalty provisions also take effect.
Organizations must ensure their AI systems:
- Do not generate cross‑context social scores
- Do not produce unjustified or disproportionate negative effects
- Do not replicate China‑style social credit mechanisms
Failure to comply can result in significant penalties under the AI Act.
📌 Summary
The EU banned social scoring because it:
- Threatens fundamental rights
- Enables discrimination
- Violates privacy
- Encourages social control
- Punishes people in unrelated contexts
- Creates disproportionate and unjust outcomes
The ban is targeted but firm: any AI system that evaluates people in ways that can harm them outside the original context of the data, or disproportionately to their behaviour, is prohibited.
Below is a structured list of AI uses likely banned under the EU AI Act’s prohibition on social scoring. These examples reflect the Act’s rule that AI systems may not profile or classify people based on social behaviour or personal traits in ways that lead to unfavourable treatment in unrelated contexts, or to unjustified or disproportionate harm.
🚫 Examples of AI Uses Likely Banned as Social Scoring Under the EU AI Act
- Cross‑Context Reputation Scores
AI systems that combine data from multiple unrelated areas of life to generate a “trustworthiness” or “reputation” score, such as:
- A “citizen score” combining social media behavior, shopping habits, and public‑service usage
- A “morality score” based on lifestyle choices, political activity, or personal associations
- A “community trust score” used by local governments to rank residents
These mirror the cross‑context scoring systems the ban is designed to prevent, including China‑style social credit mechanisms.
- AI That Restricts Access to Services Based on Behavioral Scores
Systems that deny or limit access to essential services based on broad behavioral scoring, such as:
- Denying public housing because of online behavior
- Restricting access to education based on neighborhood reputation data
- Limiting access to public benefits due to aggregated lifestyle metrics
These are classic examples of “unfavourable treatment in an unrelated context.”
- AI That Scores Individuals for Government Surveillance or Control
Any AI system that:
- Ranks citizens for compliance with government expectations
- Tracks “good” or “bad” behavior for public‑order purposes
- Generates scores used to determine eligibility for government programs
The EU considers such systems incompatible with fundamental rights.
- Workplace or Employment Social Scores
AI systems that:
- Combine personal life data with workplace metrics to determine promotions
- Penalize employees for off‑duty behavior (e.g., social media posts, lifestyle choices)
- Use personality predictions to limit job opportunities across sectors
These systems classify people based on personal traits in ways that can lead to disproportionate harm.
- Insurance or Financial Scores Based on Irrelevant Personal Traits
Examples include:
- Using social media activity to determine insurance premiums
- Using friendship networks to adjust creditworthiness
- Penalizing customers for “risky” lifestyle indicators unrelated to the service
The EU allows legitimate creditworthiness and insurance risk scoring, but not when based on irrelevant or cross‑context personal traits.
- AI That Scores Students for Non‑Educational Behavior
Systems that:
- Rank students based on family background, neighborhood, or online behavior
- Use predicted personality traits to determine access to advanced classes
- Penalize students for off‑campus conduct unrelated to school performance
These would be considered disproportionate and unrelated to the educational context.
- Retail or Commercial Loyalty Scores Used Outside Commerce
For example:
- A retail loyalty score used to determine access to unrelated public services
- A customer “reliability score” used by employers or landlords
- A shopper’s behavioral profile used to influence credit or insurance decisions
This is exactly the kind of cross‑context misuse the Act prohibits.
- Neighborhood or Community Risk Scores Applied to Individuals
AI systems that:
- Score individuals based on neighborhood crime rates
- Penalize residents for the behavior of others in their community
- Use aggregated community data to restrict access to services
These systems classify individuals based on group‑level traits, a form of treatment the Act regards as unjustified and disproportionate.
🧩 Why These Uses Are Banned
Across all examples, the EU’s logic is consistent:
- They involve profiling based on behavior or personal traits
- They lead to unfavourable treatment
- The treatment is unrelated, unjustified, or disproportionate
- They resemble systems of social control incompatible with EU values
Overview of China’s Social Credit System
China’s social credit system is a national framework for tracking and evaluating the trustworthiness of individuals, businesses, and government entities, built around blacklists, redlists, and data‑sharing across agencies. It is not a single unified “score,” but rather a collection of legal, financial, and administrative mechanisms used to reward compliance and penalize misconduct.
🕰️ Historical Development
- Early Roots (1980s–1990s): Financial Credit Gaps
The concept began as an effort to create a personal banking and financial credit rating system, especially for rural individuals and small businesses who lacked formal credit histories.
By the early 1990s, China was studying Western credit models like FICO, Equifax, and TransUnion to modernize its financial system.
- 2000s: Emergence of Social Credit Concepts
In the early 2000s, China began experimenting with broader “trustworthiness” systems inspired by commercial credit scoring abroad. Regional trials began in 2009, testing how to integrate financial, legal, and administrative data.
- 2014–2020: National Planning and Pilot Programs
The State Council’s 2014 Planning Outline marked the formal launch of the modern social credit system.
Key developments included:
- National pilots with eight credit‑scoring firms (2015)
- Expansion of blacklists managed by courts and regulatory agencies
- Local governments building their own rating systems
By 2023, most private scoring initiatives were shut down as the central government re‑centralized control.
- Deep Historical Antecedents
Scholars note that the system also draws on older Chinese governance traditions, including:
- Imperial personnel archives
- The Dang’an (personnel dossier) system under Communist rule
- A failed early‑2010s proposal to create “morality files” on citizens
These historical practices show continuity in China’s long‑standing use of record‑keeping for governance.
🧩 How the System Works Today
China’s social credit system is not a single national score, but a network of databases, blacklists, and administrative tools managed by agencies such as:
- The National Development and Reform Commission (NDRC)
- The People’s Bank of China (PBOC)
- The Supreme People’s Court (SPC)
Its functions include:
- Blacklists and Redlists
- Blacklists identify individuals or companies that violate laws or court judgments.
- Redlists (whitelists) reward entities with strong compliance records.
- Cross‑Agency Enforcement
Blacklisting can trigger penalties across multiple domains, such as:
- Travel restrictions
- Limits on luxury purchases
- Reduced access to government procurement
- Public naming and shaming
These penalties are primarily tied to court judgment debtors and regulatory violations.
- Local and Sector‑Specific Systems
According to research from IFRI, two main instruments have emerged:
- Local personal credit ratings
- Sector‑specific blacklists (tax, agriculture, courts, etc.)
These systems are interconnected through data‑sharing arrangements.
🎯 Purpose and Stated Goals
The Chinese government frames the system as a way to:
- Improve trust in society
- Strengthen legal compliance
- Combat fraud, food‑safety violations, and financial misconduct
- Standardize credit rating functions across the economy
📌 Summary
China’s social credit system evolved from financial credit reforms in the 1980s, expanded through pilot programs in the 2000s, and formalized with the 2014 national planning outline. It is best understood as a governance infrastructure built around blacklists, administrative penalties, and data‑sharing—not a single Orwellian “score.” Its historical roots stretch back to imperial and Communist‑era record‑keeping systems, reflecting a long tradition of state‑managed dossiers.
