New EDPB guidance narrows lawful basis for AI profiling under GDPR
From a regulatory standpoint, the European Data Protection Board (EDPB) published updated guidance in late 2025 clarifying how the GDPR applies to AI profiling and automated decision‑making. The document tightens the acceptable legal bases for many AI‑driven profiling activities. It also demands specific transparency measures and heightened data protection safeguards.
Who issued the guidance and why it matters
The guidance was issued by the EDPB, which coordinates data protection supervision across EU member states. The update responds to the rapid adoption of complex AI systems by public and private actors. The Board makes clear that existing justifications, such as broad contractual clauses and expansive legitimate-interest claims, are often insufficient for profiling that produces legal or similarly significant effects.
What the guidance says: key points
The guidance reiterates principles from the Court of Justice of the European Union and reflects positions of national authorities, including the Garante. Key takeaways include:
- Stricter lawful bases: Several profiling operations previously justified under legitimate interests or general contractual necessity may now require explicit consent or another clear legal basis when they affect individuals significantly.
- Enhanced transparency: Data controllers must provide intelligible explanations about profiling logic, data sources, and potential impacts. High-level descriptions are not sufficient for complex models.
- Greater data minimisation and purpose limitation: Controllers must limit inputs and avoid repurposing datasets collected for unrelated aims without reassessing lawful basis.
- Risk‑based safeguards: Where profiling creates high risks, controllers must implement stricter technical and organisational measures, including robust impact assessments and testing.
Interpretation and practical implications
From a regulatory standpoint, this guidance narrows operational freedom for organisations that deploy profiling at scale. The compliance risk is real: companies cannot rely on generic contractual terms to justify intrusive profiling. The guidance makes clear that transparency obligations require concrete explanations that enable individuals to understand likely consequences.
Immediate steps organisations should consider
Organisations should reassess profiling pipelines against the guidance. Conduct targeted data protection impact assessments. Review and, where necessary, change the lawful basis for processing. Strengthen documentation of decision logic and adopt technical mitigations such as differential privacy, model explainability tools and human‑in‑the‑loop controls.
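As an illustration of one such technical mitigation, a count query over personal data can be protected with Laplace noise, the basic differential privacy mechanism. This is a minimal sketch: the `dp_count` helper and the default epsilon are illustrative assumptions, not anything prescribed by the guidance.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a count with Laplace noise added -- the basic differential
    privacy mechanism. The default epsilon is an arbitrary illustrative
    choice; real deployments set it via a privacy budget policy."""
    scale = 1.0 / epsilon  # sensitivity of a counting query is 1
    # Sample Laplace(0, scale) as the difference of two exponential draws
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the released figure is useful in aggregate while limiting what can be inferred about any single individual.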
What comes next
National supervisory authorities are expected to align enforcement activity with the EDPB guidance. Companies that do not adapt their lawful‑basis assessments and safeguards face increased regulatory scrutiny and potential corrective measures under the GDPR.
2. Interpretation and practical implications
From a regulatory standpoint, the guidance signals a narrowing of acceptable lawful bases for profiling that produces significant effects. It makes clear that routine reliance on legitimate interest is insufficient when outcomes alter individuals’ legal position or impose material disadvantages.
Interpretation
The guidance treats high‑impact profiling as qualitatively different from ordinary data processing. Controllers must show a specific legal basis or obtain explicit consent when decisions carry legal or similarly significant consequences. Transparency obligations extend beyond general privacy notices. Controllers must explain the processing logic and expected impacts in plain language.
Practical implications for controllers
Companies must reassess their lawful‑basis decisions. Start by mapping profiling activities and flagging those that affect rights or access to services. Conduct DPIAs where risks are substantial. Implement technical measures such as explainability tools and human review layers. Ensure contractual and governance controls across suppliers.
What organisations should do now
Compliance risk is real: update policies, train staff and document decisions. Adopt a documented decision framework that links profiling use cases to lawful bases. Enhance user notices with clear descriptions of profiling logic and likely outcomes. Build audit trails for oversight and regulators.
Risks and enforcement
Regulators may issue corrective orders, impose fines or require suspension of processing where safeguards are inadequate. Administrative and reputational consequences are likely if profiling lacks a solid lawful basis or fails transparency tests.
Best practices
Prioritise DPIAs for any profiling with legal or significant effects. Use human‑in‑the‑loop reviews for automated decisions. Provide concise, intelligible explanations to data subjects. Maintain records demonstrating why a chosen lawful basis applies and what mitigations were implemented.
The next section examines concrete compliance steps and model clauses companies can deploy to align profiling practices with the guidance.
From a regulatory standpoint, the guidance narrows the room for manoeuvre for numerous AI uses in marketing, credit scoring, recruitment and insurance. Profiling that produces legal or similarly significant effects will face stricter scrutiny, and the compliance risk is real: authorities will expect demonstrable accountability and evidence that less intrusive alternatives were considered.
Practically, companies must reassess AI systems that profile individuals to determine whether processing creates legal or similarly significant effects. Where it does, controllers must obtain valid consent—freely given, specific, informed and unambiguous—or identify a clearly applicable statutory basis. Relying on legitimate interest will be difficult when decisions affect access to services, pricing, employment opportunities or credit.
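The cumulative test for valid consent can be captured in a one-line check. This is a trivial sketch of the four conditions in GDPR Article 4(11); the function name is our own.

```python
def consent_is_valid(freely_given: bool, specific: bool,
                     informed: bool, unambiguous: bool) -> bool:
    """GDPR Art. 4(11): consent must be freely given, specific,
    informed and unambiguous -- all four conditions cumulatively."""
    return all([freely_given, specific, informed, unambiguous])
```

The point of the cumulative test is that failure of any one condition, for example consent bundled into general terms (not freely given), invalidates the basis entirely.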
3. What companies must do now
Start by mapping where AI-driven profiling is used and the decisions it supports. Record each decision point, the data sources, and the potential impacts on individuals. From a regulatory standpoint, this inventory is the foundation of accountability.
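As a rough sketch, that inventory could be held as structured records that flag high-impact systems for priority review. The `ProfilingRecord` schema and the system names below are hypothetical, not taken from the guidance.

```python
from dataclasses import dataclass

@dataclass
class ProfilingRecord:
    """One entry in a profiling inventory (hypothetical schema)."""
    system: str
    decision_point: str        # the decision the system supports
    data_sources: list
    significant_effect: bool   # legal or similarly significant effect?

    def impact(self) -> str:
        # High impact when outcomes alter rights or access to services.
        return "high" if self.significant_effect else "low"

inventory = [
    ProfilingRecord("credit-scorer", "loan approval",
                    ["bureau data", "transaction history"], True),
    ProfilingRecord("newsletter-segmenter", "content selection",
                    ["click logs"], False),
]

# Systems flagged for priority reassessment under the guidance
high_risk = [r.system for r in inventory if r.impact() == "high"]
```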
Carry out targeted assessments of systems that may trigger legal or similarly significant effects. Such assessments must show why less intrusive options were rejected; document the selection criteria and comparative testing results.
Review and, where needed, update lawful bases for processing. Where consent is required, implement mechanisms that meet the standard for validity. Where a statutory basis exists, record the legal provision relied upon and why it applies.
Strengthen technical and organisational measures. Apply accuracy checks, human oversight, explainability measures and safeguards against discriminatory outcomes. Supervisory authorities will expect demonstrable controls and continuous monitoring.
Negotiate vendor contracts and include model clauses that allocate compliance responsibilities, audit rights and data protection obligations. Ensure records show who made key risk decisions and why.
Prepare transparency materials that explain profiling logic and its practical consequences in clear, accessible language. The guidance sets higher expectations for meaningful transparency where decisions affect individuals materially.
From a compliance programme perspective, integrate these steps into existing governance structures. Assign clear roles, set reporting lines and schedule periodic reviews. The risk of regulatory intervention increases with opaque systems and poor documentation.
Even companies that act now should expect closer supervisory engagement and more rigorous enforcement where profiling drives important outcomes for individuals.
From a regulatory standpoint, proactive remediation is essential. Companies should follow a structured checklist to reduce legal and operational risk.
- Map all AI systems that perform profiling or automated decision‑making and classify them by impact (low/medium/high).
- Run or update DPIAs for high‑risk systems, documenting identified risks and mitigation measures.
- Review legal bases: where processing produces significant effects, rely on explicit consent or ensure a clear statutory basis under applicable law.
- Enhance transparency: publish concise, understandable explanations of profiling logic, data sources and likely consequences; provide simple opt‑out routes and accessible human review mechanisms.
- Adopt technical safeguards: maintain model documentation, implement explainability features, enable comprehensive logging, and monitor models for bias and performance drift.
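The triage logic behind the first three checklist items can be sketched as a simple tiering function. The three-tier scheme and the mapped actions are our own illustrative assumptions, not an official EDPB scale.

```python
def classify(legal_effect: bool, significant_effect: bool,
             special_category_data: bool) -> str:
    """Illustrative impact tiering (low/medium/high) for the checklist."""
    if legal_effect or significant_effect:
        return "high"
    if special_category_data:
        return "medium"
    return "low"

def required_actions(tier: str) -> list:
    """Map a tier to checklist actions; the mapping is an assumption."""
    actions = ["record in inventory"]
    if tier in ("medium", "high"):
        actions.append("run or update DPIA")
    if tier == "high":
        actions += ["verify explicit consent or statutory basis",
                    "enable human review and opt-out"]
    return actions
```

Encoding the triage as code is useful mainly for audit: the same rule applies to every system in the inventory, and the rule itself is versioned and reviewable.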
4. Risks and potential sanctions
From a regulatory standpoint, authorities view failures in these areas as compliance shortfalls: inadequate DPIAs, opaque profiling and missing safeguards attract intensified scrutiny.
Compliance risk is real: supervisory bodies may impose corrective orders, audit measures, temporary restrictions on processing, or bans on specific systems. Administrative fines under data protection regimes can be substantial where obligations are breached.
Practical implications for companies are concrete. Firms should expect mandatory remediation plans, enhanced reporting obligations and potential on‑site inspections by regulators. Contractual and compliance frameworks must cover third‑party models and suppliers.
What should companies do now? Document governance decisions, update risk registers, assign clear accountability and ensure board‑level oversight of AI uses that affect rights and opportunities.
From a liability perspective, regulators may coordinate with sectoral supervisors where profiling affects credit, employment or insurance. Civil claims and reputational damage are additional risks if affected individuals suffer harm.
Best practices include regular independent audits, routine post‑deployment monitoring, user‑facing explanations in plain language and clear mechanisms for human intervention. Where appropriate, embed privacy‑enhancing technologies and robust vendor due diligence.
Incremental enforcement is likely: expect phased interventions beginning with orders to remedy, followed by fines or processing restrictions if non‑compliance persists. Companies that document and act on risks will reduce exposure and demonstrate good faith to supervisors.
5. Best practices for ongoing compliance
From a regulatory standpoint, supervisory scrutiny will focus on systems that affect rights or opportunities. Regulators are prepared to use a full range of measures, from orders to suspend processing to algorithmic remedies, and prioritisation will favour cases with clear societal impact, including discriminatory outcomes in hiring, credit and insurance.
Adopt a pragmatic, risk‑based RegTech strategy that prioritises high‑impact processes. Practical steps include:
1. Map purposes and decision flows
Document where profiling occurs, the data sources used and the decision points that affect individuals. Clear maps speed audits and remediation.
2. Run risk assessments and impact analyses
Conduct data protection impact assessments for high‑risk profiling. Update assessments when models, inputs or use cases change.
3. Review models for fairness and explainability
Test algorithms for disparate outcomes across protected groups. Implement explainability measures suitable for affected stakeholders.
4. Enhance governance and accountability
Assign ownership for models and data flows. Ensure legal, technical and operational teams meet regularly to review emerging risks.
5. Implement monitoring and incident playbooks
Set continuous monitoring for accuracy, bias and drift. Maintain documented response plans for regulator inquiries, subject access requests and breaches.
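The monitoring in step 5 can be sketched with simple threshold checks. The 0.05 drift tolerance and the 0.8 selection-rate cut-off (a commonly used "four-fifths" heuristic) are illustrative assumptions, not figures from the guidance.

```python
def drift_exceeded(baseline_error: float, current_error: float,
                   tolerance: float = 0.05) -> bool:
    """Flag accuracy degradation beyond an agreed tolerance
    (the 0.05 default is an illustrative choice)."""
    return (current_error - baseline_error) > tolerance

def selection_rate_ratio(group_a: float, group_b: float) -> float:
    """Ratio of positive-outcome rates between two groups; values well
    below 1.0 suggest disparate outcomes worth a fairness review."""
    return group_a / group_b

alerts = []
if drift_exceeded(baseline_error=0.08, current_error=0.15):
    alerts.append("accuracy drift: trigger incident playbook")
if selection_rate_ratio(0.30, 0.60) < 0.8:  # four-fifths heuristic
    alerts.append("possible disparate outcome: escalate to governance review")
```

Each alert would feed the documented response plan rather than an ad-hoc fix, so the regulator-facing record shows when the issue was detected and what was done.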
Key governance and technical measures for AI privacy
From a regulatory standpoint, supervisory authorities focus on organisations that deploy systems affecting rights or opportunities, and treat oversight, documentation and demonstrable controls as central to lawful AI use.
Governance
Appoint a designated privacy lead and ensure active board oversight of AI initiatives. From a regulatory standpoint, lines of responsibility must be clear. Companies should embed privacy responsibilities into project governance and require regular reporting to senior management. The risk of insufficient oversight includes regulatory enforcement and reputational harm.
Documentation
Maintain current records of processing activities, DPIAs and decision trails for model changes. Traceable documentation supports accountability: keep versioned logs of model updates, the rationale for design choices and mitigation steps. Documentation reduces compliance risk and facilitates audits.
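One way to keep versioned decision trails tamper-evident is a hash-chained log, where each entry records the hash of its predecessor. The `log_model_change` helper below is a hypothetical sketch of that design, not a prescribed mechanism.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_change(trail: list, model: str, version: str,
                     rationale: str, mitigations: list) -> dict:
    """Append a hash-chained entry: altering any earlier record breaks
    the chain, so tampering is detectable on audit."""
    entry = {
        "model": model,
        "version": version,
        "rationale": rationale,
        "mitigations": mitigations,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": trail[-1]["hash"] if trail else "",
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    trail.append(entry)
    return entry

trail = []
log_model_change(trail, "credit-scorer", "2.1",
                 "retrained on refreshed bureau data",
                 ["fairness tests re-run", "DPIA updated"])
log_model_change(trail, "credit-scorer", "2.2",
                 "threshold recalibration", ["DPIA reviewed"])
```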
Transparency and individual rights
Provide clear notices, accessible access and objection mechanisms, and ensure meaningful human review for materially impactful decisions. From a regulatory standpoint, transparency is not optional. Implement standardised communication templates and staffed review processes so individuals can exercise their rights effectively. Failure to respect rights increases exposure to complaints and sanctions.
Technical controls
Adopt privacy‑by‑design, fairness testing and explainability tools throughout the development lifecycle. Technical measures must be proportionate and demonstrable: use automated tests, interpretability reports and explainability summaries for controllers and affected individuals, and log test results and corrective actions to show continuous improvement.
Third‑party management
Include GDPR compliance clauses in vendor contracts and audit AI suppliers on a regular basis. From a regulatory standpoint, responsibilities do not vanish when functions are outsourced. Require contractual guarantees, data processing terms and audit rights. Weak vendor controls increase joint liability and operational risk.
In short, organisations should document governance decisions, integrate technical controls and enforce contractual safeguards to reduce compliance risk and demonstrate good faith to supervisors.
What organisations must do next
From a regulatory standpoint, organisations should translate the EDPB guidance into a clear, auditable workplan. Prioritise systems that produce automated outcomes affecting rights or opportunities.
Carry out targeted impact assessments that map the decision flows, data inputs and scoring logic. Update legal bases where profiling relies on consent or legitimate interest. Strengthen notices to data subjects and document how explanations will be provided in practice.
Practical compliance steps
Embed technical controls such as explainability modules, fairness testing and logging to produce measurable evidence of safeguards. Require vendors to deliver technical attestations and access to model documentation.
Adopt concrete metrics for monitoring bias, error rates and adverse impacts. Set thresholds that trigger governance reviews and remediation. Keep detailed records of decisions, tests and corrective actions to show good-faith compliance.
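The metric-and-threshold idea above might be encoded as a simple gate that returns the breached metrics. The metric names and figures are illustrative assumptions a governance board would set, not values from the guidance.

```python
# Illustrative thresholds; real values come from the governance board.
THRESHOLDS = {
    "bias_ratio_min": 0.8,       # minimum acceptable selection-rate ratio
    "error_rate_max": 0.10,      # maximum tolerated error rate
    "adverse_impact_max": 0.05,  # maximum share of adverse outcomes
}

def breaches(metrics: dict) -> list:
    """Return the metrics that should trigger a governance review."""
    out = []
    if metrics["bias_ratio"] < THRESHOLDS["bias_ratio_min"]:
        out.append("bias_ratio")
    if metrics["error_rate"] > THRESHOLDS["error_rate_max"]:
        out.append("error_rate")
    if metrics["adverse_impact"] > THRESHOLDS["adverse_impact_max"]:
        out.append("adverse_impact")
    return out
```

A non-empty result feeds the remediation record, giving the detailed trail of decisions, tests and corrective actions the text calls for.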
Governance, contracts and training
Assign clear accountability within the organisation and integrate AI oversight into existing data protection governance. Update supplier contracts to include audit rights and breach notification timelines.
Train operational teams on model limits, data minimisation and subject rights. Ensure legal, technical and compliance units coordinate on responses to supervisory inquiries.
Interpretation and regulatory risk
Documentation and demonstrable safeguards will often determine supervisory outcomes. Compliance risk is real: regulators will assess both design choices and ongoing monitoring.
Non-compliance can lead to administrative sanctions and reputational harm. Focus on proportionate measures aligned with the system’s risk profile and the rights at stake.
What this means for companies
Practical implementation reduces exposure and preserves market trust. Companies should prioritise actionable steps: rigorous impact assessments, measurable controls, contract remedies and continuous oversight.
Sources: EDPB guidance (2025), CJEU case law, national authority statements including the Garante.


