The CFM's deontological standard requires that the training of AI models in healthcare comply with the protection of personal data. The LGPD, however, offers no clear legal basis for this use. What healthtechs and developers need to do before August 2026.
1. The problem in one sentence
Most healthtechs and health AI developers operating in Brazil use patient data to train, validate or improve their models. CFM Resolution 2.454/2026 requires that this use complies with the general protection of personal data. The LGPD, which is the applicable data protection law, has no explicit and sufficiently clear legal basis for this purpose when it comes to sensitive health data. This is the regulatory gap that the sector needs to address now.
This is not an abstract loophole. It's about concrete regulatory exposure: an organization that uses patient data to train AI without a proper legal basis is subject to sanctions from the ANPD, civil liability to data subjects and, now, deontological liability for the doctors who employ its systems.
2. What CFM Resolution 2.454/2026 determines
Art. 6, §2 of the resolution is straightforward: ‘the use of personal data for training, validation or improvement of AI models, systems and applications must comply with ethical and scientific principles and general protection of personal data’.
Two elements of this wording deserve technical attention. Firstly, the resolution uses the expression ‘general protection of personal data’ - not ‘LGPD’. This choice was deliberate: it signals openness to specific sectoral regulations that may arise, without binding the normative text to the literality of a law whose suitability to the AI context is still under construction.
Secondly, the rule creates a duty of conformity that falls on the doctor (who cannot use systems that do not comply with data protection rules - art. 4, IV) and, by extension, on the medical institution that hires or develops these systems (art. 14). The platform developer is not the direct addressee of the ethical standard - but it is the technical agent that must ensure that the product delivered to the medical client is legally usable.
In practical terms: if a healthtech's AI system has been trained with patient data without a proper legal basis, the hospital using it will be in breach of the resolution. And the contract between them will define who is liable - or whether both are liable.
3. What the LGPD says about health data
The LGPD classifies health data as sensitive personal data (art. 5, II). This classification has a direct consequence: the processing of sensitive data requires a qualified legal basis, listed exhaustively in art. 11.
For the direct clinical and care context - diagnosis, treatment, management of the patient's own care - the legal bases are relatively accessible: health protection (art. 11, II, f), the data subject's vital interest (art. 11, II, e) and, for public health operations, the basis of art. 11, II, g.
The problem arises when health data is used for a purpose distinct from direct patient care - specifically, the training of AI models. This is a new purpose, not automatically covered by the legal bases of the original clinical processing. Brazilian data protection doctrine has not yet reached a consensus on which legal basis supports this use.
The options under consideration are:
■ Specific consent (art. 11, I, LGPD): the safest legal basis, but operationally complex. It requires that the patient has unequivocally consented to the use of their data specifically for AI training - not just for their medical treatment. Generic consent forms do not meet this requirement.
■ Scientific research (art. 11, II, c, LGPD): possible for projects structured as research, with approval by a Research Ethics Committee (CEP/CONEP) and the adoption of anonymization whenever possible. It does not cover the ongoing commercial development of AI products.
■ Legitimate interest (art. 10 LGPD): the LGPD expressly prohibits processing sensitive data on the basis of legitimate interest alone. For health data, this basis does not apply.
■ Prior anonymization: genuinely anonymized data is not personal data and falls outside the scope of the LGPD. However, the anonymization of medical data is technically challenging - the ANPD has already signaled that strict criteria must be applied for the process to be considered irreversible.
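The difficulty of anonymizing medical data can be illustrated with a simple quasi-identifier check: even after direct identifiers are removed, a combination of attributes (age band, sex, ZIP prefix) that is unique in the dataset may still single out a person. A minimal, hypothetical sketch - the field names and records are illustrative, and this check is not a substitute for the ANPD's irreversibility criteria:

```python
from collections import Counter

# Hypothetical patient records with direct identifiers already removed.
# Quasi-identifiers (age band, sex, ZIP prefix) can still single out a person.
records = [
    {"age_band": "40-49", "sex": "F", "zip_prefix": "01310"},
    {"age_band": "40-49", "sex": "F", "zip_prefix": "01310"},
    {"age_band": "70-79", "sex": "M", "zip_prefix": "04538"},  # unique combination
]

def k_anonymity(rows, quasi_ids):
    """Smallest group size over all quasi-identifier combinations (the k of k-anonymity)."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

k = k_anonymity(records, ["age_band", "sex", "zip_prefix"])
print(k)  # 1 -> at least one record is unique and potentially re-identifiable
```

A k of 1 means at least one record is unique on those attributes; robust protocols combine this kind of measurement with generalization or suppression and with documentation of the residual risk.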
4. The hypothetical scenario: healthtech that hasn't mapped out its legal bases
A healthtech is developing an AI system for diagnostic support in cardiology. Since 2023, it has been using data from patient records at partner hospitals to train and refine its model. The contracts with the hospitals provide for data sharing ‘for the purpose of improving services’. Patients signed standard terms of service - with no specific mention of the use of their data for AI training.
With CFM Resolution 2.454/2026 in force, partner hospitals will be in breach of the deontological standard by maintaining contracts that do not guarantee compliance with the protection of personal data (art. 4, IV and art. 14). Doctors who use the system will be held accountable by the CRM.
For healthtech, the scenario is one of multiple exposures: termination of contracts with hospitals seeking to comply; action by the ANPD for processing sensitive data without a legal basis; civil liability towards data subjects; and, depending on the contractual structure, litigation with the partner hospitals themselves.
The question is not whether the data was processed with technical security. The question is whether there was legal authorization for that specific purpose. Security of information and lawfulness of processing are independent requirements - and both need to be present.
5. What PL 2338/2023 adds
The Brazilian AI Legal Framework, currently before the Senate, partially addresses this gap by requiring an algorithmic impact assessment for high-risk AI systems - a category into which most diagnostic support systems fall. This assessment would supplement the LGPD regime, not replace it.
The bill also provides for the possibility of specific legal bases for health research and innovation, but the current wording is still generic and does not solve the problem of continuous training of commercial models with patient data. Until the framework is approved, the applicable regime is that of the LGPD, with all its limitations for this use case.
6. Risk matrix by business model
The degree of regulatory exposure varies with the healthtech's business model. The main profiles:
■ Centralized data model with third-party medical records: maximum exposure. Sensitive patient data from multiple hospitals, processed for a purpose (AI training) other than the original clinical care. Requires specific consent or complete restructuring onto the scientific research basis with CEP approval.
■ Federated model with anonymized data: reduced exposure, as long as the anonymization is technically robust and auditable. Requires a formal anonymization protocol, with documentation demonstrating the irreversibility of the process.
■ Model with the institution's own proprietary data: intermediate exposure. The legal basis of health protection or scientific research may suffice, as long as the use is documented and aligned with the purpose informed to patients.
■ Reinforcement learning model with synthetic data: minimal exposure under the LGPD, provided the synthetic data is genuinely unlinked to real data subjects. Requires documentation of the data generation and validation process.
7. Practical recommendations for healthtechs
■ Audit the origin and legal basis of all the data used to train the models. For each set of data, identify: (i) which legal basis of the LGPD was used; (ii) whether the purpose informed to the data subject covered AI training; (iii) whether there is documentation proving compliance.
■ Review contracts with partner hospitals and clinics. Include clauses that precisely describe the purposes of the data processing, the applicable legal bases, the responsibilities of each party and the rights of the data subjects. Generic ‘service improvement’ clauses do not meet the LGPD standard for sensitive data.
■ Assess the technical feasibility of anonymization. For models that can operate with anonymized data, implement formal anonymization protocols - with auditable documentation that demonstrates the irreversibility of the process.
■ Structure research projects with ethical approval to rely on the scientific research basis. For cases in which retroactive consent is unfeasible and anonymization is technically impossible, the scientific research basis (art. 11, II, c, LGPD) can be an alternative - provided the project is approved by CEP/CONEP.
■ Document the Algorithmic Impact Assessment (AIA) and the Data Protection Impact Report (DPIR). Both instruments are required for high-risk AI systems in healthcare - the first by the CFM Resolution (Annex I, XIII), the second by the LGPD (art. 38). They are not substitutes: they are complementary.
■ Prepare the technical documentation required by the CFM Resolution for the systems supplied to medical clients. Hospitals and doctors using AI systems will need to demonstrate to the CRM that the systems they employ meet the requirements of the standard. Healthtechs that anticipate this demand will be in a superior competitive position.
8. The question every healthtech needs to answer
There is a strategic question that structures all of the above analysis: what is the legal basis that supports the use of patient data in the models that your company operates?
If the answer is ‘consent’, it is necessary to verify that this consent was obtained specifically for AI training - and not just for medical treatment. If the answer is ‘anonymization’, it is necessary to demonstrate that the process is technically robust and auditable. If the answer is ‘we don't know’, the immediate priority is to map this exposure before August 2026.
CFM Resolution 2.454/2026 doesn't create the problem - it makes its lack of a solution indefensible.
Normative references
CFM Resolution No. 2.454, of February 11, 2026. DOU 27/02/2026.
Law No. 13.709/2018 - General Personal Data Protection Law (LGPD). Arts. 5, II; 7; 11; 38.
PL 2338/2023 - Legal Framework for Artificial Intelligence (currently before the Federal Senate).
ANPD. Technical Note on the Anonymization of Personal Data, 2023.
CFM Resolution No. 2.217/2018 - Code of Medical Ethics.
This article was prepared by the Regulation & Technology team at DMS Advogados and is for information purposes only. It does not constitute legal advice or establish a client relationship.