Artificial intelligence in healthcare often draws attention for its speed, efficiency, and capacity for innovation. But some of the most important questions are not about performance alone. They are about trust. Who is responsible when sensitive health data is used? What safeguards are needed? How do we encourage innovation without weakening protections for patients?
These questions are especially relevant in Europe right now.
On 12 March 2026, the European Data Protection Board and the European Data Protection Supervisor adopted a joint opinion on the proposed European Biotech Act. They supported the goal of improving Europe’s biotechnology and clinical trials environment, but warned that the handling of sensitive health and genetic data requires strong safeguards. Their recommendations included clearer responsibilities for data controllers, limits on long retention periods, tighter rules for further processing, stronger consistency with the AI Act, and explicit use of measures such as pseudonymisation where possible.
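To make that last safeguard concrete, here is a minimal sketch of what pseudonymisation can look like in practice. It is illustrative only: the field name patient_id and the record structure are hypothetical, and a real deployment would use governed key management rather than an in-memory dictionary.

```python
# Illustrative sketch of pseudonymisation: direct identifiers are replaced
# with random tokens, and the token-to-identity mapping ("key table") is
# kept separate from the research dataset. Field names are hypothetical.
import secrets

def pseudonymise(records: list[dict], id_field: str = "patient_id"):
    """Replace the identifier in each record with a random token.

    Returns (pseudonymised_records, key_table). The key table maps tokens
    back to real identifiers and must be stored separately, under stricter
    access controls, for the pseudonymisation to hold.
    """
    key_table: dict[str, str] = {}
    pseudonymised = []
    for record in records:
        token = secrets.token_hex(16)        # unguessable random token
        key_table[token] = record[id_field]  # re-identification key, kept apart
        clean = dict(record)
        clean[id_field] = token
        pseudonymised.append(clean)
    return pseudonymised, key_table

# The research team works only with the tokenised records;
# the key table stays with the data controller.
records = [{"patient_id": "NHS-12345", "hba1c": 6.8}]
tokenised, keys = pseudonymise(records)
```

The essential design choice is that the key table lives apart from the research data. Because re-identification remains possible through it, pseudonymised data is still personal data under the GDPR, which is why pseudonymisation counts as a safeguard rather than an exemption from data protection law.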
For many people, this may look like a specialised legal discussion. For medical students, it is something much more practical. It is a reminder that AI in health does not exist separately from ethics, data governance, and patient rights. It develops inside a wider framework of trust.
Clinical research is one area where this becomes very visible. Research increasingly relies on digital tools, large datasets, and advanced analytics. AI may help identify patterns, support trial design, or contribute to better prediction and stratification. At the same time, research environments often involve some of the most sensitive forms of personal data, including health and genetic information. That creates real responsibilities for everyone involved in the system.
The recent European discussion matters because it highlights a core principle: simplification and innovation should not come at the cost of protections. In healthcare, trust is not a barrier to progress. It is part of what makes progress possible.
This is also where medical education has to evolve. Future doctors may not become data protection specialists, but they do need enough understanding to practise responsibly in digital environments. They need to know why governance matters, how confidentiality and transparency affect patient trust, and why the quality of an AI tool cannot be judged only by whether it appears useful in the moment.
Medical students also need to understand that regulation is not separate from clinical practice. The European Commission notes that the AI Act’s provisions on AI literacy have applied since 2 February 2025, while broader obligations phase in on different dates depending on the type of system, with milestones continuing toward 2 August 2026 and beyond.
That makes the current moment especially important for medical education. Students are entering a profession in which questions about safety, explanation, documentation, human oversight, and data responsibility will become more visible, not less. They need support to engage with these topics early, before they encounter them only under pressure in practice.
AIMS speaks directly to this challenge. Its emphasis on effective and ethical AI, practical teaching resources, and applied learning helps connect technical change with professional responsibility. It recognises that future doctors need more than awareness of new tools. They need the confidence to question them, the judgement to use them appropriately, and the ethical grounding to protect patient interests.
Europe’s latest debate on clinical trial data is therefore about more than legislation. It is a signal of the kind of healthcare culture we need: innovative, but careful; ambitious, but accountable; digital, but still deeply human.
That is the kind of future medical education should be preparing for.