
Artificial Intelligence in Pharmacovigilance: Eight Action Items for Life Sciences Companies


The CIOMS Draft Report offers guidance on integrating AI into pharmacovigilance, aligning with the EU AI Act and providing action items for life sciences companies. It emphasizes transparency, human oversight, and accountability in AI systems while inviting stakeholders to comment on the draft by June 6, 2025. This report could significantly shape global regulatory expectations around pharmacovigilance.

The Council for International Organizations of Medical Sciences (CIOMS) has made headlines with its Draft Report, detailing principles and best practices for integrating artificial intelligence (AI) into pharmacovigilance. This comes alongside the EU Artificial Intelligence Act, which aims to provide a global framework for AI systems in various sectors, including healthcare. With public comments open until June 6, 2025, stakeholders have a chance to influence the future of AI in pharmacovigilance.

The EU AI Act, which entered into force in 2024, marks a significant step as the first comprehensive legal framework for AI. It employs a risk-based approach that classifies AI systems into four categories, with many healthcare and pharmacovigilance applications considered high-risk. Under this classification, scrutiny is intense: transparency, risk management, and human oversight are all essential for protecting patient safety. Interestingly, what qualifies as high-risk in pharmacovigilance isn't black and white; it depends on the specific use case, making regulation a bit of a puzzle.

CIOMS' Draft Report specifically translates these high-level EU AI requirements into actionable practices for pharmacovigilance. It aligns with the European Medicines Agency's (EMA) call for responsible AI use across the medicinal product lifecycle. The report stresses the need for risk assessment and model performance documentation that adheres to good pharmacovigilance practices. The message? Organizations should harness AI in a way that's both compliant and ethically sound, all while keeping an eye on the regulatory landscape.

On the U.S. side of the Atlantic, the FDA has yet to finalize comprehensive guidelines on AI, but its January 2025 draft guidance lays out considerations for the use of AI to support regulatory decision-making for drug and biological products. The Draft Report echoes this, offering a useful reference point for companies navigating AI in pharmacovigilance as the FDA formulates its own approach. With the new presidential administration championing AI, there's a sense of urgency for organizations to get involved in these discussions.

So what does the Draft Report say life sciences companies should do? The first item is to translate the regulatory principles outlined in both the EU AI Act and FDA Guidance into practical actions within their pharmacovigilance processes. Companies are urged to take the guidelines and use cases offered by the Draft Report as roadmaps, crafting risk assessments tailored to specific operational needs. This includes having suitable human oversight in place to address the potential impacts of AI on patient safety.

Next, the report stresses operationalizing human oversight in pharmacovigilance tasks. It guides companies in structuring oversight through various models, including human-in-the-loop and human-on-the-loop. It's crucial to ensure human accountability aligns with both ethical expectations and regulatory requirements. The aim is to keep a finger on the pulse of AI systems while preparing for possible pitfalls down the line.
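To make the distinction between these two oversight models concrete, here is a minimal sketch in Python. It is purely illustrative: the class and function names, the confidence threshold, and the case-routing logic are assumptions for this example, not structures prescribed by the Draft Report. Under a human-in-the-loop model, the AI only proposes and a human decides every case; under a human-on-the-loop model, the AI acts autonomously while a human monitors and handles escalations.

```python
from dataclasses import dataclass

@dataclass
class AdverseEventCase:
    """Hypothetical AI triage output for one individual case safety report."""
    case_id: str
    model_label: str        # e.g. "serious" / "non-serious"
    model_confidence: float

def human_in_the_loop(case: AdverseEventCase, reviewer_decision: str) -> str:
    """The model only proposes; the human reviewer's decision is always final."""
    return reviewer_decision

def human_on_the_loop(case: AdverseEventCase, confidence_floor: float = 0.95):
    """The model decides autonomously above a confidence floor; low-confidence
    cases are escalated to a human reviewer (returned label is None)."""
    if case.model_confidence >= confidence_floor:
        return case.model_label, "auto-processed (human monitoring)"
    return None, "escalated to human reviewer"
```

The design point is accountability: in both models a human remains answerable for the outcome, but the in-the-loop variant places a person in the decision path itself, which regulators may expect for higher-impact pharmacovigilance tasks.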

The report also outlines essential steps for ensuring the validity and robustness of AI applications. This involves establishing reference standards and incorporating real-world pharmacovigilance data for model validation. Plus, keeping a constant eye on model performance to catch drift or new risks is paramount. With all the complexities involved in pharmacovigilance data, companies need to stay vigilant against biases that could skew outcomes.
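The ongoing-monitoring idea above can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions: a periodic sample of AI outputs is compared against a human-adjudicated reference standard, recall (sensitivity) is the chosen metric, and the drift tolerance is an arbitrary example value. None of these specifics come from the Draft Report.

```python
def recall(predictions: list, reference: list) -> float:
    """Fraction of true adverse events the model flagged (sensitivity)."""
    true_positives = sum(1 for p, r in zip(predictions, reference) if p and r)
    actual_positives = sum(1 for r in reference if r)
    return true_positives / actual_positives if actual_positives else 1.0

def check_for_drift(baseline_recall: float, current_recall: float,
                    tolerance: float = 0.05) -> bool:
    """Flag drift when recall falls more than `tolerance` below the
    validated baseline, triggering investigation and possible retraining."""
    return (baseline_recall - current_recall) > tolerance
```

In practice such a check would run on each monitoring cycle, with drift events documented and escalated through the quality system rather than silently absorbed.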

Transparency is another pillar highlighted in the Draft Report, aligned with the EU AI Act’s demand for clear documentation. Companies are guided on how to maintain transparency with stakeholders on the workings of their AI systems while ensuring that they keep regulators in the loop as well. This entails documenting everything from model architecture to performance evaluations, thus fortifying trust when it counts.

Addressing data privacy and compliance is a given in the Draft Report. Reinforcing frameworks like the EU General Data Protection Regulation is key, especially in light of new generative AI technologies. Moreover, organizations are urged to adopt stringent data handling practices to ensure the confidentiality of sensitive patient information. It’s all about preventing any potential for re-identification of data, which could seriously undermine public trust.
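One common building block for the data-handling practices described above is pseudonymization of direct identifiers before records enter an AI pipeline. The sketch below is a minimal, hypothetical example using a keyed hash from Python's standard library; the field choice and key-management approach are assumptions (in production the key would come from a managed secrets vault, and pseudonymization alone does not guarantee GDPR-grade de-identification).

```python
import hashlib
import hmac
import secrets

# Illustrative only: in practice the key lives in a managed vault,
# not in application code or memory for longer than necessary.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(patient_id: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.
    The same id and key always yield the same token, so records can
    still be linked without exposing the raw identifier."""
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()
```

A keyed hash (rather than a plain hash) matters here: without the key, an attacker cannot rebuild the token table from a list of known patient identifiers, which is one of the re-identification routes the report warns against.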

Non-discrimination is a significant theme within the report, emphasizing that both the EU and the FDA want AI systems that don’t perpetuate biases. The Draft Report offers recommendations for evaluating training datasets and addressing biases that could influence outcomes. In short, organizations must prioritize inclusiveness not just as a regulatory requirement but as an ethical imperative.
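A basic form of the dataset evaluation described above is a subgroup performance check: compute a metric per demographic subgroup and flag any group that trails the best-performing group by more than a set margin. The sketch below is a hypothetical illustration; the record layout, the use of accuracy as the metric, and the gap threshold are all assumptions for the example, not recommendations from the Draft Report.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, prediction, truth) tuples.
    Returns accuracy per subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(accuracies, max_gap=0.10):
    """Return subgroups whose accuracy trails the best group by > max_gap."""
    best = max(accuracies.values())
    return [g for g, acc in accuracies.items() if best - acc > max_gap]
```

Checks like this only surface disparities; deciding whether a flagged gap reflects bias in the training data, and how to remediate it, remains a human judgment the report's governance recommendations are meant to support.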

Finally, the report suggests establishing robust governance structures. These should include cross-functional teams, assigned responsibilities, and ongoing compliance checks. By documenting actions and managing changes proactively, organizations can ensure seamless communication with regulators and enhance accountability.

As the public consultation period for the Draft Report continues, stakeholders in life sciences have a golden opportunity to shape future guidelines. Engaging in this conversation grants them the chance to leave a mark on the regulatory landscape surrounding AI in pharmacovigilance. For companies in the U.S., involvement in these discussions could pave the way for a clearer regulatory pathway down the line.

This post reflects the current state of the Draft Report as of the posting date. Sidley Austin LLP holds no obligation to update this content or cover any future developments relating to it.

The CIOMS Draft Report is advancing the conversation on AI’s role in pharmacovigilance, bridging the regulatory demands of Europe and the U.S. for the life sciences industry. Companies are urged to adapt AI implementation strategies, embracing risk assessments, human oversight, transparency, and governance, all while participating in the crucial consultation process that could shape future regulations. This moment could redefine how AI integrates into patient safety practices.

Original Source: datamatters.sidley.com

Rajesh Choudhury is a renowned journalist who has spent over 18 years shaping public understanding through enlightening reporting. He grew up in a multicultural community in Toronto, Canada, and studied Journalism at the University of Toronto. Rajesh's career includes assignments in both domestic and international bureaus, where he has covered a variety of issues, earning accolades for his comprehensive investigative work and insightful analyses.
