Written by Martin Holm-Petersen, Chief Strategy Officer
Preamble
This will be the first in a series of articles about the application of artificial intelligence (AI) and advanced automation in our industry, with a focus on the pharmacovigilance (PV) use-case. The intention of the series is to explain the current state of automation and the issues associated with AI, as well as to dive into solutions that are available now and will become available in the future. Despite the author’s association with Qinecsa, a leading technology and services company within PV, the articles are written as objectively as possible, with Qinecsa’s position mentioned explicitly where relevant.
PV is slow on technology adoption – for good reasons
Much like other industries, AI-driven automation is slated to change life sciences profoundly over the coming years. PV is no different and could be considered highly suitable for it, as it is data-driven with well-defined, repetitive processes. The promise and potential of new technologies in the domain have been widely explored; it has been a leading topic of interest at PV conferences and other forums for the past decade. There is a clear desire across regulatory agencies and the industry to better manage high data volumes and increasing trial complexity, and to support demands for faster commercialization of medicinal products.
Yet, PV functions have been slow to adopt new technology for various reasons. The most prominent one is that PV is a highly regulated and risk-averse domain where human lives are at stake. Hence, changes to ways of working are always considered carefully. Central to the adoption challenge is the somewhat reactive culture of PV departments, as well as a literacy gap on AI, observed across healthcare.
History and challenges
PV is vital to understanding the risks of treatments, weighing risks against clinical benefits, and improving patient safety and care. Emerging in the wake of the 1961 thalidomide disaster, it allows the industry to determine whether a drug or treatment is safe and has a profile where benefits outweigh risks.
Despite this crucial role, PV has been slow to adapt to a changing environment. A 2024 survey of PV professionals indicated 66% of organizations were taking a reactive approach to adverse event (AE) review, and 18% were relying on manual or outdated methods. The same survey highlighted a very limited use (5%) of AI/machine learning.
A persistent challenge in adopting new approaches is the continuously increasing complexity of regulatory requirements, as well as the year-on-year double-digit increase in adverse event case volumes. Reactivity becomes a necessary habit for managing external impacts, leaving less time for innovative change. The traditional solution to managing increasing case volumes has been to engage with outsourcing vendors that reduce the cost per case through economies of scale in large offshore operations.
Other longstanding challenges are holding back PV functions. Parallel safety organizations with different functionality and overlapping structures create greater complexity. Inconsistent reporting structures across business units lead to complex resource negotiations, unclear accountabilities, and a lack of clear leadership.
Deploying the right person in the right job with the right level of experience can also be a challenge. Smaller biotech companies report losing PV talent to large pharmaceutical companies, and PV is seen as a high or medium priority for talent attraction and retention across the industry. Moreover, data scientists and technology graduates are often not attracted to the PV function, or are not even aware of its existence.
Aside from practical challenges, cultural hurdles must also be overcome. There can be a lack of understanding about the consequences of failing to get things right and the impact on patient safety. Resistance to change can also arise as we move away from long-established practices and embrace innovative solutions.
The Technology Solution – and its associated challenges
PV functions have, in some processes like adverse event case processing, been automated for many years using rule-based techniques. There has been steady adoption of digital reporting and harmonization using established data standards such as E2B. Integrations of different systems in the PV technology ecosystem have long been a focus, aiming to achieve automation and improve the quality of moving data across organizational boundaries and processes. Basic digitalization and automation options are not yet exhausted; more could be done, such as ensuring that reports are always digitally captured via websites and forms, directly from patients and healthcare professionals. Larger organizations have typically invested more in automation than smaller ones due to economies of scale. Since the emergence of Robotic Process Automation (RPA), some companies have also been able to automate additional repetitive tasks, such as extracting emails from mailboxes to create new cases in the safety database.
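The RPA-style email intake mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's actual implementation: the field labels, subject-line rule, and case structure are illustrative assumptions.

```python
import re

def draft_case_from_email(subject, body):
    """Draft a safety database case record from an email, or return None.

    Hypothetical intake step: emails whose subject mentions an adverse
    event are parsed for simple labelled fields; anything incomplete is
    flagged for human review rather than auto-accepted.
    """
    if "adverse event" not in subject.lower():
        return None  # not an AE report; leave for manual triage
    drug = re.search(r"Drug:\s*(.+)", body)
    event = re.search(r"Event:\s*(.+)", body)
    return {
        "source": "email",
        "drug": drug.group(1).strip() if drug else None,
        "event": event.group(1).strip() if event else None,
        # incomplete extractions are routed to a human case handler
        "needs_review": drug is None or event is None,
    }
```

Real RPA deployments are considerably more involved (mailbox connectors, attachments, duplicate detection), but the core pattern of rule-based extraction followed by exception handling is the same.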
New AI technologies offer an additional and, to some extent, parallel opportunity for radically more efficient and scalable case intake and processing, improved employee productivity, faster hypothesis-to-testing cycles, and enhancements in generating knowledge artifacts such as periodic reports. However, alongside these significant benefits, there are at present material challenges with AI. Being knowledgeable about these challenges, at least to the level of being able to engage with technology providers and understand how they try to address them, is crucial to applying the technologies meaningfully. Some key challenges of AI are listed below:
1. Access and Integration of AI Models
AI models may be available from many providers but integrating them into existing processes and systems is not straightforward. The term “orchestration” is sometimes used to describe setting up processing chains, such as a safety database calling an AI service to extract adverse event content from a document and associated metadata, before passing the case data back and on to human review if necessary.
2. Data Quality and Integration
Training AI models requires vast amounts of high-quality, structured data. Pharmacovigilance data comes from diverse sources, including electronic health records, social media, and adverse event reports, making data preparation complex.
3. Bias in AI Models
AI can inherit biases from the data it is trained on or even fabricate data, potentially leading to inaccurate risk assessments or disparities in drug safety monitoring.
4. Lack of Explainability
Regulatory bodies like the FDA and EMA expect transparency in adverse event detection, but many AI models function as “black boxes,” making it hard to explain conclusions.
5. Regulatory Compliance
AI-driven pharmacovigilance must comply with GDPR, HIPAA, and other regulations governing patient data privacy and security. Many commercially available generative AI models have been trained on data that did not respect intellectual property or GDPR. Additionally, there are concerns about sending adverse event data with personally identifiable information to AI models.
6. False Positives and Negatives
AI may flag spurious signals or miss real adverse drug reactions, requiring human oversight and validation.
7. Intellectual Property Issues
AI-driven drug safety algorithms may raise questions about data ownership, proprietary algorithms, and the use of patient-reported data in commercial AI systems.
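The “orchestration” pattern described under challenge 1 can be sketched in a few lines. This is a hypothetical illustration only: the service call, confidence threshold, and field names are assumptions standing in for whatever a real safety database and AI extraction service would expose.

```python
# Assumed cut-off for automatic acceptance; in practice this would be
# validated against the organization's quality requirements.
CONFIDENCE_THRESHOLD = 0.85

def mock_ai_extract(document):
    """Stand-in for an AI service that extracts AE content and metadata.

    A real service would return structured fields parsed from the
    document; here the output is fixed for illustration.
    """
    return {"event": "nausea", "drug": "ExampleMab", "confidence": 0.72}

def process_document(document):
    """Orchestrate one step of the chain: call the AI service, then
    route low-confidence extractions to human review."""
    extracted = mock_ai_extract(document)
    extracted["route"] = (
        "auto_accept"
        if extracted["confidence"] >= CONFIDENCE_THRESHOLD
        else "human_review"  # low confidence: escalate to a case handler
    )
    return extracted
```

The design point is that the AI service is one step in a chain owned by the safety database, with an explicit decision rule governing when a human must intervene, rather than the model acting autonomously.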
Adding up these challenges specific to AI should rightfully raise concerns about the maturity of AI as a solution, and it suggests that real insight is needed when procuring or implementing the technology in your processes. Hence, pursuing AI and reaching a mature state also drives a need for more technology-savvy individuals in the PV function, which, as established in the section above, already suffers from a shortage of such profiles. Alternatively, engaging with technology providers or tech-enabled outsourcing vendors is the emerging solution most commonly pursued; however, there is still a significant impact on the function, as resources move away from manual data entry and writing. Instead, people need to understand the flow of the data, how to check and verify it, and how to deal with exceptions while maintaining oversight.
Building culture around technology
The transformation of PV is not just about harnessing new technologies. It is also vital to establish the right cultures and processes. We need to merge two skill sets – technology expertise and PV expertise – within a framework that facilitates change management. In many cases, this means upskilling staff to understand both worlds.
For future sourcing needs, we must develop strategic sourcing processes and engage outsourcing partners with tech-enabled processes that can provide strategic input and a solid approach to automation and AI.
To support PV transformation, we must use up-to-date, credible messaging to ensure that everyone understands the importance of PV and its future potential through new technology, not least in order to attract the right talent to the function. Change must be driven by strong leadership, based on a clear strategy and data-driven decisions. A collaborative industry culture that fosters knowledge sharing and best practices will help protect patients.
Finally, we need to engage employees in the transformation. Technology understanding should be emphasized as fundamental to everyone’s role, with ongoing feedback and proactive improvements. A future-ready PV system will be based on a cyclical approach driven by functional-technical expertise at every stage.
Where are the Regulators?
As mentioned above, regulatory authorities in Europe and the USA have by now set some initial expectations for the use of AI. Explainability and transparency are important aspects, along with questions about model training data, such as GDPR compliance and the legality and intellectual property status of that data. The EU AI Act also addresses AI literacy, introducing requirements for educational initiatives in organisations that are embarking on the AI path.
Regulatory agencies themselves should be seen as equally keen as industry to adopt automation and AI. In their own PV operations, agencies are exploring AI-driven automation, not least under the pressure of having to deliver on governmental commitments, on time and with quality, within constrained budgets.
Some of our coming articles will focus on the emerging regulatory frameworks for AI and dive specifically into European initiatives such as the Language and Health Data Spaces, which aim to deliver training data and models that can comply with regulatory expectations.
Conclusion
PV is predominantly reactive today and has taken a fairly conservative approach to technology due to its compliance-driven and risk-averse nature. AI adoption in PV is still in its early stages, with maturity questions yet to be resolved and substantial knowledge required to implement and operate it. To prepare for a future with technology, automation, and AI, the PV community must ensure that the “future state” operating model is clearly defined, backed by strong leadership and investment in people, fostering a culture of technology focus and of change. Knowledge sharing across the industry should be encouraged, as meeting the challenges of applying advanced technology within PV is for the benefit of patients all over the world.
The essential question becomes how PV functions will be able to adapt their organizational culture to better harness technology, ideally while transforming from a necessary but reactive cost function into a proactive, value-creating partner that saves time, makes better use of resources, and transforms data into actionable insights. Success depends on strong leadership to drive a cultural shift and a cyclical approach to improvement, leveraging technologies and expertise at every stage.