Taming in vivo data – the last (data) frontier in pharmaceutical research?

Historically, in large pharmaceutical R&D operations, in vivo studies have occupied a slightly awkward position in terms of data management. Data from the earliest stages of research – high-throughput experimentation and the like – while not without its own challenges, is relatively straightforward and generally well handled. At the other end of the R&D lifecycle, patient safety and the rigorous demands of regulators have forced the industry to grasp the nettle of managing and communicating clinical trial data, resulting in the industry-wide interchange standards for clinical data established by CDISC.

In vivo work in the lead optimisation phase, though, falls between the two – it’s at least as complex as clinical data, and so variable that it’s been difficult to create software tools capable of handling the diversity of studies flexibly enough. The result, almost inevitably, has been widespread informality in data management practices, an explosion of Excel files and burdensome manual data collation and analysis processes. But recent developments in the industry, such as the increased investigation of drug combinations and the need to explore ever more complicated dosing schedules, are stretching current practices to breaking point.

At the same time, as pharmaceutical R&D costs continue to rise rapidly, the industry has become increasingly sensitive to late-stage project failures. Drug project teams are recognising that they need to move into clinical stages with a much better understanding of their compounds and a better ability to predict the outcomes of clinical trials. Wringing as much value as possible out of in vivo data is a major part of the answer to this.

Many organisations have recognised that modelling, simulation and analytics are capabilities that can help them to both drive down the cost of research and accelerate a compound’s progress through the pipeline. If computer models can predict with a sufficient degree of certainty that a particular compound will, for example, only be effective at unacceptably toxic doses, then long and costly clinical trials can be avoided and effort focused on compounds more likely to succeed.

But modellers can’t develop the models they need to do this without timely access to good quality in vivo data, preferably at the individual subject level. (Summarised data is often fairly readily available, but isn’t good enough to support the maths behind some of these complex models, especially for population-based approaches.)
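To see why subject-level data matters, consider a purely illustrative sketch (all rate constants and concentrations below are invented): averaging two subjects' first-order decay curves produces a mean curve whose apparent elimination rate describes neither individual. That inter-individual variability is precisely what a population model needs and what a summary table discards.

```python
import math

# Two hypothetical subjects with different first-order elimination
# rate constants (per hour); all values here are invented for illustration.
k_true = [0.1, 0.4]
c0 = 100.0                      # notional initial plasma concentration
times = [0, 2, 4, 8, 12]        # sampling times in hours

# Individual-level data: each subject's own mono-exponential decay.
individual = [[c0 * math.exp(-k * t) for t in times] for k in k_true]

# Summary-level data: only the mean concentration at each time point.
mean_curve = [sum(curve[i] for curve in individual) / len(individual)
              for i in range(len(times))]

# Log-linear regression on the mean curve yields a single "apparent" rate.
logs = [math.log(c) for c in mean_curve]
t_bar = sum(times) / len(times)
l_bar = sum(logs) / len(logs)
slope = (sum((t - t_bar) * (l - l_bar) for t, l in zip(times, logs))
         / sum((t - t_bar) ** 2 for t in times))
k_apparent = -slope

# The fitted rate sits between the two true rates and matches neither
# subject, so the between-subject variability is unrecoverable from
# the summary data alone.
print(round(k_apparent, 3))
```

The apparent rate falls between the two true rates; once the data has been summarised, no analysis can recover how the subjects actually differed.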

In the meantime, governments and the general public are, rightly, continuing to put pressure on drug companies to reduce the amount of in vivo testing that they conduct. Everybody wants the industry to move towards a future where reliably predictive computer models can reduce or in some cases even eliminate in vivo experimentation.

Over the last couple of years, AstraZeneca have developed PredICT, an in vivo information platform that provides a comprehensive solution to the many challenges of effective in vivo data management across all therapeutic areas, from the initial point of data capture and analysis for the study in hand, right through to reuse of historical data by modellers months or even years after the original work was carried out. Tessella have contributed key business analysis and technical delivery to all phases of the development of this AstraZeneca project.

The key to this has been the development of a set of global, interdisciplinary data standards that both provide a common language for scientists to describe and interpret in vivo work and act as a data interchange standard, enabling information systems to talk to each other. With these established, software tools that support the efficient capture of high quality in vivo data can be developed in parallel with systems that allow a constantly growing database of in vivo studies to be browsed, collated, visualised and exported to popular modelling tools, where modellers and pharmacometricians can develop further insights.
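As a purely illustrative sketch of the interchange idea (PredICT's actual schema is not described here, so every field name and value below is an assumption), a shared standard might represent an individual-subject study record as a common structure that any compliant system can serialise, exchange and flatten for export to a modelling tool:

```python
import json

# Hypothetical record shape -- the real standard's fields are not public,
# so these names and values are illustrative only.
study_record = {
    "study_id": "IV-2013-001",
    "therapeutic_area": "oncology",
    "design": {
        "compound": "AZX-000",          # hypothetical compound code
        "dose_mg_per_kg": 5.0,
        "schedule": "QD x 14",          # once daily for 14 days
    },
    "subjects": [
        {
            "subject_id": "S1",
            "observations": [
                {"time_h": 1.0, "analyte": "plasma_conc_ng_ml", "value": 412.0},
                {"time_h": 4.0, "analyte": "plasma_conc_ng_ml", "value": 198.0},
            ],
        }
    ],
}

# Interchange between systems is then just serialisation of the agreed shape.
payload = json.dumps(study_record, indent=2)

# Export to a modelling tool is a mechanical re-shaping into flat rows,
# because every producer wrote the same structure.
rows = [
    (study_record["study_id"], s["subject_id"], o["time_h"], o["value"])
    for s in study_record["subjects"]
    for o in s["observations"]
]
print(rows)
```

The point of the sketch is the design choice, not the particular fields: once capture tools, databases and export tools all agree on one structure, the collation and reshaping steps that were previously manual become trivial code.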

A perhaps counterintuitive finding has been that by providing the right tools, it’s possible to not only increase the completeness and accuracy of the data that bioscientists capture, but also reduce the amount of time they spend on data entry, enabling them to focus on science and innovation rather than paperwork. So, while PredICT’s business case was aimed at serving the data needs of modellers, embedding its standards and tools throughout the organisation has been beneficial for everyone involved.

AstraZeneca and Tessella co-presented a poster on this work at the 7th Noordwijkerhout Symposium on Pharmacokinetics, Pharmacodynamics and Systems Pharmacology (www.systemspharmacology.eu).

© Copyright 2017 Tessella
All rights reserved