Big data and analytics, e-health, the next-generation supply chain: these were some of the important developments high on the agenda at the annual DIA EuroMeeting in April 2016. The meeting dedicated a full track to big data and e-health, with seven separate sessions held over two days. Each attracted huge interest, with many industry attendees eager to hear what the experts had to say.
By Dr. Michael Stephan, Client Relationship Executive, CSC Life Sciences
It was evident from the presentations that these concepts resonate strongly in academia and with the regulatory authorities, but few in the industry really grasp the full potential of big data. Several presenters pointed out that a difficult question for life sciences companies can be: just what is big data? Across the various sessions, the working definition was that big data means drawing on large amounts of data from many different sources for a defined end goal, such as improving the approach taken in clinical study protocols.
Without a true understanding of what terms such as big data or e-health mean, they become little more than buzzwords and, poorly executed, a big data approach is potentially risky. Simply collecting massive amounts of data without knowing or understanding what should be done with it could lead companies to build correlations that the data does not actually support, and therefore to draw incorrect conclusions.
Perhaps one of the best known examples of how big data can draw incorrect correlations is Google Flu Trends (GFT), which predicted more than double the number of flu-related doctor visits that the Centers for Disease Control (CDC) predicted. GFT had drawn on a data-rich model that focused on correlation, while the CDC's estimates were based on laboratory surveillance reports grounded in causation.
What examples such as the issue with GFT show is that companies need to establish and define an approach before undertaking a project with big data. It’s essential to know what you want to achieve and how you can support those goals with a big data approach. Just as with a clinical trial, it’s essential to take a scientific approach: decide the parameters, understand inclusion and exclusion criteria, define the statistical method to be used before you start the study, and analyse the data according to those parameters after study completion.
Recognizing the Potential
But while caution must be taken with how big data is used, it has enormous potential for life sciences companies. During his session, Frank Wartenberg, President Central Europe, IMS Health, drew on examples from scientific publications showing how a big data approach has supported cancer research and treatment. Big data and analytics also provide companies and regulators with real-world insight into how drugs are used and tolerated after they receive marketing authorization. This is important because clinical studies provide limited insight: a drug might be tested on 2,000 or 3,000 patients but used by millions. By applying big data, you can get broader insights into patient use.
It was clear from the sessions that the industry is excited by the opportunities big data affords, but not always sure how to put these into practice. This is an area in which my colleagues at CSC have been pioneers. For example, as far back as 2014, Cytolon turned to us for help with matching cancer patients to cord-blood donors for stem-cell treatment. In response, CSC developed an internet-based matching solution that uses a big data-style graph database to match DNA from a patient's blood with DNA from the donated cord blood.
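The article does not describe how CSC's matching solution works internally, but the general idea of graph-style donor matching can be sketched in a few lines. The marker names, data, and functions below are purely hypothetical illustrations, not CSC's or Cytolon's actual schema: patients and cord-blood units are treated as nodes, shared genetic markers as edges, and candidates are ranked by the number of markers they share with the patient.

```python
from collections import defaultdict

# Illustrative sketch only; marker names and IDs are hypothetical.

def build_marker_index(units):
    """Map each genetic marker to the cord-blood units that carry it."""
    index = defaultdict(set)
    for unit_id, markers in units.items():
        for marker in markers:
            index[marker].add(unit_id)
    return index

def rank_matches(patient_markers, units):
    """Rank cord-blood units by the number of markers shared with the patient."""
    index = build_marker_index(units)
    scores = defaultdict(int)
    for marker in patient_markers:
        for unit_id in index.get(marker, ()):
            scores[unit_id] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical cord-blood inventory and patient profile
units = {
    "CB-001": {"A*01", "B*08", "DRB1*03"},
    "CB-002": {"A*02", "B*07", "DRB1*15"},
    "CB-003": {"A*01", "B*07", "DRB1*03"},
}
patient = {"A*01", "B*08", "DRB1*03"}

print(rank_matches(patient, units))  # CB-001 shares all three markers
```

A real system would, of course, operate over millions of records in a dedicated graph database rather than in-memory dictionaries, but the traversal-and-score pattern is the same.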
Making Use of Supply Chain Data
Another theme that resonated during the conference was that of the next-generation supply chain. The focus for the supply chain is on track and trace, serialization, and the upcoming Identification of Medicinal Products (IDMP) standards. Conference presenters noted that these capabilities aren't just about responding to regulatory requirements; they also enable companies to improve the security of their products against counterfeiting. One speaker noted that serialization should start with the API and continue through to delivery to the patient. The question companies need to be asking is how they can use the data about distribution to improve the supply chain. By taking a big data and analytics approach to the supply chain, life sciences companies can begin to measure how long products take to move from manufacturing to delivery to the patient, and use that insight to shift from a stock-based model to an order-based model.
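As a minimal sketch of the kind of analysis this enables, the snippet below computes per-unit lead times from serialized scan events. The event names, serial numbers, and dates are hypothetical, assumed for illustration; a real feed would come from a serialization or track-and-trace system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical serialized scan events: (serial number, event, date)
events = [
    ("SN1001", "manufactured", "2016-01-04"),
    ("SN1001", "delivered_to_patient", "2016-01-22"),
    ("SN1002", "manufactured", "2016-01-06"),
    ("SN1002", "delivered_to_patient", "2016-01-19"),
]

def lead_times(events):
    """Days from manufacture to patient delivery, per serial number."""
    stamps = {}
    for serial, event, date in events:
        stamps.setdefault(serial, {})[event] = datetime.strptime(date, "%Y-%m-%d")
    return {
        serial: (e["delivered_to_patient"] - e["manufactured"]).days
        for serial, e in stamps.items()
        if "manufactured" in e and "delivered_to_patient" in e
    }

times = lead_times(events)
print(times)                 # per-unit lead time in days
print(mean(times.values()))  # average lead time across units
```

Aggregates like the average lead time, or its trend over time, are the sort of signal that lets planners size production to actual demand rather than to stock levels.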
In addition, companies can take data from social media networks about patient usage, issues with products, allergies and so forth, and feed it into the manufacturing process so that production can be managed based on demand.
The take-home message from the conference is that the themes that are high priorities for companies – big data and analytics, e-health, the supply chain – are major focus areas for CSC. Companies are increasingly seeking help with digitizing the life sciences value chain so that they can respond to the demands of patients and of regulators quickly and progressively.