AI has transformed many aspects of the life sciences. It can analyse vast amounts of data far more quickly than humans can, improving efficiency across drug discovery, clinical decision-making, optimised productivity, personalised healthcare and automated disease diagnosis. In doing so, it helps us to accelerate innovation, build new tools and work with regulators to define the framework in which AI operates.
As the volume of data being consumed by AI continues to grow exponentially, we need to remain cognisant of the challenges we face.
One of the biggest hurdles is adopting and implementing the infrastructure and tools needed to run AI-enabled facilities. That infrastructure carries significant additional cost and investment, and it must be maintained and supported properly if it is to give you the agility to keep transforming. Selecting the right vendor and solution, and putting the right contracts in place, will play a big part in that.
By its nature, AI in life sciences will access sensitive personal information, and as more of it is collected, the risks to our privacy inevitably grow. The underlying questions are the same as for data used in any technology: Is the data being used fairly, lawfully and transparently? Do people understand how their data is being used? How is the data kept secure? But the stakes are raised in complex AI systems. Data protection regimes need to provide a framework that protects individuals without stifling opportunity. During the pandemic, big data approaches were used to track and identify Covid-19 infections across the globe, and access to that data proved invaluable in many ways; we need to consider carefully what to learn from that.
Big data also brings complex governance issues to many organisations. Access to a high volume of data from various sources – website and social media interactions, remote-monitoring sensors, wearables, clinical visits, imaging and so on – is crucial to big data, yet many organisations are not yet designed to handle such volumes. Maintaining the integrity and security of your data is fundamental to unlocking the opportunities it presents.
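As a small, concrete illustration of the integrity point: one common baseline control is to record a cryptographic checksum for each dataset when it is ingested and to verify it before use, so that any corruption or tampering is detected early. The sketch below shows that idea in Python; the file name and manifest structure are hypothetical, and this is only a minimal example of one such control, not a complete governance solution.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Compare each file under `root` against its recorded digest.

    Returns the names of files whose current digest no longer matches
    the manifest, i.e. files that have changed since ingestion.
    """
    return [
        name
        for name, digest in manifest.items()
        if sha256_of(root / name) != digest
    ]
```

In practice, the manifest would be written at ingestion time and stored separately from the data itself, so that a change to the data cannot silently rewrite its own checksum.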
It is important to think about AI and big data at the start of any research planning, rather than acquiring data first and thinking later. If we are to keep taking advantage of AI models at the same – or an accelerated – pace, these and many other challenges must be addressed head-on.