Pharma companies are looking to better understand how Generative AI (GAI) can facilitate innovation. Early adopters can gain distinct advantages, provided they properly manage the key risks discussed below. Francesca Blythe and Steve McInerney explain the corporate governance principles that should be considered.
GAI creates new content from inputs such as text, images, computer code, or chemical structures. GAI has the potential to improve many processes for pharma companies, for example by speeding up drug discovery and development, enhancing medical imaging/diagnostics, and streamlining clinical trials. It may also offer strategic market advantages by helping companies actively exploit the vast amounts of data they hold.
However, these advantages must be balanced against the risks in this novel space, particularly in relation to fairness, ethics, privacy, and cybersecurity. Pharma companies wishing to use GAI should therefore consider implementing a GAI governance framework to mitigate legal risks, to ensure regulatory compliance and the responsible/ethical use of GAI, and to foster customer and patient trust. Such a framework will likely include the following:
- Compliance with fast-developing AI legislation and guidance. These range from soft-law principles and standards, such as the U.S. NIST AI Risk Management Framework and various guidance documents published by healthcare and privacy regulators, to prescriptive legislation such as the EU’s forthcoming AI Act. Pharma companies should identify those frameworks to which they are or will be subject, consider their intended use of GAI, and then develop their GAI governance framework accordingly.
- Consider which types of GAI to approve. Broadly speaking, GAI systems fall into two categories: ‘open’ and ‘closed’ systems. An ‘open’ or public GAI system typically allows any data that is input to be used to further train the GAI algorithm. Pharma companies opting for this licensing model should install strict guardrails around the types of data which personnel are permitted to input. Pharma companies may alternatively or additionally elect to license a ‘closed’ system in which none of the data input by the company is publicly disclosed, and the GAI algorithm will only be trained on the company’s data. The risks can differ depending on the model adopted.
- Identify and document acceptable use cases. Identify specific activities for which the use of GAI is permitted, prohibited, and restricted, and document these in an internal GAI policy. Such a policy should, inter alia, (a) identify the specific GAI systems permitted for use, (b) identify the types of data which can be input and the acceptable sources of data, (c) provide guidance on the level of human review expected and when/how corrective measures should be implemented, and (d) identify any additional technical, organizational, or contractual measures. It is critically important that, for example, protections for clinical trial data continue to be observed, even within a closed system. Use cases should be regularly reviewed and amended, and personnel given clear direction on them.
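For companies that also maintain technical controls alongside such a policy, the criteria in (a) and (b) above could in principle be encoded as an automated pre-check. The following is a minimal, purely illustrative sketch; the system names, data categories, and rules are invented for illustration and are not drawn from any actual policy.

```python
# Hypothetical sketch: encoding parts (a) and (b) of an internal GAI
# acceptable-use policy as an automated pre-check. All names below are
# invented examples, not recommendations.

APPROVED_SYSTEMS = {"closed-llm-internal"}           # (a) permitted GAI systems
PERMITTED_DATA = {"public-literature", "synthetic"}  # (b) acceptable data types
RESTRICTED_DATA = {"clinical-trial", "patient"}      # data never to be input

def is_use_permitted(system: str, data_types: set[str]) -> bool:
    """Allow a use case only if the system is approved, no restricted
    data category is involved, and every data type is on the permitted list."""
    if system not in APPROVED_SYSTEMS:
        return False
    if data_types & RESTRICTED_DATA:  # e.g., clinical trial data stays protected
        return False
    return data_types <= PERMITTED_DATA

print(is_use_permitted("closed-llm-internal", {"public-literature"}))  # True
print(is_use_permitted("closed-llm-internal", {"clinical-trial"}))     # False
```

A check like this supplements, but does not replace, the human review and documented policy described above.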
- Stakeholder consultation on the use of GAI. Set up an international, multidisciplinary team composed of representatives from departments across the company, including clinical, regulatory affairs, cybersecurity, and privacy. Consider establishing an AI Ethics Board and appointing an AI Officer.
- Have regard to any unintended consequences of GAI. Given the inherent sensitivities around the commercial use of AI by pharma companies — particularly in the context of patient data — it is paramount that regard is given to any unintended consequences of the use of GAI. To mitigate such risks, consider using impact assessments to measure and mitigate potential harms, and consider whether there are fairness and ethics frameworks to be followed.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.