Pandemic-related disruptions have accelerated much-needed change in clinical operations, but this change has been accompanied by questions about data collection and data quality. In a recent survey commissioned by Oracle Health Sciences, more than 75 percent of industry respondents indicated that limitations in patients’ ability to attend on-site visits sped up their adoption of decentralized clinical trial (DCT) approaches.1 A separate survey conducted by Greenphire found that 84 percent of sponsors and contract research organizations are actively seeking to increase their use of technology to better support DCTs.2 This momentum behind DCTs has, however, raised concerns among sponsors about how to collect data remotely while ensuring data reliability and quality.1
Understanding challenges to data quality in DCTs
Challenges related to data accuracy and quality are intrinsic to clinical research. With traditional paper-based methods, manual entry of information from the electronic health record (EHR) into the electronic data capture (EDC) system creates the risk of transcription errors. The collection of paper-based patient-reported outcomes (PROs) comes with concerns about recall bias or missing data.
With electronic methods of data capture, information is entered directly into the EDC in real time. While the risk of transcription errors and recall bias may be reduced, the question of how to ensure data quality remains. For example, electronic patient-reported outcome (ePRO) systems may be configured to perform input validation and to allow data entry only within specified time windows, but how do you verify that it was actually the patient who entered the data?
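To make this concrete, the sketch below shows the kind of entry check an ePRO system might apply. It is a minimal illustration in Python; the pain-score field, its range, and the reporting window are hypothetical assumptions, not the configuration of any specific ePRO product.

```python
from datetime import datetime, time

# Hypothetical ePRO diary check; the score range and window are illustrative only.
PAIN_SCORE_RANGE = (0, 10)                 # 0-10 numeric rating scale
ENTRY_WINDOW = (time(18, 0), time(22, 0))  # diary may only be completed 6-10 p.m.

def validate_epro_entry(pain_score: int, submitted_at: datetime) -> list[str]:
    """Return a list of validation errors; an empty list means the entry is accepted."""
    errors = []
    low, high = PAIN_SCORE_RANGE
    if not low <= pain_score <= high:
        errors.append(f"Pain score {pain_score} is outside the allowed range {low}-{high}.")
    window_start, window_end = ENTRY_WINDOW
    if not window_start <= submitted_at.time() <= window_end:
        errors.append(f"Entry at {submitted_at:%H:%M} is outside the reporting window.")
    return errors

# A plausible score submitted outside the window is flagged rather than silently accepted.
print(validate_epro_entry(7, datetime(2021, 3, 4, 23, 15)))
```

Checks like these can confirm what was entered and when, but not who entered it; identity assurance has to come from elsewhere in the workflow.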
As sponsors move to DCTs, one of the most common steps they have taken has been to implement both patient- and investigator-facing technologies.1 Adopting these technologies has added layers of complexity to the planning and processes required to ensure data quality. The use of apps, ePROs, and wearable devices may increase patient convenience, provide real-time data, and reduce site burden, but it may also require different approaches to collecting and managing data and to complying with evolving regulatory guidance. Implementing technologies such as eConsent may involve additional training and technical considerations.
Understanding the processes required to move data efficiently and securely from where it is collected to where it can be analyzed is the key to optimizing data quality in DCTs. Technology can enable these processes, but only if it has been well vetted to minimize risk to sponsors.
Redefining data quality
Data quality has historically been tethered to an insistence on on-site monitoring coupled with 100 percent source data verification (SDV). However, studies have shown that SDV is not necessarily synonymous with data quality.3,4
Even before the pandemic, there was a trend toward targeted monitoring and reduced SDV, which lessens the number of data points clinical research associates must verify against source data on site. The proliferation of technologies for remote data capture, monitoring, and even study-related assessments may help accelerate the adoption of a new definition of data quality.
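As a rough illustration of what targeted SDV means in practice, the sketch below always selects data points flagged as critical and samples only a fraction of the rest. The field names, criticality flags, and 10 percent sampling rate are assumptions made for illustration, not figures drawn from any guidance.

```python
import random

# Illustrative targeted-SDV selection; field names, criticality flags, and the
# sampling rate are hypothetical, chosen only to show the idea.
data_points = [
    {"field": "primary_endpoint_score", "critical": True},
    {"field": "serious_adverse_event", "critical": True},
    {"field": "visit_comment", "critical": False},
    {"field": "height_cm", "critical": False},
]

NON_CRITICAL_SAMPLE_RATE = 0.10  # verify ~10% of non-critical points, not 100%

def select_for_sdv(points, sample_rate=NON_CRITICAL_SAMPLE_RATE):
    """Always verify critical data points; randomly sample the rest."""
    return [p for p in points if p["critical"] or random.random() < sample_rate]

print([p["field"] for p in select_for_sdv(data_points)])
```

Under 100 percent SDV, every item in the list would be verified; here the monitoring effort concentrates on the fields defined as critical during risk planning.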
Ensuring data quality in an evolving landscape
In this evolving clinical trial landscape, risk-based quality management (RBQM) is more relevant than ever. RBQM is an approach to prospectively identifying and managing risk across the entire lifecycle of a study, and across all roles, to improve clinical trial quality and outcomes. Implementing RBQM involves examining the objectives of the trial, defining the factors critical to achieving those objectives, and then creating a plan to prevent risks to those factors from negatively impacting outcomes.
RBQM technologies are systems designed to proactively protect against potential threats, both known and unknown. When appropriately implemented as part of a clinical trial, RBQM technology provides robust control for early risk detection and prevention. Continual risk monitoring can lead to earlier clarification and mitigation of problems and can even surface issues that might not otherwise have been detected. Below are real examples of data quality issues and other risks pinpointed in studies using Premier’s RBQM technology (a simplified sketch of one such check follows the list):
- Detection of an incorrectly randomized patient
- Detection of use of a prohibited concomitant medication
- Early identification of data entry errors normally found through month-end review or line listings
- Early identification of a blinded rater who was not only completing the wrong assessment forms but also filling them out incorrectly
- Identification of a safety signal through alerts for severe changes in blood pressure between study visits
- Early detection of the need for site re-training based on protocol deviations
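The blood pressure alert above is the easiest of these to sketch. The code below flags between-visit changes in systolic blood pressure that exceed a threshold; the 30 mmHg threshold and the visit records are assumptions for illustration, not Premier’s actual alert rules.

```python
# Sketch of a continual risk-monitoring check: flag large between-visit
# changes in systolic blood pressure. Threshold and records are illustrative.
SYSTOLIC_CHANGE_THRESHOLD = 30  # mmHg change between consecutive visits

visits = [  # one patient's visit history: (visit label, systolic BP in mmHg)
    ("Screening", 128),
    ("Week 4", 132),
    ("Week 8", 171),   # a jump of 39 mmHg should raise an alert
]

def bp_change_alerts(visits, threshold=SYSTOLIC_CHANGE_THRESHOLD):
    """Yield an alert message for each consecutive-visit change over the threshold."""
    for (prev_label, prev_bp), (label, bp) in zip(visits, visits[1:]):
        delta = bp - prev_bp
        if abs(delta) > threshold:
            yield f"ALERT: systolic BP changed {delta:+d} mmHg from {prev_label} to {label}."

for alert in bp_change_alerts(visits):
    print(alert)
```

In a real study, a check like this would run continually against incoming data so the signal surfaces between visits rather than at month-end review.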
Enabling the shift to DCTs
Adoption of DCT approaches can benefit patients, sites, and sponsors alike. Successful implementation of these approaches requires careful consideration of the regulatory guidance, processes, and technologies necessary to ensure data quality and manage risk. This is a cross-functional, multi-stakeholder effort that may lead to the added downstream benefits of increased patient engagement, enhanced data quality, improved safety, reduced site burden, and a higher likelihood of trial success.
1 Oracle. Survey: COVID-19 the Tipping Point for Decentralized Clinical Trials. November 18, 2020.
2 PharmaVoice.com. Centering on the Patient by Decentralizing Clinical Trials. Available at: https://www.pharmavoice.com/article/2020-11-decentralizing-clinical-trials/.
3 Tantsyura V, et al. Risk-based monitoring: a closer statistical look at source document verification, queries, study size effects, and data quality. Ther Innov Regul Sci. 2015;49(6):903-910.
4 Houston L, Probst Y, Martin A. Assessing data quality and the variability of source data verification auditing methods in clinical research settings. J Biomed Inform. 2018;85:25-32.