Real Operational Solutions for Quality in DCT – Beyond Theory, Close to Audit

In our previous article (“What a Real Quality System Looks Like in a DCT”), we dismantled the myth that technology alone guarantees quality. We showed that traceability isn't just about having the right software — it's about a way of working that must be designed, implemented, and maintained consistently. We discussed the real-world issues: inconsistent digital sources, unclear team roles, and the loss of control over data.

This article picks up exactly where that one left off.

We won't repeat the principles — we assume they're already understood.

Here, we focus on how to actually build a system that works in decentralized clinical trials — and why so few sites apply it correctly.

Whether you're part of an SMO, a hospital-affiliated site, or operate independently, quality is no longer optional. It's a strategic advantage in a decentralized ecosystem.

1. Operationally Critical Elements for a Functional Quality System in DCT

Everyone talks about decentralization, but few acknowledge the truth from the ground: technology solves nothing if the teams don’t know how to control it. And in DCTs, “control” does not mean oversight — it means continuous calibration.
Each data source, each app, and every patient interaction can become a vulnerability unless it’s embedded into a living, auditable, real-time quality system.

1.1 Managing Multiple Digital Data Sources (ePRO, Wearables, Apps)

In decentralized trials, data no longer flows solely through the investigator. It comes fragmented: ePRO apps, wearable sensors, web platforms, API interfaces. If these aren’t mapped from the beginning, they can cause inconsistencies that escape audits or even compromise data integrity.

🔹 How to properly map your sources:
Before study launch, all digital sources must be identified and integrated into a clear operational diagram:

  • What kind of data does each source provide (e.g., symptoms, steps, pulse)?
  • In what format is it recorded?
  • Who is responsible for validating each source?

This mapping must be included in the SOP and known and applied by the entire local team — not just the sponsor or CRO.

🔹 Distributed, not delegated, responsibility:
It’s not enough for the sponsor or tech vendor to declare the platform validated. The site must clearly define:

  • Who checks ePRO data and when?
  • Who ensures wearable data is correctly transmitted to the EDC?
  • What happens if internet connectivity fails? Who checks and when?

🔹 Common audit-blind errors:

  • Data lacking timestamps synchronized with server time
  • No human validation of extreme values
  • No local backup — meaning that if the vendor’s server fails, the site can’t prove anything

⚠️ Critical Recommendation:
For all critical sources (e.g., ePRO and wearables), implement minimum redundancy: cloud-synced data + local captures (automated logs, confirmation emails, offline weekly archiving).
It’s one of the simplest protections during audits — and one of the most neglected.
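
As a minimal sketch of what this redundancy can look like in practice, assuming platform exports are downloaded into a local folder, a short script can copy each week's exports into an offline archive together with a checksum manifest. Every path and file name below is an illustrative placeholder:

```python
# Weekly offline archiving sketch for critical exports (ePRO / wearables).
# Paths, folder layout, and retention are assumptions; adapt to the
# site's actual SOP.
import hashlib
import shutil
from datetime import date
from pathlib import Path

EXPORT_DIR = Path("exports")           # where platform exports are downloaded
ARCHIVE_DIR = Path("offline_archive")  # local, access-controlled folder

def archive_weekly_exports() -> None:
    week_dir = ARCHIVE_DIR / date.today().isoformat()
    week_dir.mkdir(parents=True, exist_ok=True)
    manifest_lines = ["file,sha256"]
    for export in sorted(EXPORT_DIR.glob("*.csv")):
        digest = hashlib.sha256(export.read_bytes()).hexdigest()
        shutil.copy2(export, week_dir / export.name)
        manifest_lines.append(f"{export.name},{digest}")
    # The checksum manifest lets the site show that archived copies are
    # byte-identical to what the platform delivered at archiving time.
    (week_dir / "manifest.csv").write_text("\n".join(manifest_lines) + "\n",
                                           encoding="utf-8")

if __name__ == "__main__":
    archive_weekly_exports()
```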

🔹 How the mapping is integrated into the SOP

Mapping digital sources is not just a general reference in an SOP. It must be integrated as a visual annex (e.g., a process flow diagram) that outlines:

  • source → what type of data it transmits,
  • responsibility → who reviews and validates it,
  • frequency → when and how the verification is performed,
  • back-up → how the local evidence is stored.

This annex must be reviewed with every protocol amendment and formally approved by the Principal Investigator or Study Coordinator. Sites without advanced IT infrastructure can even use manual formats (e.g., printed PDFs with checkboxes or Excel files that are verified and signed).
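
As an illustration only, a site with basic scripting ability could keep this annex in machine-readable form and generate the printable, signable version from it. The sketch below assumes Python; every source name, role, and frequency in it is an invented placeholder to be replaced with the study's real configuration:

```python
# Machine-readable source-mapping annex sketch: one row per digital
# source, mirroring the fields described above (source, data type,
# responsibility, frequency, back-up). All values are placeholders.
import csv

SOURCE_MAP = [
    {"source": "ePRO app", "data_type": "symptom diaries",
     "validator": "Study Coordinator", "frequency": "daily",
     "backup": "weekly PDF export archived locally"},
    {"source": "wearable sensor", "data_type": "steps, pulse",
     "validator": "Data Manager", "frequency": "weekly",
     "backup": "weekly CSV export plus confirmation email"},
]

def export_annex(path: str) -> None:
    """Write the mapping as a CSV annex that can be printed, signed by
    the PI or Study Coordinator, and reviewed at every amendment."""
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(SOURCE_MAP[0]))
        writer.writeheader()
        writer.writerows(SOURCE_MAP)

if __name__ == "__main__":
    export_annex("source_mapping_annex.csv")
```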

1.2 The Evidence and Traceability System for Digital Interactions

“Audit trail” is one of the most invoked concepts in DCTs. But without a clear and distributed traceability system, this ideal becomes a dangerous myth. Without well-calibrated tools, data may be collected — but not demonstrable. And in decentralized trials, proof of process sometimes matters more than the result itself.

🔹 What you must be able to prove, concretely:
A valid digital traceability system must answer:

  • Who generated a specific piece of data?
  • When was it recorded, verified, and synced?
  • Which platforms were involved in data transfer?

Without this clear reconstruction, the data is not just non-compliant — it’s unreliable.

🔹 Real risks if traceability is missing:

  • API tools sending data from wearables to the EDC don’t always log user-specific activity
  • Without transmission proof, any integration failure leads to irrecoverable data loss
  • The site cannot prove if a data point was manually entered, auto-imported, or later modified

🔹 How to build an efficient audit trail system:

  1. Auto-generated digital logs from each platform (ePRO, EDC, wearables), exported regularly and stored locally
  2. Weekly checkpoints defined and executed by the study coordinator (e.g., completeness check)
  3. Cross-platform archival: a system where mobile apps, online platforms, and local files can be correlated with minimal steps

🔹 Real-world auditability criteria:

  • Every data point can be reconstructed back to source (not just viewed)
  • Any modification includes user ID + timestamp
  • Missing data is auto-flagged (e.g., API timeout, sync failure, server error)

⚠️ Operational Tip:
Each site must define a "traceability control point" within the SOP, with assigned responsibility and periodic checks.
It doesn't require tech investment — just procedural discipline — and it’s one of the strongest signs of DCT quality maturity.
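
To make the weekly completeness checkpoint concrete: assuming the coordinator keeps a CSV of expected entries (per patient and date) and downloads a CSV export of received ePRO entries, a minimal script can flag the gaps and append them, timestamped, to a local log. The file and column names are assumptions for illustration, not any platform's real API:

```python
# Weekly ePRO completeness checkpoint sketch. Assumed inputs:
#   expected_entries.csv  with columns patient_id, entry_date
#   epro_export.csv       with the same columns (received entries)
# Flagged gaps are appended to a timestamped local log.
import csv
from datetime import datetime

def load_pairs(path: str) -> set[tuple[str, str]]:
    with open(path, newline="", encoding="utf-8") as fh:
        return {(row["patient_id"], row["entry_date"])
                for row in csv.DictReader(fh)}

def completeness_check(expected_csv: str, received_csv: str, log_csv: str) -> None:
    missing = sorted(load_pairs(expected_csv) - load_pairs(received_csv))
    # Each flagged gap is logged with a check timestamp, so the
    # checkpoint itself leaves an auditable trace.
    with open(log_csv, "a", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        for patient_id, entry_date in missing:
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             patient_id, entry_date, "MISSING_EPRO_ENTRY"])

if __name__ == "__main__":
    completeness_check("expected_entries.csv", "epro_export.csv",
                       "completeness_log.csv")
```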

🔹 Practical Example – The ADAPTABLE Study (Medidata)
One of the clearest examples of successful digital traceability comes from the ADAPTABLE study (cardiology, USA), where the audit trail was built on:

– Integration of ePRO and wearable data sources into an API-based architecture with log access for the site team,
– Verification checkpoints validated during the pre-trial phase,
– Automatic confirmation of each patient–platform interaction via email and exportable logs,
– Dedicated operational training for site teams — including offline scenario simulations.

This model is frequently cited as a best practice reference for digital traceability in decentralized clinical trials (DCTs).

1.3 The Team’s Role in Maintaining Quality – Not Just the PI, but the Entire Site

One of the biggest traps in decentralized trials is assuming that quality responsibility lies solely with the PI. In reality, quality control in a DCT is a collective function.
Sites that avoid systemic deviations are those that distribute responsibility clearly, actively, and in a documented way.

🔹 What operational quality looks like for a study coordinator:

  • Creating study-specific checklists (not generic templates)
  • Actively flagging critical points: ePRO delays, data sync issues, inter-source discrepancies
  • Maintaining structured logs per digital source — with manual notes on exceptions, backups, and local checks

🔹 Practice-based example:
Medidata case studies show that sites with a designated “Quality Anchor” — someone responsible daily for operational coherence — achieve:

  • 47% faster audit response time
  • 35% fewer systemic deviations
  • Up to 2 weeks faster data reconciliation (data lock compliance)

🔹 The SMO’s role in quality:
An SMO is not just a provider of space or staff. It is the guardian of the quality system, which requires:

  • Formal and functional designation of a quality lead
  • Internal monthly self-assessment cycle: logs, gap assessments, SOP review
  • Active escalation procedures — not just theoretical frameworks

🔴 Without this collective system, the DCT becomes a collection of disconnected apps — not a coherent clinical process.

2. Operational Tools Recommended for Study Sites and SMOs

Quality in DCT doesn’t depend on how many platforms you use — but on how well you can control the ones you already have. Most sites and SMOs don’t have access to advanced IT infrastructure — but that’s not an excuse. It’s an opportunity to build simple, robust, and verifiable control systems. A high-quality site is not the one with fancy software, but the one that never loses a critical data point.

2.1 Standardized Documents That Actually Work
SOPs must be more than just a set of files stored on a server. They must reflect the day-to-day operational reality of the site:
– Who does what, when, using which tools?
– What happens when an app crashes or fails to sync?
– How is manual data verification performed during internet downtime?

🔹 What a valid SOP must include in a DCT setting:
• Specific procedures for decentralized workflows: remote verification, sync errors, wearable data integrity
• Clear responsibilities: who verifies what, how often, and which fallback procedures apply
• Procedures tailored to the study’s actual configuration (devices, apps, data flow) — generic templates are not acceptable; the SOP must describe the actual site-level workflow
• All such scenarios included in the SOP, validated by the local team, and tested regularly
• Clearly structured operational logs (e.g., data check log, device sync log, deviation log), stored in an accessible format — digital or printed

🔹 Minimum recommended set of operational logs:
• Daily or weekly logs of received digital data
• Platform-specific data completeness checklists
• Logs of sync deviations or cross-platform transfer errors

🔹 Local audit trail requirements:
• Audit trails must also be documented locally — not only in the sponsor’s platform — so the site can demonstrate control quickly during an audit
• Every data change must include a user ID and timestamp
• Missing or invalid data must be flagged, explained, and documented with a root cause analysis
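
A minimal sketch of what such a locally kept change/deviation log can look like, assuming Python and a simple CSV file: every appended entry carries a user ID, a UTC timestamp, and a reason field for the root cause note. All field names and example values are illustrative, not a mandated format:

```python
# Append-only local change/deviation log sketch: each entry carries
# a user ID and timestamp, as required above. Field names are
# illustrative placeholders, not a regulatory standard.
import csv
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class LogEntry:
    user_id: str     # who made or reviewed the change
    action: str      # e.g., "manual_correction", "sync_deviation"
    data_point: str  # e.g., "PT-014/ePRO/2024-06-02" (hypothetical ID)
    reason: str      # root cause note, required for missing/invalid data
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat(timespec="seconds")
    )

def append_entry(log_path: str, entry: LogEntry) -> None:
    # Appending (never rewriting) keeps corrections visible as new
    # entries rather than silent edits.
    with open(log_path, "a", newline="", encoding="utf-8") as fh:
        csv.DictWriter(fh, fieldnames=list(asdict(entry))).writerow(asdict(entry))

append_entry("deviation_log.csv",
             LogEntry("coord_01", "sync_deviation",
                      "PT-014/wearable/2024-06-02",
                      "API timeout; data re-synced manually and verified"))
```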

📌 Operational recommendation:
Each site must define a “quality documentation package” — a concrete set of SOPs, logs, and traceability exports that can be shown at any time during an audit to prove system-level control. This is one of the most tangible signs that a site has real operational quality.

🔹 It’s not enough to simply store logs — they must be easily accessible, structured, and verifiable, especially in the event of an unannounced audit.

2.2 Simple but Effective Systems – No Advanced IT Required
Sites without complex digital infrastructure can implement hybrid alternatives:
• Synced tables with controlled access — to track task completion and data verification
• Weekly screenshots or PDF exports from used platforms — archived locally
• Manual backup of critical data in encrypted offline files
• Periodic checklist-based verifications — digital or printed — for each device or app

🔹 Additional methods that work in practice:
• Centralized tables updated daily with patient data collection events (even manually)
• Weekly exports of data from ePRO or wearable platforms (CSV, PDF)
• Semi-automated cross-checks that flag missing or delayed data from integrated platforms
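
As one possible shape for such a semi-automated cross-check, assuming both platforms can export CSV files sharing a record identifier, a short script can flag records present in one export but absent from the other. File and column names are invented for illustration:

```python
# Cross-platform reconciliation sketch: compare an ePRO export with an
# EDC export by a shared record ID. File/column names are placeholders.
import csv

def ids_in(path: str, id_col: str = "record_id") -> set[str]:
    with open(path, newline="", encoding="utf-8") as fh:
        return {row[id_col] for row in csv.DictReader(fh)}

def cross_check(epro_csv: str, edc_csv: str) -> None:
    epro_ids, edc_ids = ids_in(epro_csv), ids_in(edc_csv)
    # Collected in ePRO but never landed in the EDC: likely a transfer
    # failure. Present in the EDC only: an entry needing explanation.
    for rec in sorted(epro_ids - edc_ids):
        print(f"FLAG: {rec} in ePRO export but missing from EDC")
    for rec in sorted(edc_ids - epro_ids):
        print(f"FLAG: {rec} in EDC but missing from ePRO export")

if __name__ == "__main__":
    cross_check("epro_export.csv", "edc_export.csv")
```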

🔹 Redundancy = resilience:
• Critical confirmations (e.g., API transfers) must be stored in two places: in the platform and in a separate archive (e.g., email or local server)
• If paper is used as a backup for digital, define exactly how often it's completed, who checks it, and how reconciliation is handled

🔹 Tools that work even without high-tech infrastructure:
• Version-controlled SOPs with signature or acknowledgment logs
• Shared folders with restricted access and access logs
• Screenshots or email confirmations saved in the patient’s file

📌 These simple tools can be implemented at any site and are viewed positively during audits — as long as they are used consistently and properly documented. Auditors are more interested in consistency and control than in technological sophistication.

2.3 Processes That Must Never Be Outsourced
Even if the sponsor or CRO provides technical support, there are critical functions that must remain under the site’s internal control:
• Reconciliation of data from multiple sources (ePRO, EDC, wearables)
• Verification of critical points — e.g., extreme values, missing sync, lack of confirmation
• Local team training — not just initial, but recurrent (hands-on training, role-play, error reviews)

🔹 Processes that must remain 100% under site control:
• Deviation management and root cause analysis
• Staff retraining following protocol amendments or tech updates
• Documented oversight for each platform used

📌 Sites that outsource these processes lose visibility over quality — and are the first to be penalized during audits.
📌 Sponsors are increasingly tracking which sites maintain operational control — and this directly affects future study allocations.

📌 Key takeaway: Quality is not delegated. If you can’t prove how your center maintains oversight over digital systems, your site will be seen as high-risk — regardless of enrollment speed.

🔹 Practical example – The REACT-EU Study (Medidata)
An excellent real-world example of efficient DCT process implementation is the REACT-EU study, conducted at sites without advanced digital infrastructure. Local teams used manual logs validated weekly, screenshots archived offline, and printed checklists — all rigorously documented and accepted during audits. This model demonstrates that DCT quality can be achieved without sophisticated technology, as long as procedural discipline and local control are in place.
📌 This is a validated precedent that disproves the belief that decentralization requires high-tech — and reinforces the critical role of local site teams in maintaining data integrity.
🟦 Without this minimum form of internal control, no site can prove the quality of its data. And in DCT, what cannot be proven — does not exist.

2.4 Site-Centricity – The Operational Philosophy That Distinguishes Trusted Sites from High-Risk Ones

Most conversations around decentralization focus on the patient. But from an operational quality standpoint, the critical question is different: “How prepared is the site to function as an active node within a digital ecosystem?”

IQVIA introduces a key framework for assessing site maturity in DCTs — known as site-centricity. It’s not about complete autonomy, but rather about a site’s ability to demonstrate real control over the digital processes it participates in.

🔹 Site-centricity means:

  • Being able to show, at any time, how digital data from platforms is verified and reconciled.
  • Having designated individuals, procedures, and evidence that the site controls the system — not just uses it.
  • Maintaining living documentation that reflects day-to-day operations, not just what is written in the SOPs.

📌 Site-centricity is not about how many platforms you use — but how easily you can demonstrate that what you use actually works and is under your control.

🔹 Three indicators that define a “site-centric” site:

  1. Demonstrable control: internal verification mechanisms exist beyond reliance on sponsor platforms.
  2. Clear ownership of responsibilities: every digital interaction has a clearly assigned local owner.
  3. Autonomous decision-making: the site has the freedom and ability to intervene in real-time when operational errors or data flow issues arise.

🔹 The difference between “participant-centric” and “site-centric”
While many initiatives focus on patient experience, no positive experience can compensate for a lack of data quality. DCT quality relies on both:

  • a safe and seamless experience for the participant,
  • and a robust control architecture at the site level.

📌 Sites that understand and apply the “site-centric” philosophy are the ones that become strategic partners for sponsors — not just task executors.

🔴 Without this mindset shift, no audit success and no recruitment performance can guarantee long-term allocation of future trials.

3. How Sponsors Identify Trustworthy Sites in DCTs: Concrete Signals, Not Impressions

In a decentralized clinical trial, continued collaboration is not determined by patient enrollment numbers or randomization speed — but by the site’s ability to control what cannot be directly observed.

📌 For sponsors, a trustworthy site is not the one that claims to follow procedures — it’s the one that can prove it at any time, with data, logs, and concrete actions.

🔹 Here’s what separates “trusted” sites from “high-risk” ones in the eyes of sponsors:

3.1 Ability to Respond to Audits Anytime, with Clear Documentation
• Trusted sites have a ready-made “audit kit”: logs, exports, transmission confirmations, local backups, and documented deviations.
• High-risk sites ask for extra time, search across systems, or send incomplete files.
💡 Real example: In a DCT conducted by Medidata, sites with weekly validated logs were granted two additional trials without requiring a physical audit.

3.2 Operational Response to Errors — Not Just Notifications to the Sponsor
• Trusted sites detect deviations before the CRO does.
• They maintain an internal log of recurring issues and propose operational solutions.
💡 Example: In a phase II DCT, sites that reported discrepancies between ePRO and wearable data before central detection were included in the study’s steering committee.

3.3 Consistency Between SOPs and Real-Life Workflows
• Trusted sites have SOPs that match exactly what’s happening on the ground.
• They don’t rely on generic sponsor templates — instead, they personalize procedures and update them after each protocol amendment.
💡 Indicator: Sponsors increasingly request real SOP copies and check whether the procedures are reflected in the site’s operational logs.

3.4 Ongoing Training and Internal Self-Assessment
• Trusted sites can demonstrate that the local team is re-trained every time something changes in the digital workflow.
• Training doesn’t just mean signed attendance sheets — it includes role-playing, real case scenarios, and hands-on reconciliation sessions.
📌 Sites that document such training sessions report 30–50% fewer errors in the first study months.

3.5 Documented Operational Autonomy
• Sites that can make local decisions in edge cases (e.g., internet outage, delayed data, missing confirmations) without halting the study earn greater sponsor trust.
• But this autonomy must be documented — with justifications, exception logs, and confirmed actions.
💡 Without this ability, the site becomes a weak link in the decentralized network.

🔚 A trusted site is not one that never makes mistakes — it’s one that can demonstrate, at any moment, how it detects, manages, and corrects those mistakes.

📌 In decentralized trials, sponsor trust is earned through evidence. And that evidence doesn’t live in platforms — it lives in the hands of the local team.

4. What a Real Quality System Looks Like in DCT

A real quality system doesn’t start with technology — it starts with a fundamental question:
Can your site prove, at any moment, that its data is accurate, complete, and traceable?

📌 In a decentralized clinical trial, what cannot be demonstrated does not exist. Every platform, every digital interaction, every uncontrolled deviation becomes a potential risk point that reflects back on the site.

🧩 Quality is not an SOP. It’s not a software tool. It’s not an inspection passed two years ago.
It is a daily way of working, where:

  • the team knows what to verify and when,
  • every step is documented in an auditable way,
  • and the processes are strong enough to work even when technology fails.

A site that achieves this is no longer a mere executor of tasks.
It becomes a strategic and trusted partner, capable of ensuring continuity, safety, and value in an increasingly decentralized research ecosystem.

📌 In the DCT era, the sites that can demonstrate local control over quality are the ones that will stay in the game.
The others will be inevitably replaced.

🟦 How much of what you call “quality” in your DCT can actually be demonstrated — not just claimed?
In the era of decentralization, documentation alone is no longer enough. You must be able to account for every single data point — and show that you know why it matters.

Cambridge: A Maturity Model for Clinical Trials Management Ecosystem
🔗 https://www.cambridge.org/core/journals/journal-of-clinical-and-translational-science/article/maturity-model-for-clinical-trials-management-ecosystem/CBD72518D2D8EBD079BAF3477A4827B4

IQVIA: Empowering Clinical Research Sites and Sponsors in the Patient-centric Era
🔗 https://www.iqvia.com/-/media/iqvia/pdfs/library/white-papers/bcs2024-1062-04apr-pscs-site-whitepaper-tankersley.pdf

Medidata: Case Study Collection on DCT
🔗 https://www.medidata.com/en/life-science-resources/medidata-blog/decentralized-clinical-trials-case-study-collection/

FDA: Conducting Clinical Trials with Decentralized Elements (Guidance 2023)
🔗 https://www.fda.gov/media/167696/download
