
METHODOLOGY

Patient Intelligence: The Science Behind Intera

A technical overview of how Intera constructs behavioral digital twins of patient populations, where the methodology comes from, how it works, and where it falls short.

Intera Research · 2026


What Is a Digital Twin?

A digital twin is a living model of a real system, such as a factory, a warehouse, a supply chain, or a customer, that you can use to run controlled experiments before committing real money or real risk. The core idea is simple: build a faithful representation of the system, then simulate changes to it before implementing them in the world.

McKinsey has tracked this technology since the early 2020s, framing it as a digital replica of an object, person, system, or process, contextualized in its environment and used to simulate real situations and outcomes. The key word is contextualized. A digital twin is not a static model. It is conditioned on the specific environment it is meant to represent.

The questions a digital twin answers are decision questions:

"Where will this break?" / "What's driving the failure?" / "If we change X, what happens to Y?"

These are not descriptive questions. They are interventional. The value of a digital twin is not that it describes a system. It is that it lets you change the system safely before you act on it.

How Industry Adopted Digital Twins

The technology developed in phases. It started in engineering, where the value proposition was clearest: simulate a physical system before building or modifying it, and avoid costly real-world failures.

MANUFACTURING

The earliest large-scale deployments were in manufacturing. Automakers and aerospace companies began building virtual replicas of production facilities to simulate changes before implementing them. Work that previously required weeks of real-world trial, including line reconfigurations, throughput optimizations, and equipment placement, could be run in simulation first. BMW scaled this approach across more than thirty production sites, using digital twin environments to accelerate planning cycles that previously required physical modification.

SUPPLY CHAIN

Supply chain adoption followed. The use case was modeling end-to-end interactions from manufacturing through warehousing, distribution, and returns, with predictive AI layered on top to make the twin prescriptive, not just descriptive. Instead of asking where inventory is, supply chain twins asked where inventory will fail to be and what to do before it does.

LOGISTICS AND WAREHOUSING

Physics-based simulation extended the model into warehouse operations. Companies began simulating warehouse layouts, robot interactions, and demand surge scenarios before deploying physical changes. The output was not a single recommendation. It was a ranked set of changes with predicted operational impact. Test the design virtually, then deploy only the modifications that actually move throughput.

SMART BUILDINGS AND INFRASTRUCTURE

As systems became more complex, digital twins became the operational layer. Smart building platforms emerged as digital replicas of physical spaces, monitoring energy consumption, occupancy patterns, and maintenance events in real time, with simulation capabilities layered on top for scenario planning. The twin became the interface between operators and complex infrastructure.

The Move to Human Systems

The inflection point came when the twin concept moved from physical systems to human systems. The question shifted from "how will this machine behave" to "how will this person behave."

Gartner formalized this as the "digital twin of a customer," a simulation of customer experience used to predict future behaviors from online and physical interaction data. The premise is that individual and population-level behavior can be modeled with sufficient fidelity to run controlled experiments: test how a customer segment will respond to a new interface, a pricing change, or a service modification before deploying it.

This is the same logic as the factory twin applied to people. The difference is that human systems introduce variables that physical systems do not: psychological state, social context, financial pressure, trust, health status, and lived experience. These are harder to model. But the decision value is identical: simulate before you commit.

The adoption signal in adjacent industries is now visible. Major consumer health companies have begun hiring for roles explicitly requiring digital twin and customer simulation capabilities, integrating behavioral simulation into customer experience strategy rather than treating it as a research tool.

The pattern across all industry adoption: digital twins start in engineering and operations, then migrate up the stack into decision-making. Human behavior is the last frontier of that migration.

The Gap in Healthcare

Healthcare is the domain where behavioral prediction carries the highest decision stakes, and where the current infrastructure for patient insight is the least equipped to provide it.

Clinical trial teams design protocols without reliable forecasts of which patients will complete them. Commercial pharma teams build patient support programs without knowing which patient segments will actually engage. Hospital systems deploy readmission prevention resources without predicting which patients are at highest risk before discharge.

The current methods for patient insight share a common flaw: they capture stated intent rather than predicted behavior. Survey panels, focus groups, and advisory boards measure what patients say they will do. The say-do gap in healthcare is well-documented. Patients overstate willingness to initiate therapy, understate access barriers, and misreport adherence intent under social desirability pressure. The insight derived from these methods is structurally biased toward outcomes that are more optimistic than reality.

The result is predictable: clinical trial attrition rates that have not meaningfully improved in twenty years, patient support programs with engagement rates in the single digits, and commercial launch forecasts that consistently miss on adherence.

Intera's premise is that behavioral digital twins of patient populations can close this gap, not by replacing human research, but by providing a faster, cheaper, and less biased signal for decisions that currently get made without one.

How Intera Builds Synthetic Patients

Each synthetic patient is constructed across four conditioning layers. These layers are designed to capture the dimensions of a real patient that drive behavioral outcomes, not the dimensions that are easiest to measure.

LAYER 1

Structured Demographic and Clinical Inputs

Age, sex, indication, comorbidities, medication history, insurance status, geography, and clinical state. These are the inputs that define who the patient is in the healthcare system, the structured record.

LAYER 2

Psychosocial Traits via COM-B Framework

Capability (physical and psychological ability to engage with the protocol), Opportunity (environmental and social factors that enable or prevent engagement), and Motivation (automatic and reflective motivational states). The COM-B framework is the validated behavioral science model Intera uses to structure psychosocial conditioning.

LAYER 3

Narrative Data

Patient diaries, forum posts, longitudinal survey records, and qualitative research that encode personality, lived experience, and emotional style. Narrative data captures what structured records miss: the texture of how a patient experiences illness and healthcare interaction.

LAYER 4

Validated Psychometrics

Big Five personality scores, domain-specific behavioral scales (health locus of control, illness perception, treatment burden scales), and activation scores. These are standardized instruments with known predictive validity for healthcare behavior.

Persona cohorts are sampled to match the demographic and clinical distributions of a real target population. Where structured intake records exist from partners, sponsors, or data collaborators, they improve calibration fidelity. Where they do not, the platform runs on population priors derived from large-scale behavioral and clinical datasets.
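The four-layer construction and distribution-matched sampling described above can be sketched in code. This is a minimal illustration, not Intera's actual implementation: the field names, the flat trait scales, and the weighted-categorical priors are all assumptions standing in for whatever richer conditioning the platform uses.

```python
import random
from dataclasses import dataclass

# Hypothetical schema: field names are illustrative, not Intera's data model.
@dataclass
class SyntheticPatient:
    age: int                  # Layer 1: structured demographic/clinical record
    sex: str
    insurance: str
    capability: float         # Layer 2: COM-B traits, scaled 0-1 here
    opportunity: float
    motivation: float
    conscientiousness: float  # Layer 4: Big Five facet with adherence validity

def sample_cohort(n, population_priors, seed=0):
    """Sample a persona cohort matching target population distributions.

    population_priors maps each field to a (values, weights) pair, standing in
    for distributions estimated from intake records or population datasets.
    """
    rng = random.Random(seed)
    cohort = []
    for _ in range(n):
        draw = {field: rng.choices(values, weights=weights)[0]
                for field, (values, weights) in population_priors.items()}
        cohort.append(SyntheticPatient(**draw))
    return cohort

# Toy priors for a single indication; weights are invented for illustration.
priors = {
    "age": ([45, 55, 65, 75], [0.2, 0.3, 0.3, 0.2]),
    "sex": (["F", "M"], [0.55, 0.45]),
    "insurance": (["commercial", "medicare", "uninsured"], [0.5, 0.4, 0.1]),
    "capability": ([0.3, 0.6, 0.9], [0.25, 0.5, 0.25]),
    "opportunity": ([0.3, 0.6, 0.9], [0.3, 0.5, 0.2]),
    "motivation": ([0.3, 0.6, 0.9], [0.2, 0.5, 0.3]),
    "conscientiousness": ([0.3, 0.6, 0.9], [0.25, 0.5, 0.25]),
}
cohort = sample_cohort(1000, priors)
```

Layer 3 (narrative data) is omitted here because it conditions language-model behavior rather than a numeric field; in this sketch only the structured and psychometric layers appear.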

What We Simulate

The conditioning layers define who the patient is. The simulation layer models what that patient will do when placed in a specific protocol, program, or care pathway.

Intera simulates patient behavior across three classes of decision context:

CLINICAL TRIALS

Protocol adherence and dropout

Given a specific protocol design, including visit schedule, assessment burden, intervention complexity, and site structure, which patient segments will complete it, which will disengage, at what point, and for what behavioral reasons. Placebo response probability by segment. Screening inflation risk. In-trial adherence trajectories.

COMMERCIAL PHARMA

Therapy initiation and persistence

How defined patient segments move through initiation, titration, refill, and switch. Where access friction, cost barriers, and clinical anxiety create gaps. How specific messaging variants land with specific segments before media spend is committed.

DECENTRALIZED TRIALS AND DIGITAL PROGRAMS

Technology interaction and dropout

How patient segments interact with app onboarding flows, ePRO schedules, wearable requirements, and remote visit cadence. Where behavioral friction creates abandonment before it is observable in completion data.

The core principle across all three: simulate before you commit. Instead of modeling equipment performance, we model human decision-making, predicting how patients move through protocols and care pathways before real-world resources are deployed.
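The dropout and retention questions above can be made concrete with a toy discrete-time hazard model. Everything here is an assumption for illustration: the logistic form, the coefficients, and the idea that per-visit dropout rises with protocol burden and falls with a patient's COM-B capability and motivation. Intera's actual simulation layer is persona-driven, not this three-parameter formula.

```python
import math

def dropout_hazard(burden, capability, motivation):
    """Illustrative per-visit dropout probability: logistic in protocol burden
    minus the patient's capability and motivation (invented coefficients)."""
    z = 2.0 * burden - 1.5 * capability - 1.5 * motivation - 1.0
    return 1.0 / (1.0 + math.exp(-z))

def simulate_retention(visit_burdens, patients):
    """Expected fraction of patients still enrolled after each visit."""
    curve = []
    surviving = [1.0] * len(patients)  # per-patient survival probability
    for burden in visit_burdens:
        surviving = [s * (1.0 - dropout_hazard(burden, cap, mot))
                     for s, (cap, mot) in zip(surviving, patients)]
        curve.append(sum(surviving) / len(surviving))
    return curve

# Patients as (capability, motivation) pairs; visits as burden scores in [0, 1].
patients = [(0.9, 0.8), (0.6, 0.5), (0.3, 0.4)]
protocol = [0.2, 0.2, 0.6, 0.6, 0.9]  # burden rises with later, heavier visits
retention = simulate_retention(protocol, patients)
```

Even this toy version shows the shape of the output: a segment-weighted retention curve that identifies when, and for which patients, a heavy late-protocol visit schedule drives disengagement, before any site is opened.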

Validation and Limitations

Intera is designed to be evaluated like a forecasting system, not treated like a black box. We benchmark outputs against held-out outcomes and compare against simple baselines to quantify lift.

15%

Mean absolute error in retention prediction. Retention curves back-tested against historical study outcomes using held-out scenarios. Compared against historical-average baseline.

97%

Directional agreement in back-tests. Message and scenario simulations validated against held-out human panel results. Measured as agreement on segment-level direction and rank order.
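The two benchmark metrics above, mean absolute error against a historical-average baseline and segment-level directional agreement, are straightforward to compute. The sketch below uses invented numbers, not Intera's back-test data, purely to show how the metrics and the baseline comparison are defined.

```python
def mean_absolute_error(predicted, observed):
    """MAE between predicted and observed retention rates (as fractions)."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

def directional_agreement(predicted_deltas, observed_deltas):
    """Fraction of segment-level effects where prediction and outcome agree in sign."""
    pairs = list(zip(predicted_deltas, observed_deltas))
    return sum(1 for p, o in pairs if (p > 0) == (o > 0)) / len(pairs)

# Toy held-out back-test: all numbers illustrative.
predicted_retention = [0.90, 0.78, 0.65, 0.55]
observed_retention  = [0.88, 0.72, 0.60, 0.58]
baseline_retention  = [0.80, 0.80, 0.80, 0.80]  # historical-average baseline

model_mae    = mean_absolute_error(predicted_retention, observed_retention)
baseline_mae = mean_absolute_error(baseline_retention, observed_retention)
lift = baseline_mae - model_mae  # positive lift = model beats the baseline

# Per-segment predicted vs. observed effect of a scenario change.
agreement = directional_agreement([0.05, -0.03, 0.02], [0.04, -0.01, 0.01])
```

The baseline comparison is the important part: a low MAE means little on its own if a naive historical average achieves the same error, so lift over that baseline is what the back-tests are designed to quantify.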

Where the model falls short

Social desirability bias in language models. LLM-driven personas can inherit the social desirability tendencies present in their training data, overstating prosocial behaviors and understating stigmatized ones. We actively monitor for this and disclose it per simulation, but it has not been fully eliminated.

Protocol-specificity requires engagement. Generic outputs are less useful than protocol-calibrated ones. Simulations run on population priors without sponsor-specific data are less precise than those conditioned on actual intake records or study history. The platform improves with engagement.

Outputs are directional signals, not oracles. Intera is not a replacement for human research. It is an augmentation layer for decisions that currently get made without a behavioral signal. We do not claim perfect predictions and do not recommend acting on Intera outputs alone for high-stakes regulatory decisions.

See the methodology in practice.

We run protocol-calibrated simulations for CROs, sponsors, and life sciences consultancies. A pilot takes four weeks.

Request Early Access