Why Tech Transfer Needs a New Blueprint

Contributed Commentary by William Scott-Dunn, PhD, IDBS 

January 13, 2023 | Because the future of BioPharma depends on quality data, you might expect a continuous, two-way flow of richly contextualized information from development to manufacturing. Instead, these functions often feel as if they are separated by an invisible wall.  

While the enterprise may be joint, development and manufacturing cultures are distinct. Poor information sharing can make findings feel like closely guarded secrets. This siloed approach was the accepted norm in earlier eras. Now, it is due for a major renovation. 

Currently, development and manufacturing are joined by the "technology transfer" process: not a steady flow of information, but a static handoff built around carefully drafted and signed documentation. According to one McKinsey & Company survey, the tech transfer process usually takes 18 to 30 months to move required information from development to manufacturing.   

With the wealth of tools for change at our fingertips, it is time to knock down the walls and draw a new blueprint for knowledge sharing. Ideally, tech transfer in its current form will become a relic, and data integrity will become the mortar binding development and manufacturing together.  

How Today's Tech Transfer Processes Fall Short 

Rigorous, meticulous tech transfer processes exist for a reason: when scaling from clinical trial supply to commercial manufacturing processes and supply chain activities, it is essential to get things right. Current tech transfer processes are designed to ensure that only safe, effective therapies reach patients.

But even when current tech transfer processes go according to plan, they can still create huge inefficiencies and delays. Because of data loss or incomplete data context, the process development steps leading to tech transfer often must be repeated. This slows progress to market, and patients miss out as a result.

Consider an example that may be painfully familiar: a fed-batch bioreactor process developed in the UK for manufacturing therapeutic proteins at several sites across the globe. In this case, tech transfer was considered a success. All the documentation was in order; the ink was wet from a dozen signatures.  

Yet as manufacturing began, the team observed consistently lower bioreactor performance at one specific manufacturing site. Bioreactor culture growth and final product titer were only just within acceptable limits, whereas levels at the other sites were comfortably within them.

To replicate and investigate the differences between sites, the development team spent months creating scale-down models in laboratory and pilot bioreactor cultures. One potential cause they investigated was a failure to follow processes correctly on the manufacturing side; unsurprisingly, the relationship between development and manufacturing suffered as a result. 

Finally, a deeper investigation using an Ishikawa (fishbone) diagram, fleshed out with small, manually aligned and pooled datasets, identified raw material as a likely primary cause. Fed-batch bioreactor cultures require many raw materials over the duration of the production run, and differences in these raw materials can contribute to significant variability.
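
To give a sense of what that manual alignment and pooling looks like in practice, here is a minimal sketch using pandas. The data, site names, and column names (site, batch_id, rm_lot, final_titer_g_l) are invented for illustration, not the actual records from this investigation.

```python
# Minimal sketch of manually aligning and pooling small per-site batch exports.
# All data and column names here are invented for illustration.
import pandas as pd

# Each site exported its batch summary slightly differently (column case, naming).
site_a = pd.DataFrame({"Site": ["A", "A"], "Batch_ID": ["A-01", "A-02"],
                       "RM_Lot": ["L1", "L1"], "Final_Titer_g_L": [3.1, 3.0]})
site_c = pd.DataFrame({"site": ["C", "C"], "batch_id": ["C-01", "C-02"],
                       "rm_lot": ["L3", "L3"], "final_titer_g_l": [2.4, 2.3]})

# Manual alignment step: normalize column names before pooling.
frames = [df.rename(columns=str.lower) for df in (site_a, site_c)]
pooled = pd.concat(frames, ignore_index=True)

# Compare titer by site and raw-material lot to flag a candidate cause.
print(pooled.groupby(["site", "rm_lot"])["final_titer_g_l"].describe())
```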

After several expensive scale-down runs using materials sourced directly from the problem site, material variability was confirmed as the problem. Further investigation eventually identified the exact raw material, but not soon enough: manufacturing had already shifted the supply chain to ship all raw materials from a well-performing location.

In this instance, tech transfer checked all the right regulatory boxes. Data were entered carefully and in the right place using Excel, Word, the electronic lab notebook (ELN) and the laboratory information management system (LIMS). Results were dutifully captured and signed off on as valid. But the results data were often missing an important ingredient: context.

Actions and records must be combined and synthesized to generate scientific understanding. This investigation involved joining several databases and writing new SQL queries to develop a rudimentary material and culture genealogy. Tracing those genealogies was harder than it needed to be because the results data were poorly labeled and weakly contextualized.
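
To make that "rudimentary genealogy" concrete, here is a rough illustration of the kind of hand-written join such an investigation requires, using an in-memory SQLite database from Python. The table and column names (batch, material_lot, batch_material) are assumptions for the example, not a real ELN or LIMS schema.

```python
# Rough illustration of a hand-built genealogy query over an in-memory SQLite
# database. Table and column names are assumptions, not a real ELN/LIMS schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE batch          (batch_id TEXT PRIMARY KEY, site TEXT, final_titer REAL);
CREATE TABLE material_lot   (lot_id TEXT PRIMARY KEY, material_name TEXT, supplier TEXT);
CREATE TABLE batch_material (batch_id TEXT, lot_id TEXT);  -- which lots fed which batch

INSERT INTO batch VALUES ('B-17', 'Site C', 2.4);
INSERT INTO material_lot VALUES ('RM-778-L3', 'feed concentrate', 'Supplier X');
INSERT INTO batch_material VALUES ('B-17', 'RM-778-L3');
""")

# Join the three tables to trace every raw-material lot consumed by every batch.
genealogy_query = """
SELECT b.site, b.batch_id, b.final_titer, m.material_name, m.lot_id, m.supplier
FROM batch b
JOIN batch_material bm ON bm.batch_id = b.batch_id
JOIN material_lot   m  ON m.lot_id    = bm.lot_id
ORDER BY b.site, b.batch_id;
"""
for row in conn.execute(genealogy_query):
    print(row)
```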

Imagining a New Blueprint for Tech Transfer 

How could it have been different? Instead of a one-time handoff, imagine a shared data backbone: a single source of truth between development and manufacturing. To be useful, this data backbone would need to embody a deep scientific understanding of the processes involved; it would also need to allow data security and access controls appropriate for external partners like CDMOs.  
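
As a thought experiment, here is one way to picture what a contextualized record in such a backbone might carry, including a simple access check for external partners such as CDMOs. Every field name below is an assumption made for illustration, not any vendor's actual schema.

```python
# Illustrative sketch of a contextualized result record in a shared data backbone.
# Field names are assumptions for the example, not any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class ContextualizedResult:
    value: float                 # the measured result, e.g. final titer
    unit: str                    # explicit units travel with the number
    batch_id: str                # which culture produced it
    site: str                    # where it was produced
    material_lots: list[str]     # raw-material lots consumed by this batch
    method_id: str               # analytical method / procedure version
    allowed_orgs: set[str] = field(default_factory=set)  # access control for partners

    def visible_to(self, org: str) -> bool:
        """Simple access check: only named partner organizations can read the record."""
        return org in self.allowed_orgs

result = ContextualizedResult(2.9, "g/L", "B-1042", "Site C",
                              ["RM-778-L3"], "HPLC-v4", {"sponsor", "cdmo_x"})
print(result.visible_to("cdmo_x"))  # True
```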

But done well, a shared data backbone can eventually replace the one-time tech transfer approach, creating a blueprint for continuous knowledge sharing that makes rapid problem solving easy. 

A cross-functional development and manufacturing team could leverage native genealogies nested within visual analysis tools to trace the history of all the cultures and raw materials used at each site. Using contextualized information from the data backbone, they could rapidly detect changes in raw materials and correlations with any process shifts.  
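
A minimal sketch of what tracing such a genealogy could look like, modeling the history as a simple parent-lookup graph; the node names are invented for illustration.

```python
# Minimal sketch of tracing a culture/material genealogy, modeled as a
# parent-lookup dictionary. Node names are invented for illustration.
from collections import deque

# child -> parents (seed cultures, feed lots, supplier shipments)
parents = {
    "production_run_17": ["seed_train_9", "feed_lot_RM-778-L3"],
    "seed_train_9": ["vial_WCB-2021-04"],
    "feed_lot_RM-778-L3": ["supplier_shipment_5512"],
}

def trace_genealogy(node: str) -> list[str]:
    """Walk backwards from a batch to every ancestor culture, lot, and shipment."""
    seen, queue, history = set(), deque([node]), []
    while queue:
        current = queue.popleft()
        for parent in parents.get(current, []):
            if parent not in seen:
                seen.add(parent)
                history.append(parent)
                queue.append(parent)
    return history

print(trace_genealogy("production_run_17"))
```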

With shared data, the team could spot how raw materials were combined and where they were being consumed. Multivariate analysis could identify the primary contributors to decreased growth and product titer and link those changes to specific lots of raw material at specific sites.
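
One common way to run that kind of multivariate screen is a feature-importance analysis over pooled batch data. The sketch below uses scikit-learn with a small synthetic table; the columns and values are assumptions, not real process data.

```python
# Sketch of a multivariate screen linking process outputs to raw-material lots.
# The DataFrame is synthetic; in practice it would come from the shared backbone.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

batches = pd.DataFrame({
    "site":      ["A", "A", "B", "B", "C", "C", "C", "A"],
    "rm_lot":    ["L1", "L1", "L1", "L2", "L3", "L3", "L3", "L2"],
    "feed_rate": [1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0, 0.9],
    "titer":     [3.1, 3.0, 3.2, 2.9, 2.4, 2.5, 2.3, 2.8],
})

# One-hot encode categorical factors so lots and sites can compete with process parameters.
X = pd.get_dummies(batches[["site", "rm_lot", "feed_rate"]])
y = batches["titer"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance)  # flags which site, lot, or parameter most explains titer variation
```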

The speed of such analysis and intervention could save companies millions in lost revenue by shortening time to market. This new blueprint radically reinvents the concept of tech transfer: instead of one-time documentation, the future should be built on shared resources accessible to all. Grab a brick while they clear the rubble. Instead of walls, build bridges.

William Scott-Dunn, Product Team Manager at IDBS, became a scientist to satisfy a relentless curiosity and solve problems for society, spending over a decade as a process development scientist for UCB, Lonza and Genentech. Since joining IDBS eight years ago, he has focused that passion on software that solves the most pressing problems for today’s bioprocess scientists in the laboratory. His vision is to transform BioPharmaceutical creation by leveraging the huge potential of digital technology. William has a PhD in Biochemical Engineering from the University of Birmingham. He can be reached at wscott@idbs.com.