Digital twins: The next evolutionary step in downstream development [Interview]

Biopharmaceutical process development still relies heavily on experiments, which is time-consuming and expensive. Over the past decades, various approaches have been pursued that successfully cut down development time and cost. Digital bioprocess twins present the next evolutionary step towards more efficient downstream development and mark the beginning of a new era: Bioprocessing 4.0.

A digital twin is commonly understood as a virtual representation of a real-world process. Industries such as aircraft and automotive engineering fundamentally rely on digital twins from concept and design through to manufacturing and service. Although these industries seem to be far ahead, digital twins of bioprocesses have started to shape the biopharmaceutical industry as well. As a game-changing solution, digital twins allow laboratory experiments to be replaced with in silico simulations, providing a cheap and fast environment for research, development and innovation.

Digital twins will also be a key topic on the agenda of this year’s Chromatography Modeling Days (CMD2020), taking place in September in Heidelberg, Germany. As a passionate modeler and early adopter, Dr. Felix Wittkopp will speak on the “Birth of digital twins: How to implement a modeling strategy in early-stage process development.” We asked him to answer some questions to shorten the waiting time until September…

DSP process depicted as digital twin with binary code

GoSilico: Felix, how would you describe your work in three sentences?

Dr. Felix Wittkopp: There is an enormous need for new, innovative drugs for several diseases, and the task of our department is to develop efficient, state-of-the-art protein purification processes to supply these drugs. As patients are waiting, time is an important factor and we want to challenge our timelines wherever possible. The job of my team and me is to develop and implement new technologies, such as mathematical modeling and automation, that support our portfolio teams in developing optimal bioprocesses in an efficient way.

What would you name as an essential asset when working in downstream process development?

I am convinced that optimal solutions are the result of expert knowledge and urgency. This means that, in downstream processing, it is essential to have the ultimate will to understand complex systems and to optimize workflows. This includes the ability to look at challenges from a different perspective and question existing solutions. In the end, your bioprocess is a combination of theoretical background knowledge, the knowledge you extracted from the data, and your knowledge about the required process properties, for instance regarding system limitations after scale-up. Urgency, in turn, is introduced by the short period of time available for generating data and extracting knowledge from it.

The digital twin concept is the next evolutionary step of the classical design-build-test-learn cycle of engineering for product development.

Dr. Felix Wittkopp, Roche Diagnostics GmbH

You work on early-stage process development, e.g. for complex new molecule formats. Working in that field, what is especially challenging?

In early-stage development, you are facing two conflicting goals. As our activities are preclinical and there is an enormous need for new medicines, we want to challenge our timelines wherever possible. On the other hand, the products we develop have to meet the highest standards, as patient safety is of utmost importance. More concretely, this means that the product quality needs to remain stable after entering clinical trials, which limits the implementation of changes to the production process.

In this environment, the application of innovative tools, such as mathematical modeling or automation, has the potential to deliver on both goals. Applying automation, we produce more data in shorter timelines. Mathematical modeling connects different data sets through their fundamental properties, which lets us extract the maximum knowledge out of the available data and create deep process understanding.

The limitations I observe in early-stage development derive from our nested bioprocess development scheme. This is a fast approach, but it means, for instance, that the development of the protein purification process happens in parallel with the cell line selection, the development of the cell culture process, and the development of the analytical methods. It often happens that the final cell line is not available until the final large-scale material supply for our department. Hence, predicting this supply based on data that is not fully representative with respect to the impurity pattern and analytical readouts is challenging.

Another challenge in our business is the complex molecule formats, which represent the majority of our portfolio and also require sophisticated analytical methods. High-resolution analytical methods, like mass spectrometry or size exclusion chromatography, are well established but are not suited for a large number of samples. Unfortunately, large datasets are a prerequisite for the calibration of consistent mathematical models. One solution to this dilemma is the introduction of suitable simplifications, like grouping related protein impurities into one analytical readout. However, this approach limits the validity of a model for later applications and reduces its flexibility, for instance towards upstream changes. In my opinion, this obvious analytical bottleneck will only be solved by new, innovative analytical methods.
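
To illustrate what such a simplification can look like in practice, here is a minimal sketch (hypothetical column names and values, pandas assumed) that lumps several related impurity readouts into a single pseudo-component before model calibration:

```python
import pandas as pd

# Hypothetical analytical readouts per elution fraction (illustrative values only).
fractions = pd.DataFrame({
    "volume_mL":         [1.0, 2.0, 3.0, 4.0],
    "product_mg_mL":     [0.10, 1.80, 2.40, 0.60],
    "hmw_species_mg_mL": [0.02, 0.15, 0.10, 0.05],  # aggregates / high-molecular-weight variants
    "fragment_mg_mL":    [0.01, 0.05, 0.08, 0.03],  # clipped species
})

# Lump the related impurities into one pseudo-component, so the model only has
# to describe a single impurity readout per fraction.
fractions["impurities_mg_mL"] = (
    fractions["hmw_species_mg_mL"] + fractions["fragment_mg_mL"]
)

print(fractions[["volume_mL", "product_mg_mL", "impurities_mg_mL"]])
```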

You already presented some case studies in which digital twins were applied. What’s your perception of a digital twin?

The digital twin concept is the next evolutionary step of the classical design-build-test-learn cycle of engineering for product development. Testing every possible design of a product can be laborious work, especially when the product is a complex process with a large number of influencing factors. Instead, you teach a digital version of your product with real-world data, and ideally with theoretical knowledge of physical or chemical parameters, until the digital twin can make reliable predictions. As soon as a digital model is available, different designs can be predicted in silico within seconds and complex parameter dependencies can be investigated to find hidden patterns and gain insights.

There are great examples from other industries like aviation. For instance, the development of aircraft turbines used to be a time-consuming and expensive effort. Today, mathematical models and digital twins are used for the design, testing and even maintenance of these products.
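
To make the “teach, then predict” idea tangible, here is a minimal, purely illustrative sketch: it calibrates a simple empirical response surface, not a mechanistic chromatography model, against a handful of hypothetical wet-lab runs and then evaluates thousands of candidate conditions in silico. All data values, parameter names and the functional form are assumptions made up for this illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical wet-lab runs: elution salt concentration [mM] and pH vs. step yield [%].
salt = np.array([100., 150., 200., 250., 300., 150., 250., 200.])
ph   = np.array([6.5,  6.5,  6.5,  6.5,  6.5,  7.5,  7.5,  6.0])
yld  = np.array([45.,  78.,  92.,  85.,  60.,  70.,  80.,  75.])

# Simple quadratic response surface standing in for the "digital" process model.
def model(X, a, b, c, d, e, f):
    s, p = X
    return a + b * s + c * p + d * s**2 + e * p**2 + f * s * p

# "Teach" the model with the available experimental data.
params, _ = curve_fit(model, (salt, ph), yld)

# "Predict": evaluate thousands of candidate conditions in silico within seconds.
s_grid, p_grid = np.meshgrid(np.linspace(100, 300, 200), np.linspace(6.0, 8.0, 50))
pred = model((s_grid.ravel(), p_grid.ravel()), *params)
best = pred.argmax()
print(f"Predicted optimum: {s_grid.ravel()[best]:.0f} mM, pH {p_grid.ravel()[best]:.1f}, "
      f"yield {pred[best]:.0f} %")
```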

Dr. Felix Wittkopp, Roche Diagnostics GmbH
Dr. Felix Wittkopp has been working as a Scientist for Roche Pharma Research and Early Development in the department for Bioprocess Research since 2017. The department is part of the Roche Innovation Center Munich.

And what is the benefit of that approach in bioprocessing?

When I started at Roche, I learned quickly that there was a massive amount of practical experience in downstream process development which was used to define standard processes for certain molecule formats. An experienced team member can develop a protein purification process quite quickly and can define, for instance, a step elution of a chromatography column by only doing a few experiments. As long as you need experimental purification data of a molecule before you can do predictions, you cannot compete with these timelines.

However, in order to reduce the time to market and fulfill Quality by Design obligations, the requirements for processes in early-stage development have been continuously rising in recent years. Today, the subsequent departments, such as our clinical supply facility, expect project teams to develop ranges for all process parameters instead of single, optimal values. In addition, interconnections between different process parameters have to be understood in order to define a design space in which the product quality as well as the process yield can be guaranteed.

To achieve this deep process understanding, traditional, wet-lab-based process development would require excessive resources in terms of material, time and manpower. The application of modeling reduces the number of experiments to the reasonable amount still needed for model calibration and verification. Using digital twins, experiments can be predicted within seconds instead of being performed in the lab over several hours. By running large samplings at different process conditions, process parameters can be characterized, understood and optimized in short timelines. Investigating process robustness can be done within days instead of weeks. I am convinced that these are the work packages where I, as a modeler, can contribute and make a difference even for very experienced project teams in early-stage development.
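
As an illustration of what such a large in silico sampling can look like, here is a hedged sketch of a simple Monte Carlo robustness study. The parameter names, variation ranges and the placeholder yield function are assumptions; in a real study the samples would be propagated through the calibrated mechanistic model instead.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 10_000

# Hypothetical variation of process parameters around their set points
# (normal operating ranges assumed for illustration).
salt_mM  = rng.normal(loc=200.0, scale=5.0,  size=n_samples)  # elution salt concentration
ph       = rng.normal(loc=7.0,   scale=0.05, size=n_samples)  # buffer pH
load_g_L = rng.normal(loc=30.0,  scale=1.0,  size=n_samples)  # column load

def predicted_yield(salt_mM, ph, load_g_L):
    """Placeholder for the calibrated digital twin; returns step yield in %."""
    return 92.0 - 0.002 * (salt_mM - 200.0)**2 - 40.0 * (ph - 7.0)**2 - 0.1 * (load_g_L - 30.0)

yields = predicted_yield(salt_mM, ph, load_g_L)

# Probability of meeting the acceptance criterion within the sampled ranges.
print(f"P(yield >= 85 %): {np.mean(yields >= 85.0):.3f}")
print(f"5th percentile of predicted yield: {np.percentile(yields, 5):.1f} %")
```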

However, an even greater benefit is created when the digital twin is further developed from the data obtained during late-clinical-phase technical development. The digital twin concept enables the direct use of early-stage data and knowledge and therefore facilitates the process transfer and refined process development. In addition, applications like online process control could be implemented. So far, none of our early-stage models has reached this milestone but we are working on it and expect it in the near future.

How do you set up your digital twins? Is there a formula for success or an ideal workflow?

At Roche, we identified two scenarios in which the application of chromatography modeling can have a direct impact on downstream process development.

In the first scenario, you calibrate a chromatography model directly after the molecule format is selected and early cell culture material from a pool of different clones is available. With this approach, the whole process characterization and optimization is performed in silico and timelines can therefore be much shorter. The disadvantage of this approach is that certain input and output parameters can change, like the protein impurity pattern or the analytical readouts. This leads to an iterative workflow, where you verify and improve your model as soon as new insights and new data are available.

In the second scenario, the project team develops the process according to a standard approach and the modeling team starts its activities as soon as the data is fully representative of the clinical supplies. In this approach, the process is already fixed and the goal of the digital twin is to support the process robustness study and to define design spaces. Also, a strategy for the continuous improvement of the material quality during the clinical phases can be developed in silico. One disadvantage of this scenario is that there is almost no time left for implementing further changes to the process. Thus, even if the model finds room for process optimization, implementing those changes might be impossible. Another challenge of this scenario is that the timeline for model calibration and application is extremely short. You can only be successful with optimized modeling workflows, standardized datasets, high computational power, and a sufficient workforce.

However, independent of which approach you follow, the most important factor for success is good communication with the project team. The complexity of the model should always be adapted to the needs of the project and the design space that the project team wants to predict.
At the beginning of our modeling activities, I realized that this communication can be quite challenging. On the one hand, I did not know on which work packages I could really have an impact, or what the requirements for my models were. On the other hand, project teams of course did not know which benefits a mechanistic model can provide and how much time is needed to set one up. To facilitate this interface, we created standard modeling workflows including databases, standard experimental designs for certain questions, standard data structures, questionnaires, and tools for evaluation. Since then, we have gained a lot of efficiency and have been able to provide models for almost all our portfolio projects.

I am convinced that we should use all the digital tools available to get as much knowledge out of our data and decrease our timelines; patients are waiting.

Dr. Felix Wittkopp, Roche Diagnostics GmbH

How do you assess the future relevance of computer-guided or in silico process development?

I think in silico process development is, in its fundamentals, nothing completely new. Most of our theoretical understanding of protein chromatography was introduced in the last century. The reason why these tools have such momentum right now is that software packages as well as standardized datasets are now available. If you want to create a chromatography model, you no longer have to solve partial differential equations with self-written scripts. That increases the number of chromatography modeling applications in industry and brings resources to academic groups doing research in this field.
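
For readers who have never written such a script, the following is a heavily simplified sketch of what solving the underlying transport equations can look like: a one-dimensional equilibrium-dispersive column model with a Langmuir isotherm, discretized by the method of lines and integrated with SciPy. All parameter values are hypothetical and the model is far simpler than the mechanistic models used in practice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- Hypothetical column and isotherm parameters (illustrative only) ---
L_col = 0.1    # column length [m]
u     = 1e-3   # interstitial velocity [m/s]
D_ax  = 1e-7   # axial dispersion coefficient [m^2/s]
eps   = 0.7    # total porosity [-]
F     = (1 - eps) / eps
q_max = 10.0   # Langmuir capacity [g/L solid phase]
k_eq  = 1.0    # Langmuir equilibrium constant [L/g]

N  = 200             # number of axial grid cells
dz = L_col / N

def c_inlet(t):
    """Rectangular injection pulse: 1 g/L for the first 60 seconds."""
    return 1.0 if t < 60.0 else 0.0

def rhs(t, c):
    # Equilibrium-dispersive model, method of lines:
    # (1 + F * dq/dc) * dc/dt = D_ax * d2c/dz2 - u * dc/dz
    c_pad = np.concatenate(([c_inlet(t)], c, [c[-1]]))            # Dirichlet inlet, zero-gradient outlet
    d2c = (c_pad[2:] - 2.0 * c_pad[1:-1] + c_pad[:-2]) / dz**2    # central differences for dispersion
    dc  = (c_pad[1:-1] - c_pad[:-2]) / dz                         # first-order upwind for convection
    dq_dc = q_max * k_eq / (1.0 + k_eq * c) ** 2                  # slope of the Langmuir isotherm
    return (D_ax * d2c - u * dc) / (1.0 + F * dq_dc)

t_end = 2000.0
sol = solve_ivp(rhs, (0.0, t_end), y0=np.zeros(N), method="BDF",
                t_eval=np.linspace(0.0, t_end, 400))

# The simulated chromatogram is the outlet concentration over time.
outlet = sol.y[-1, :]
print(f"Peak outlet concentration {outlet.max():.2f} g/L at t = {sol.t[outlet.argmax()]:.0f} s")
```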

Another supporting element is the aspiration to shorten the time to market of new drugs. The main reasons for this are the medical need of patients, the competitive pressure from other companies and the huge financial investments necessary to bring a new molecular entity to market. I am convinced that we should use all the digital tools available to get as much knowledge out of our data and decrease our timelines; patients are waiting.

We are convinced that the concept of digital twins and so-called Bioprocessing 4.0 will soon establish themselves in the biopharmaceutical sector. What could accelerate this process?

In my opinion, the main prerequisites for Bioprocessing 4.0 are organized, high-quality data with standardized features, computational power, sophisticated algorithms, a high degree of automation, and experts who know the biotech industry and know how to implement these technologies. In the past years, many such experts with a strong IT background, who want to use their skills for developing innovative drugs, have been hired. I am amazed to see that tools like cloud computing, data warehouses and lakes, smart data searching tools, automation platforms and auto AI have already been realized.

Now it is time to change the mindset in bioprocessing and unlock the power of Bioprocessing 4.0. We have to leave the comfort zone in which there is usually enough time for performing lab experiments. In addition, repetitive workflows should always be assessed for automation opportunities. The best way to accelerate this process is developing innovative solutions in combination with practically oriented case studies that demonstrate the benefits. Then, it is again a combination of expert knowledge and urgency to do now what patients need next.

Getting into the details of digital twins and chromatography modeling at CMD.

Felix, thanks a lot for your informative and profound insights! At Chromatography Modeling Days 2020, you will also be speaking on the “Birth of digital twins”, a good opportunity to discuss the topic in even more depth. We are looking forward to your talk and to having you in Heidelberg.

Thank you. It was a great experience coming together as a chromatography modeling community at last year’s CMD2019. I really enjoyed the various case studies and the excellent modeling discussions, which you usually do not have at more general conferences. I am looking forward to participating in and contributing to the upcoming CMD2020 in September.
