Process Optimisation Based on Large Databases of Routinely Monitored Industrial Process Data

Authors

  • Karin Kovar
  • Thomas K. Friedli
  • Dusan Roubicek
  • David S. Langenegger
  • Markus Keller
  • Hans-Peter Meyer

DOI:

https://doi.org/10.2533/000942905777675688

Keywords:

Computer-intensive methods, Data-driven statistical methods, Intervention-impact analysis, Large database

Abstract

Huge amounts of data are routinely logged and stored during the monitoring of biotechnological production processes. A concept is described to extract and analyse the information these data contain and to subsequently apply it for process improvement. In total, roughly 100,000 time series of raw and derived signals which stemmed from 173 high-cell-density processes with recombinant microorganisms at 50 m³ scale (working volume) were processed. As is often the case, no mathematical process models were readily available and therefore data-driven, computer-intensive methods were applied. These endeavours helped to stimulate a change in manufacturing strategy, which in turn has led to an increase in the final product titre of 26% on average.
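To illustrate the kind of computer-intensive, data-driven intervention-impact analysis the abstract refers to, the sketch below uses a percentile bootstrap to estimate the relative change in mean final titre between batches run before and after a change in manufacturing strategy. All batch values, sample sizes, and the effect size are synthetic placeholders chosen for illustration; they are not data from the study.

```python
import random

random.seed(42)

# Hypothetical final-titre values (arbitrary units) for batches run
# before and after a change in manufacturing strategy. These numbers
# are illustrative only, not data from the paper.
before = [random.gauss(100, 8) for _ in range(60)]
after = [random.gauss(126, 9) for _ in range(60)]

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_ci(a, b, n_resamples=5000, alpha=0.05):
    """Percentile bootstrap CI for the relative change in means (b vs. a)."""
    diffs = []
    for _ in range(n_resamples):
        # Resample each group with replacement and record the relative change.
        ra = [random.choice(a) for _ in a]
        rb = [random.choice(b) for _ in b]
        diffs.append(mean(rb) / mean(ra) - 1.0)
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_resamples)]
    hi = diffs[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(before, after)
print(f"observed change: {mean(after) / mean(before) - 1.0:+.1%}")
print(f"95% bootstrap CI: [{lo:+.1%}, {hi:+.1%}]")
```

A confidence interval that excludes zero supports attributing the titre increase to the intervention rather than to batch-to-batch noise; resampling methods like this are attractive precisely because, as the abstract notes, no mechanistic process model was available.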

Published

2005-10-26

How to Cite

[1]
K. Kovar, T. K. Friedli, D. Roubicek, D. S. Langenegger, M. Keller, H.-P. Meyer, Chimia 2005, 59, 753, DOI: 10.2533/000942905777675688.