SLAS Technol. 2021 Apr 19;26(4):408–414. doi: 10.1177/24726303211008861

A Machine Vision Approach for Bioreactor Foam Sensing

Jonas Austerjost 1,*, Robert Söldner 1,*, Christoffer Edlund 2, Johan Trygg 2, David Pollard 3, Rickard Sjögren 2
PMCID: PMC8293757  PMID: 33874798

Abstract

Machine vision is a powerful technology that has become increasingly popular and accurate during the last decade due to rapid advances in the field of machine learning. The majority of machine vision applications are currently found in consumer electronics, automotive applications, and quality control, yet the potential for bioprocessing applications is tremendous. For instance, detecting and controlling foam emergence is important for all upstream bioprocesses, but the lack of robust foam sensing often leads to batch failures from foam-outs or overaddition of antifoam agents. Here, we report a new low-cost, flexible, and reliable foam sensor concept for bioreactor applications. The concept applies convolutional neural networks (CNNs), a state-of-the-art machine learning system for image processing. The implemented method shows high accuracy for both binary foam detection (foam/no foam) and fine-grained classification of foam levels.

Keywords: foam sensor, machine vision, deep learning, process analytical technology, bioprocessing

Introduction

Foam emergence is a commonly observed phenomenon in bioprocess upstream applications. 1 Foam, which is provoked mainly by the combination of the gassing needed to support the cell culture and the release of lipids and proteins from cells, has an adverse effect on the operation of the bioreactor and on cell culture productivity. 2 Loss of cell viability leading to cell rupture and foaming can occur due to a lack of specific nutrients or to mechanical stresses from bursting gas bubbles and agitation. The generated foam can develop rapidly, in some cases within minutes, and block exhaust gas filters, resulting in reactor overpressure, reduced sterility integrity, and ultimately batch failure, stressing the need to detect and prevent foam emergence. 3

Established strategies to prevent and eliminate foaming within a bioprocess rely predominantly on chemical methods, but mechanical and physical methods are used as well. Mechanical and physical methods can only destroy existing foam, whereas chemical methods can both prevent the emergence of foam and eliminate foam already present. 4 Typical mechanical strategies to break foam include liquid sprayers, centrifugal foam breakers, and orifice foam breakers, while physical strategies include the application of ultrasonic or thermal probes to break existing foam.4–6 Chemical strategies rely on the addition of so-called antifoam agents to the cell culture broth. These antifoam agents are surface-active substances that alter the surface properties of the medium to reduce its foaming ability; commonly used agents include silicone oils, polypropylene glycol, and glycerol esters. 7 Because they are relatively inexpensive and easy to handle and add to bioprocessing equipment, chemical foam prevention and elimination strategies are used most often. Nevertheless, antifoam agents influence mass transport, and high concentrations may negatively affect the volumetric oxygen mass transfer coefficient as well as the dissolved oxygen concentration, both important parameters for aerobic cell culture processes. 8 This can result in decreased cell growth and reduced product titers.9,10 Furthermore, antifoam agents may cause fouling of filters and membranes in subsequent downstream processing steps, accelerating material fatigue and decreasing purification efficiency.11,12 This underlines the need for a well-considered antifoam agent feeding strategy based on reliable sensor data. Established foam sensors in bioprocessing include conductivity or capacitance probes placed within the bioreactor. The disadvantage of these contact-based sensing strategies is the fouling and coating of the probes, which typically results in false-positive signals. 4 The outcome is high maintenance costs and possible overaddition of chemical antifoams, which can ultimately lead to batch failure. Contactless foam sensing can be achieved with ultrasound sensors, but these are prone to temperature shifts, humidity, and false-positive signals from splashing caused by agitation. 13

As foam is a key bioprocess parameter that can be identified visually, machine vision-based approaches are promising candidates for detecting foam emergence. Traditional machine vision workflows rely on extensive feature engineering, where complex algorithms are hand-crafted to achieve the task at hand.14,15 Achieving good predictive performance with such systems is difficult, and they suffer from low robustness to changes in imaging conditions. In the past decade, however, deep convolutional neural networks (CNNs), together with openly available large-scale annotated data sets, have contributed to exceptional progress in machine vision. 16 Trained end to end on large data sets, CNN-based machine vision has outperformed traditional methods on a wide variety of vision tasks and now dominates the field. 17

In this study, we present a machine vision-based strategy to detect foam within a small-scale (250 mL), single-use bioreactor setup ( Fig. 1 ). The concept was implemented using off-the-shelf hardware components and open-source machine learning software libraries. The established system showed high accuracy in both binary foam detection and fine-grained classification and is a promising approach to overcoming the drawbacks of conventional foam sensor systems, such as fouling, coating, and limited single-level functionality. The noninvasive system provides a proof of concept applicable to both single-use and stainless steel bioreactor formats.

Figure 1.

Schematic diagram depicting the implemented machine vision-based foam detection in a single-use bioreactor setup. First, an image is acquired by a camera module, which is then classified by a CNN. The classification can be performed by either the implemented binary classification model or the developed fine-grained classification model.
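
For illustration, the pipeline of Figure 1 can be condensed into a few lines of Python. This is a minimal sketch under stated assumptions: the ROI coordinates, camera index, model file, and class names below are hypothetical placeholders, not values from this work.

```python
# Minimal sketch of the Figure 1 pipeline: grab a camera frame, crop the
# vessel region of interest (ROI), and classify the crop with a trained
# CNN. ROI, camera index, model file, and class names are illustrative.
import cv2
import torch
from torchvision import transforms

CLASSES = ["no foam", "low foam", "medium foam", "high foam"]
ROI = (400, 100, 900, 600)  # hypothetical (x0, y0, x1, y1) of the vessel

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((250, 250)),  # matches the 250 x 250 training crops
    transforms.ToTensor(),
])

model = torch.load("foam_classifier.pt")  # assumed: full serialized model
model.eval()

cap = cv2.VideoCapture(0)  # camera module placed in front of the bioreactor
ok, frame = cap.read()
cap.release()
if ok:
    x0, y0, x1, y1 = ROI
    crop = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(crop).unsqueeze(0))
    print("Predicted class:", CLASSES[logits.argmax(dim=1).item()])
```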

Materials and Methods

Experimental Setup

All bioreactor experiments were performed using an Ambr250 high-throughput multiparallel bioreactor system, which has become the biotech/biopharma industry state of the art for bioprocessing research and development (The Automation Partnership [Cambridge] Ltd., Cambridge, UK, part of the Sartorius Stedim Biotech Group). 18 The system comprises 12 or 24 disposable vessels (250 mL each) integrated into a liquid handling system for fully automated bioprocessing operation. The system is housed inside a biosafety cabinet to enable aseptic automated sample removal and collection. Two different camera modules were placed in front of the Ambr250 system and used to acquire the image material for model development: a smartphone (Google Pixel 3a XL, Google LLC, Mountain View, CA) and an action camera (apeman A79, Apeman International Co., Ltd., Shenzhen, China). An additional light-emitting diode (LED) light source (Godox LED64, GODOX Photo Equipment Co., Ltd., Shenzhen, China) was used to introduce lighting variations into the image data set during acquisition, in addition to varying the standard Ambr250 clean bench lighting (clean bench light of the biosafety cabinet on/off; see Table 1 for the performed experiments). Each device was fixed to the clean bench window using a dedicated suction cup holder ( Fig. 2A ).

Table 1.

Experimental Plan, Which Resulted from a Full-Factorial Design DoE with 2 Levels for Each Factor (Volume, Dye Addition, Clean Bench Light).

Experiment no.                          1    2    3    4    5    6    7    8
Run order                               2    6    3    7    4    1    5    8
Volume (mL)                             200  240  200  240  200  240  200  240
Dye addition                            No   No   Yes  Yes  No   No   Yes  Yes
Clean bench light (biosafety cabinet)   Off  Off  Off  Off  On   On   On   On

Figure 2.

(A) Experimental setup used for image acquisition. A smartphone, an action camera, and an LED light source were mounted on the clean bench glass in front of a multiparallel small-scale bioreactor system via suction cup holders. (B) The six-step workflow used to obtain a CNN able to distinguish between different levels of foam in single-use, small-scale bioreactors. The workflow is shown for the fine-grained classification model as an example.

To identify important process and environmental parameters with respect to model quality, a design of experiments (DoE) was performed using the software MODDE (Sartorius Stedim Data Analytics AB, Umeå, Sweden). The experimental plan, shown in Table 1, was generated using a full-factorial design (FFD) with two levels for each factor. The “Volume (mL)” entry in the experimental plan corresponds to the filling volume of the cultivation vessel (200 mL/240 mL). The “Dye addition” entry indicates whether 50 µL of food dye (Orange Red, Suchuangyi Technology Co., Ltd., Shenzhen, China) was added to the medium (yes/no). The “Clean bench light” entry specifies whether the clean bench light (part of the biosafety cabinet in which the Ambr250 system is placed) was turned on or off (on/off). To prevent any experimenter bias, the order of execution was assigned at random (“Run order” row). In addition, the external light source (“LED light”) ( Fig. 2A ) was arbitrarily turned on and off to introduce further diversity into the acquired image material. To provoke foam levels of varying intensity, different levels of air supply (5–50 mL/min) were applied and different volumes (100 µL to 1 mL) of a 0.5 g/mL bovine serum albumin (BSA) solution (BSA from Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany) were added to the medium (4Cell XtraCHO Stock & Adaptation, Sartorius Stedim Cellca GmbH, Ulm, Germany). Furthermore, stirrer speed adjustments (500–1500 rpm) were performed during the image acquisition phase. Video material was recorded and the video frames were subsequently extracted; every experiment was recorded for 20 to 25 min to collect a diverse data set of different foam quantities. The resolution of the acquired images was 1920 × 1080 pixels for the smartphone camera and 1520 × 2688 pixels for the action camera.
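
The structure of this plan is easy to reproduce programmatically. The following sketch enumerates the 2^3 full-factorial combinations of Table 1 and randomizes the run order; the paper generated its plan with the MODDE software, so itertools is used here purely for illustration.

```python
# Sketch: enumerate the two-level full-factorial design of Table 1 over
# the three factors and assign a randomized run order. Factor levels are
# taken from the text; MODDE was used in the actual study.
import itertools
import random

factors = {
    "clean_bench_light": ["Off", "On"],
    "dye_addition": ["No", "Yes"],
    "volume_mL": [200, 240],
}

# itertools.product varies the last factor fastest, matching Table 1.
plan = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]

run_order = list(range(1, len(plan) + 1))
random.shuffle(run_order)  # random execution order to prevent experimenter bias

for exp_no, (run, setting) in enumerate(zip(run_order, plan), start=1):
    print(f"Experiment {exp_no} (run {run}): {setting}")
```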

Model Training

After image material acquisition, regions of interest (ROIs) were manually annotated using the cloud-based image annotation platform Dataloop (Dataloop AI, Herzliya, Israel). The ROIs that contain the bioreactor vessel from a side view were cropped out, rescaled to 250 × 250 pixels, and assigned to a class (no foam, low foam, medium foam, high foam) by a single subject matter expert trained in bioprocessing scenarios to reduce the risk of introducing inconsistent labels (see Supplemental Material for example images and their assigned classes). The resulting data set, which formed the foundation for model generation and validation, is specified in Table 2 .
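
As an illustration of the crop-and-rescale step, the sketch below crops an annotated vessel ROI and resizes it to 250 × 250 pixels, storing crops in one folder per class. The (frame, ROI, label) records are illustrative stand-ins for the Dataloop annotation export, whose exact format is not specified here.

```python
# Sketch of the described preprocessing: crop each annotated vessel ROI
# and rescale it to 250 x 250 pixels, sorted into one folder per class.
# The records below are hypothetical examples, not real annotations.
from pathlib import Path
from PIL import Image

records = [  # (frame path, ROI as (left, upper, right, lower), class label)
    ("frames/exp01_00012.png", (410, 90, 905, 610), "low foam"),
]

out_dir = Path("dataset")
for frame_path, roi, label in records:
    crop = Image.open(frame_path).crop(roi).resize((250, 250))
    class_dir = out_dir / label.replace(" ", "_")
    class_dir.mkdir(parents=True, exist_ok=True)
    crop.save(class_dir / Path(frame_path).name)
```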

Table 2.

Acquired Image Data Set and Manually Annotated Classes.

Class         Whole Data Set   Action Camera   Smartphone
No foam        982              17              965
Low foam      2183             142             2041
Medium foam   1542             124             1418
High foam      428              61              367
Total         5135             344             4791

The data shown in Table 2 comprise the annotated image material generated by the action camera, which took an image every 10 s, and the annotated image material originating from the smartphone camera video, from which every 90th frame was extracted (corresponding to one image every 3 s, downsampled from the original acquisition at 30 frames per second). For the experiments on binary classification, which distinguishes between no foam and foam, the classes low foam, medium foam, and high foam were combined into the single class foam, whereas all classes were used for fine-grained classification.
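
The frame extraction step can be sketched as follows, saving every 90th frame of a recorded video (one image every 3 s at 30 frames per second). File paths are illustrative assumptions.

```python
# Sketch: extract every 90th frame from a recorded smartphone video,
# i.e., one image every 3 s at the 30 fps acquisition rate.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("experiment_01.mp4")  # hypothetical file name
step, idx, saved = 90, 0, 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of video
        break
    if idx % step == 0:
        cv2.imwrite(f"frames/exp01_{saved:05d}.png", frame)
        saved += 1
    idx += 1
cap.release()
print(f"Extracted {saved} frames")
```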

The annotation data and the cropped raw image data were exported and used to train CNN models for image classification, using the Python programming language (version 3.6) and the deep learning framework PyTorch (version 1.4) (Facebook Research, Menlo Park, CA). For both binary foam detection and fine-grained classification, a ResNet-18 neural network was used. 19 ResNet variants are widely used for image classification, and ResNet-18 is the smallest model in this family.
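
A minimal sketch of this model definition follows: torchvision's ResNet-18 with the final fully connected layer replaced to match the number of foam classes. Whether the authors initialized from pretrained weights is not stated; training from scratch is assumed here.

```python
# Sketch: instantiate ResNet-18 and replace its final fully connected
# layer for the foam classes. Pretrained initialization is not stated
# in the paper, so training from scratch is assumed.
import torch.nn as nn
from torchvision.models import resnet18

def build_model(num_classes: int) -> nn.Module:
    model = resnet18(pretrained=False)  # torchvision 0.5-era API
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

binary_model = build_model(num_classes=2)        # foam / no foam
fine_grained_model = build_model(num_classes=4)  # no/low/medium/high foam
```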

Both the binary and fine-grained models were trained with cross-entropy loss for 30 epochs, a batch size of 50 images (the largest batch size fitting on the graphics processing unit [GPU] used for training), a learning rate of 1.2e−5 (chosen after pilot experiments evaluating the influence of the learning rate on model convergence on the validation set), the Adam optimizer, 20 and random horizontal flips for data augmentation. For each training, the model with the lowest validation loss was saved and used for evaluation (see Supplemental Material for the corresponding loss plots). The validation data were created by randomly taking 10% of the training data and using that split for all models on the same task (binary or fine-grained classification). Because more foam than no-foam images are present, a class weight of 0.4, corresponding to the ratio of foam and no-foam images, was applied to the foam images when training the binary classifier but not the fine-grained one.
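
The following is a hedged sketch of this training procedure for the binary model, using the stated hyperparameters (weighted cross-entropy, Adam at 1.2e−5, batch size 50, 30 epochs, horizontal flips, checkpointing on lowest validation loss). The folder layout and the class-index order of the weight tensor are assumptions, not details from the paper.

```python
# Sketch of the described training run. Dataset paths and the order of
# the class-weight tensor (index 0 assumed to be "foam") are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

train_tf = transforms.Compose([transforms.RandomHorizontalFlip(),
                               transforms.ToTensor()])
train_dl = DataLoader(datasets.ImageFolder("dataset/train", train_tf),
                      batch_size=50, shuffle=True)
val_dl = DataLoader(datasets.ImageFolder("dataset/val", transforms.ToTensor()),
                    batch_size=50)

model = models.resnet18(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # foam / no foam
model = model.to(device)

weights = torch.tensor([0.4, 1.0], device=device)  # down-weight foam class
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1.2e-5)

best_val_loss = float("inf")
for epoch in range(30):
    model.train()
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                       for x, y in val_dl) / len(val_dl)
    if val_loss < best_val_loss:  # keep checkpoint with lowest val loss
        best_val_loss = val_loss
        torch.save(model.state_dict(), "best_binary_model.pt")
```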

Model Validation

To validate the models, a subset of the images was excluded from model training and used only as a test set to evaluate the models’ classification performance. For the binary classifier, all images from one smartphone video capture were excluded to constitute a test set of 672 images, of which 477 contained foam and 195 did not. Due to the low number of high-foam images in any given video capture, another test set of 512 images was designed for the fine-grained model, containing 98 no-foam, 218 low-foam, 154 medium-foam, and 42 high-foam images. Since the classes are imbalanced, evaluation accuracy as commonly defined, that is, the ratio of correct classifications, would be biased toward the class with the most labels. Instead, classification performance was evaluated by calculating the F1 score, defined as

Precision = TP / (TP + FP)   (1)

Recall = TP / (TP + FN)   (2)

F1 = 2 × (Precision × Recall) / (Precision + Recall)   (3)

Here, TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative predictions, respectively. Precision indicates what fraction of the images classified as containing foam actually do so, and recall indicates what fraction of all images containing foam were correctly classified as such. The F1 score is the harmonic mean of precision and recall and is widely used to provide a single evaluation metric for classification models when classes are imbalanced.
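
A direct translation of Eqs. (1)–(3) into code is shown below. For the fine-grained task, per-class scores would be averaged; the paper does not state the averaging convention, so macro averaging is one reasonable option.

```python
# Sketch: precision, recall, and F1 per Eqs. (1)-(3) for one positive
# class, computed from paired true and predicted labels.
def classification_scores(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with "foam" as the positive class
y_true = ["foam", "foam", "foam", "no foam"]
y_pred = ["foam", "foam", "no foam", "no foam"]
print(classification_scores(y_true, y_pred, positive="foam"))  # ≈ (1.0, 0.667, 0.8)
```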

To qualitatively validate the models’ predictions, the image regions the models attend to when predicting were visualized using the GradCAM++ method.21,22 GradCAM++ uses a weighted combination of the positive partial derivatives of the last CNN layer’s feature maps to produce a heat map over the image, highlighting the regions the model pays the most attention to when making its prediction. Although GradCAM++ does not provide a full explanation of a prediction, the heat maps provide intuition about whether the model’s predictions are based on sensible information.
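
To illustrate the mechanics, a simplified Grad-CAM-style sketch is given below; note that the paper uses the GradCAM++ variant via the cited stefannc/GradCAM-Pytorch implementation, and that the hook API shown here requires PyTorch ≥ 1.8, newer than the version used in the study.

```python
# Simplified Grad-CAM sketch (plain Grad-CAM, not GradCAM++): hooks
# capture activations and gradients of the last ResNet-18 conv block,
# and the heat map is the ReLU of the gradient-weighted activation sum.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    store = {}
    layer = model.layer4  # last convolutional block of ResNet-18
    h1 = layer.register_forward_hook(
        lambda m, inp, out: store.update(act=out))
    h2 = layer.register_full_backward_hook(
        lambda m, gin, gout: store.update(grad=gout[0]))

    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()  # normalized heat map
```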

Results and Discussion

Two foam classification models were generated following a six-step workflow ( Fig. 2B ). One of the resulting models is a binary model, which is able to distinguish between foam and no foam. The second model is a fine-grained model, which is able to classify foam-containing bioreactor images into the classes no foam, low foam, medium foam, and high foam.

The developed binary foam classification model showed strong performance, with an F1 score higher than 97% on an independent test set ( Table 3 ), indicating that the machine vision system reliably detects foam buildup. The fine-grained classifier showed promising results, with an F1 score of around 76% on the considerably more difficult task of fine-grained classification. Inspecting the confusion matrix for the fine-grained classifier shows that the main source of error is high-foam images mistaken for medium-foam ones and, to a slightly lesser degree, medium-foam images mistaken for low-foam ones ( Fig. 3B ). The models’ predictions were visualized using the GradCAM++ method 21 ( Fig. 4 ). This method provides a heat map representation that indicates the regions most relevant to the model’s decision, allowing qualitative investigation of the models’ predictions. For the binary classifier, the model correctly focuses on the liquid–gas interface when making a correct prediction, but focuses on other parts of the vessel when making incorrect predictions ( Fig. 4A ). A similar behavior can be observed for the fine-grained classification model ( Fig. 4B ): here, too, the model focuses on the foam area for correct predictions and on other parts of the vessel environment for incorrect predictions.

Table 3.

CNN Classification Performance on Foam Detection.

Model          Precision (%)   Recall (%)   F1 Score (%)
Binary         97.95           96.95        97.45
Fine-grained   76.35           78.68        75.58

Figure 3.

Confusion matrices of the developed classification models. (A) Confusion matrix for the binary classifier model. (B) Confusion matrix for the fine-grained classifier model. Each row indicates the true image labels; columns indicate the respective model’s predictions. Values indicate the proportion of model predictions for images with the label of each row.

Figure 4.

Exemplary GradCAM++ visualizations of the developed classification models. (A) Visualizations of the binary classification model. (B) Visualizations of the fine-grained classification model. Image boxes from left to right: Raw input image; respective GradCAM++ heat map, where blue means low attention and red means high; and the input image with the corresponding heat map overlay.

These visualizations allow interpretation of the CNN classifiers’ behavior and indicate that the developed models are capable of recognizing the foam region within the vessel area and using it for the classification tasks. The incorrect classifications observed result from the CNNs not focusing on the foam area of the vessel or from edge cases introduced by the manual annotation of images. However, these failures may be avoided by averaging the predictions over time-consecutive sequences of images, instead of relying on a single image, to obtain a more robust signal. For example, at the video acquisition frame rate of 30 frames per second, the binary foam detector produces, in the worst case, one wrong prediction every 48 frames on average, assuming failures uniformly distributed over time. In this case, averaging the predictions over one second of video (30 frames) may drastically reduce the impact of the misclassifications, as in the sketch below. This approach is applicable to actual cultivation setups, where foam emergence usually takes several seconds to minutes.
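
One simple realization of this temporal smoothing, given here as a sketch rather than the authors’ implementation, is a sliding majority vote over one second of per-frame predictions:

```python
# Sketch: sliding majority vote over a window of per-frame predictions
# (30 frames = 1 s at 30 fps) to suppress isolated misclassifications.
from collections import Counter, deque

def smoothed_predictions(frame_predictions, window=30):
    buffer = deque(maxlen=window)
    for prediction in frame_predictions:
        buffer.append(prediction)
        yield Counter(buffer).most_common(1)[0][0]  # majority label in window

# A single spurious "foam" frame in a no-foam stream is voted down:
stream = ["no foam"] * 20 + ["foam"] + ["no foam"] * 20
print(list(smoothed_predictions(stream))[-1])  # -> no foam
```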

Concluding Remarks and Outlook

Conventional foam sensor probes have several disadvantages: they are prone to fouling and coating and are sensitive to reactor conditions such as humidity, splashing from agitation, and temperature. This can result in false-positive sensing and overdosing of antifoam, leading to batch failure. Furthermore, their high cost and lack of robustness do not justify their application in single-use bioprocessing setups. The proposed combination of commodity camera modules and the developed CNN approach for real-time foam identification and quantification overcomes these drawbacks. The trained models can be implemented in real-life setups either by hardcoding the ROI into the image acquisition modules or, where module/equipment positions are flexible, via a preceding object detection step that delivers the appropriate ROI, such as deep cropping approaches. 23 The developed algorithm demonstrated high performance in foam identification (binary classification, foam/no foam), with an F1 score of 98% on an independent test set. Furthermore, fine-grained classification of foam levels (no foam, low foam, medium foam, high foam) showed good results; the model achieved an F1 score of 76% over all classes, indicating great promise for image-based foam quantification. The main source of error here was distinguishing between medium-foam and high-foam images, a task difficult even for a subject matter expert if no metric scale is provided for orientation. Furthermore, the foam height is usually not distributed equally along the foam surface, which adds further complexity to this machine vision task. Nevertheless, the established system provides an inexpensive, accurate, and flexible alternative to traditional foam-sensing systems.

Going forward, implementing an antifoam agent feeding strategy based on the resulting fine-grained sensor signal, so that antifoam is added on demand, would minimize negative effects on both bioprocessing equipment and cell behavior. The resulting reduction of batch failures from antifoam overdosing would improve the efficiency of process development as well as manufacturing processes. Other useful additions to the concept system include outlier detection capabilities to reduce the impact of process artifacts, for example, the accidental blocking of the camera view of the bioreactor by an object or an operator’s hand during routine operation. Model accuracy could potentially be improved further by introducing exact foam metrics (via surface or volume measurements) as annotation data. Additionally, to further reduce the risk of biased or inconsistent labels, a diverse data set labeled by multiple subject matter experts whose assessments are then aggregated, for instance by majority voting (see the sketch below), is preferable, especially for difficult-to-judge edge cases. Furthermore, both models presented in this work could likely be optimized for higher performance by tuning the model architecture, learning rate, loss function, and so on, which we leave for future work.
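
Such majority-vote aggregation of annotations could look like the following sketch; the tie-handling policy (escalation to expert review) is an assumption for illustration.

```python
# Sketch: aggregate labels from several annotators by majority vote.
# Ties return None so the image can be flagged for expert review.
from collections import Counter

def aggregate_labels(annotations):
    """annotations: one label per annotator for a single image."""
    ranked = Counter(annotations).most_common(2)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie between annotators: flag for review
    return ranked[0][0]

print(aggregate_labels(["low foam", "low foam", "medium foam"]))  # low foam
print(aggregate_labels(["low foam", "medium foam"]))              # None (tie)
```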

To conclude, the presented concept shows great promise for applying machine vision to implement inexpensive, flexible, and robust foam monitoring and control for upstream bioprocessing. We anticipate that this machine vision methodology will be further expanded to other areas of bioprocessing.

Supplemental Material

Supplemental material, sj-pdf-1-jla-10.1177_24726303211008861, for A Machine Vision Approach for Bioreactor Foam Sensing by Jonas Austerjost, Robert Söldner, Christoffer Edlund, Johan Trygg, David Pollard and Rickard Sjögren in SLAS Technology.

Footnotes

Supplemental material is available online with this article.

Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

ORCID iDs: Jonas Austerjost https://orcid.org/0000-0002-2080-1556

Christoffer Edlund https://orcid.org/0000-0003-0003-3681

References

1. Etoc A., Delvigne F., Lecomte J. P.; et al. Foam Control in Fermentation Bioprocess: From Simple Aeration Tests to Bioreactor. Appl. Biochem. Biotechnol. 2006, 130, 392–404.
2. Routledge S. J. Beyond De-Foaming: The Effects of Antifoams on Bioprocess Productivity. Comput. Struct. Biotechnol. J. 2012, 3, e201210001.
3. Vardar-Sukan F. Foaming: Consequences, Prevention and Destruction. Biotechnol. Adv. 1998, 16, 913–948.
4. Flickinger M. C., Delvigne F., Lecomte J. Foam Formation and Control in Bioreactors. In Encyclopedia of Industrial Biotechnology; John Wiley & Sons: Hoboken, NJ, 2010; pp 1–13.
5. Vardar-Sukan F. Foaming and Its Control in Bioprocesses. In Recent Advances in Biotechnology; Springer Netherlands: Dordrecht, 1992; pp 113–146.
6. Goldberg M., Rubin E. Mechanical Foam Breaking. Ind. Eng. Chem. Process Des. Dev. 1967, 6, 195–200.
7. Junker B. Foam and Its Mitigation in Fermentation Systems. Biotechnol. Prog. 2007, 23, 767–784.
8. Kawase Y., Moo-Young M. The Effect of Antifoam Agents on Mass Transfer in Bioreactors. Bioprocess Eng. 1990, 5, 169–173.
9. Wang J., Honda H., Park Y. S.; et al. Effect of Dissolved Oxygen Concentration on Growth and Production of Biomaterials by Animal Cell Culture. In Animal Cell Technology: Basic & Applied Aspects; Springer Netherlands: Dordrecht, 1994; pp 191–195.
10. Restelli V., Wang M. D., Huzel N.; et al. The Effect of Dissolved Oxygen on the Production and the Glycosylation Profile of Recombinant Human Erythropoietin Produced from CHO Cells. Biotechnol. Bioeng. 2006, 94, 481–494.
11. Liew M. K. H., Fane A. G., Rogers P. L. Fouling Effects of Yeast Culture with Antifoam Agents on Microfilters. Biotechnol. Bioeng. 1997, 53, 10–16.
12. Mohamad Pauzi S., Anak Halbert D., Azizi S.; et al. Effect of Organic Antifoam’s Concentrations on Filtration Performance. In Journal of Physics: Conference Series; Institute of Physics Publishing: Bristol, UK, 2019; Vol. 1349, p 12141.
13. Rod R. L. Ultrasonic Liquid Level Sensor. In 1957 IRE National Convention Record (IRECON 1957); Institute of Electrical and Electronics Engineers: Piscataway, NJ, 1957; pp 36–38.
14. Condé B. C., Fuentes S., Caron M.; et al. Development of a Robotic and Computer Vision Method to Assess Foam Quality in Sparkling Wines. Food Control 2017, 71, 383–392.
15. Cimini A., Pallottino F., Menesatti P.; et al. A Low-Cost Image Analysis System to Upgrade the Rudin Beer Foam Head Retention Meter. Food Bioprocess. Technol. 2016, 9, 1587–1597.
16. Wahab N., Khan A., Lee Y. S. Transfer Learning Based Deep CNN for Segmentation and Detection of Mitoses in Breast Cancer Histopathological Images. Microscopy 2019, 68, 216–233.
17. Hussain M., Bird J. J., Faria D. R. A Study on CNN Transfer Learning for Image Classification. In Advances in Intelligent Systems and Computing; Springer Verlag: Berlin, 2019; Vol. 840, pp 191–202.
18. Sandner V., Pybus L. P., McCreath G.; et al. Scale-Down Model Development in Ambr Systems: An Industrial Perspective. Biotechnol. J. 2019, 14, 1700766.
19. He K., Zhang X., Ren S.; et al. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE Computer Society: Washington, DC, 2016; pp 770–778.
20. Kingma D. P., Ba J. L. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings; San Diego, CA, May 7–9, 2015.
21. Selvaraju R. R., Cogswell M., Das A.; et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359.
22. GitHub. stefannc/GradCAM-Pytorch: A PyTorch Implementation of GradCAM, GradCAM++, and Smooth-GradCAM++. https://github.com/stefannc/GradCAM-Pytorch (accessed Oct 16, 2020).
23. Wang W., Shen J. Deep Cropping via Attention Box Prediction and Aesthetics Assessment. In Proceedings of the IEEE International Conference on Computer Vision (ICCV); Institute of Electrical and Electronics Engineers: Piscataway, NJ, 2017; pp 2205–2213.
