Abstract
A new method allows researchers to automatically assign cells to different cell types and tissues, a step that is critical for understanding complex organisms.
Research organism: P. dumerilii
Related research article Zinchenko V, Hugger J, Uhlmann V, Arendt D, Kreshuk A. 2023. MorphoFeatures for unsupervised exploration of cell types, tissues and organs in volume electron microscopy. eLife 12:e80918. doi: 10.7554/eLife.80918.
Since the advent of microscopy in the 17th century, it has become well established that organisms are divided into tissues made up of different types of cells, with cells of the same type typically performing the same role. This simplifies the task of understanding a biological system immensely, as there are many fewer cell types than individual cells (Masland, 2001).
Categorizing tissues and cell types has always been done manually, usually by grouping cells that look the same based on their shape, internal structures and various other features. This is also true for images collected using modern-day techniques, such as electron microscopy, which can provide three-dimensional reconstructions of tissue samples, or even of entire small organisms less than a cubic millimeter in size.
While electron microscopy images can be automatically subdivided or ‘segmented’ into individual cells, assigning each one to a cell type by hand is both difficult and time-consuming; for example, in a recent project, it took several experts many months to categorize one half of the fruit fly brain (Scheffer et al., 2020). The whole task becomes even more challenging if the object being studied is not a well-known model organism. Now, in eLife, Valentyna Zinchenko, Johannes Hugger, Virginie Uhlmann, Detlev Arendt and Anna Kreshuk of the European Molecular Biology Laboratory report a new method that could simplify this process (Zinchenko et al., 2023).
Zinchenko et al. based their program on a machine learning method called unsupervised contrastive learning (van den Oord et al., 2018). The program works by grouping cells without having received prior examples of ‘human-classified’ cell types or features (i.e., it is unsupervised), and by finding features that maximize the difference (or contrast) between examples that should be grouped together and those that should not. The method requires many example pairs, both of cells that should be grouped together and of cells that should not. For the positive examples (those that should be grouped together), Zinchenko et al. created synthetic copies of each existing cell with minor modifications, such as different rotations, reflections, and texture or shape variations; the original cell and its modified copy should then be grouped together. For the negative examples (those that are of different types), they picked pairs of cells at random from their sample. This pairing will occasionally be wrong (two randomly chosen cells can happen to share a type), but it is accurate often enough to train the model while keeping the whole process unsupervised.
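To make the idea concrete, here is a minimal sketch of such a contrastive objective written in Python with PyTorch. It is not the authors' implementation: the `encoder` network and `augment` function are hypothetical placeholders standing in for the learned feature extractor and the synthetic-copy step described above.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(encoder, cells, augment, temperature=0.1):
    """InfoNCE-style loss: each cell and its augmented copy form a positive
    pair; every other cell in the batch serves as a negative example."""
    z_orig = F.normalize(encoder(cells), dim=1)          # original cells
    z_aug = F.normalize(encoder(augment(cells)), dim=1)  # rotated/reflected copies
    similarity = z_orig @ z_aug.T / temperature          # all pairwise similarities
    targets = torch.arange(len(cells))                   # cell i matches its own copy i
    return F.cross_entropy(similarity, targets)
```

Minimizing this loss pushes each cell's vector towards that of its modified copy and away from the vectors of the randomly paired cells, which is the contrast the method relies on.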
Machine learning was then applied to find features that are shared by the positive pairs but not by the negative ones. The system combined the learned features of each original cell into a vector that summarizes the cell’s shape and texture. The team found that cells belonging to the same type sat close together in this feature space, which can be visualized and interpreted with existing dimension-reduction techniques (McInnes et al., 2018).
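As an illustration of that last step, the cited UMAP technique projects each high-dimensional cell vector down to two dimensions for plotting; a minimal sketch, where the array of learned vectors is random placeholder data rather than real output of the model:

```python
import numpy as np
import umap  # from the umap-learn package (McInnes et al., 2018)
import matplotlib.pyplot as plt

features = np.random.rand(500, 80)  # placeholder: one learned vector per cell
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)

plt.scatter(embedding[:, 0], embedding[:, 1], s=5)
plt.title("Cells of the same type should form nearby clusters")
plt.show()
```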
Zinchenko et al. then tested their model on a three-dimensional reconstruction of the annelid worm Platynereis dumerilii obtained through electron microscopy. The model recovered the different cell types and identified subgroups of cells that could not be distinguished using human-specified features. Moreover, when compared to a gene expression map of the whole animal, the cells that had been classified as similar based on their features also shared similar genetic signatures, more so than cells that had previously been clustered using ‘human-designed’ features.
Next, they extended their classification method to consider both the shape and texture of each cell, together with the combined features of all physically adjacent cells. Grouping these enhanced features revealed different tissues and organs within the animal. The model’s classification strongly agreed with human results, but it also found subtle tissue distinctions and rare features that had been overlooked by humans examining the same data set. For example, the analyses revealed a specific type of neuron in the midgut region of the worm, which had previously not been confirmed to be located in this area of the body.
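One simple way to fold in such neighborhood information is to append to each cell's vector the average vector of the cells that physically touch it, then cluster the result. The sketch below illustrates this under assumed data structures (a toy adjacency list and random vectors); the published pipeline may aggregate neighborhood features differently.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def neighborhood_features(features, adjacency):
    """Append to each cell's own vector the mean vector of its physical
    neighbors, so that clustering can see local tissue context."""
    context = np.stack([features[nbrs].mean(axis=0) for nbrs in adjacency])
    return np.hstack([features, context])

# Toy data: six cells with 4-dimensional vectors and a made-up adjacency list.
feats = np.random.rand(6, 4)
adjacency = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]

enriched = neighborhood_features(feats, adjacency)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(enriched)
print(labels)  # cells grouped into putative tissues
```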
The ‘unsupervised’ aspect of the method created by Zinchenko et al. is critical because it means the program does not require a full library of the relevant cell types (or a full list of the features that can distinguish between them). Instead, the program learns these characteristics from the data itself (Figure 1). This is particularly useful for systems where the cell types are not known. Moreover, the program is not restricted to cell features that humans deem important, such as the roundness of a cell or the presence of dark vesicles. As a result, the model can often outperform humans and operate with less bias: because it is not told what to expect, it is less likely to overlook rare or unexpected cell types.
Electron microscopy and related techniques provide an incredible level of detail, including the shape, location and structure of every cell. But analyzing this flood of data by hand is nearly impossible, and automated techniques are desperately needed to unlock its potential (Eberle and Zeidler, 2018). Significant progress has already been made in automating some tasks, such as cell segmentation and synapse identification, leaving cell and tissue identification as some of the most time-consuming manual steps (Januszewski et al., 2018; Huang et al., 2018). By helping to automate this step too, Zinchenko et al. take a critical step towards understanding these invaluable but intimidating data sets.
Biography
Louis K Scheffer is in the Janelia Research Campus, HHMI, Ashburn, United States
Competing interests
No competing interests declared.
References
- Eberle AL, Zeidler D. Multi-beam scanning electron microscopy for high-throughput imaging in connectomics research. Frontiers in Neuroanatomy. 2018;12:112. doi: 10.3389/fnana.2018.00112.
- Huang GB, Scheffer LK, Plaza SM. Fully-automatic synapse prediction and validation on a large data set. Frontiers in Neural Circuits. 2018;12:87. doi: 10.3389/fncir.2018.00087.
- Januszewski M, Kornfeld J, Li PH, Pope A, Blakely T, Lindsey L, Maitin-Shepard J, Tyka M, Denk W, Jain V. High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods. 2018;15:605–610. doi: 10.1038/s41592-018-0049-4.
- Masland RH. The fundamental plan of the retina. Nature Neuroscience. 2001;4:877–886. doi: 10.1038/nn0901-877.
- McInnes L, Healy J, Melville J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv. 2018. https://arxiv.org/abs/1802.03426
- Scheffer LK, Xu CS, Januszewski M, Lu Z, Takemura SY, Hayworth KJ, Huang GB, Shinomiya K, Maitlin-Shepard J, Berg S, Clements J, Hubbard PM, Katz WT, Umayam L, Zhao T, Ackerman D, Blakely T, Bogovic J, Dolafi T, Kainmueller D, Kawase T, Khairy KA, Leavitt L, Li PH, Lindsey L, Neubarth N, Olbris DJ, Otsuna H, Trautman ET, Ito M, Bates AS, Goldammer J, Wolff T, Svirskas R, Schlegel P, Neace E, Knecht CJ, Alvarado CX, Bailey DA, Ballinger S, Borycz JA, Canino BS, Cheatham N, Cook M, Dreher M, Duclos O, Eubanks B, Fairbanks K, Finley S, Forknall N, Francis A, Hopkins GP, Joyce EM, Kim S, Kirk NA, Kovalyak J, Lauchie SA, Lohff A, Maldonado C, Manley EA, McLin S, Mooney C, Ndama M, Ogundeyi O, Okeoma N, Ordish C, Padilla N, Patrick CM, Paterson T, Phillips EE, Phillips EM, Rampally N, Ribeiro C, Robertson MK, Rymer JT, Ryan SM, Sammons M, Scott AK, Scott AL, Shinomiya A, Smith C, Smith K, Smith NL, Sobeski MA, Suleiman A, Swift J, Takemura S, Talebi I, Tarnogorska D, Tenshaw E, Tokhi T, Walsh JJ, Yang T, Horne JA, Li F, Parekh R, Rivlin PK, Jayaraman V, Costa M, Jefferis GS, Ito K, Saalfeld S, George R, Meinertzhagen IA, Rubin GM, Hess HF, Jain V, Plaza SM. A connectome and analysis of the adult Drosophila central brain. eLife. 2020;9:e57443. doi: 10.7554/eLife.57443.
- van den Oord A, Li Y, Vinyals O. Representation Learning with Contrastive Predictive Coding. arXiv. 2018. https://arxiv.org/abs/1807.03748
- Zinchenko V, Hugger J, Uhlmann V, Arendt D, Kreshuk A. MorphoFeatures for unsupervised exploration of cell types, tissues and organs in volume electron microscopy. eLife. 2023;12:e80918. doi: 10.7554/eLife.80918.