ACS Catal. 2023 Oct 26;13(21):14454–14469. doi: 10.1021/acscatal.3c03417

Figure 3.

Representation of protein sequences and structures for machine learning. Both sequences and structures can be encoded as fixed or learned representations (also known as embeddings). Fixed representations are deterministic and based on inherent sequence or structural features, whereas learned representations are typically generated by neural networks via unsupervised learning on large unlabeled data sets. Simplified examples of each representation technique are shown. In every case, the final representation is a numerical vector suitable as input for a machine learning model. Models can be trained on sequence representations alone, on structural representations alone, or on a combination of both.
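
To make the distinction concrete, the short Python sketch below illustrates a fixed sequence representation of the kind the caption describes: a deterministic one-hot encoding that converts an amino-acid sequence into a numerical vector usable as model input. The alphabet ordering, function name, and example sequence are illustrative assumptions and do not correspond to the specific representations shown in the figure; a learned representation would instead be produced by a pretrained neural network.

    import numpy as np

    # Standard 20 amino acids in a fixed (alphabetical) order; the ordering is
    # an assumption made for this example.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_TO_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def one_hot_encode(sequence: str) -> np.ndarray:
        """Fixed (deterministic) representation of a protein sequence:
        an L x 20 one-hot matrix flattened into a single numerical vector."""
        encoding = np.zeros((len(sequence), len(AMINO_ACIDS)), dtype=np.float32)
        for position, residue in enumerate(sequence):
            encoding[position, AA_TO_INDEX[residue]] = 1.0
        return encoding.flatten()

    # Example: a hypothetical 7-residue peptide becomes a 7 x 20 = 140-dimensional
    # vector that can be passed directly to a machine learning model.
    vector = one_hot_encode("MKTAYIA")
    print(vector.shape)  # (140,)

Because the encoding depends only on the sequence itself, the same input always yields the same vector, which is what distinguishes fixed representations from learned embeddings trained on large unlabeled data sets.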