Representation of protein
sequences and structures for machine
learning. Both sequences and structures can be represented by fixed
or learned representations. Fixed representations are deterministic
and derived from inherent sequence or structural features, whereas
learned representations (also known as embeddings) are typically
generated by neural networks via unsupervised learning on large
unlabeled data sets. Simplified examples of each representation
technique are shown. In each case, the final representations are numerical
vectors suitable as input for machine learning models. Models can
be trained on sequence representations alone, on structural
representations alone, or on a combination of both.
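As a minimal sketch of one fixed, deterministic representation of the kind described above, the following uses one-hot encoding of amino acids; this is a common illustrative choice, not necessarily the exact scheme depicted in the figure:

```python
# Illustrative sketch: a fixed (deterministic) sequence representation
# via one-hot encoding over the standard 20-letter amino-acid alphabet.
# The alphabet and the helper below are assumptions for illustration.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence: str) -> list[list[int]]:
    """Map each residue to a 20-dimensional indicator vector."""
    vectors = []
    for residue in sequence.upper():
        vec = [0] * len(AMINO_ACIDS)
        vec[AA_INDEX[residue]] = 1
        vectors.append(vec)
    return vectors

# A short peptide becomes a (length x 20) numerical matrix,
# suitable as input to a machine learning model.
encoding = one_hot_encode("MKV")
```

Because the mapping depends only on the sequence itself, the same input always yields the same matrix, in contrast to learned embeddings produced by a trained neural network.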