Accuracy
The proportion of correct predictions: accuracy = (TP + TN)/(TP + TN + FP + FN), where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
Artificial intelligence |
A process through which machines mimic “cognitive” functions that humans associate with other human minds, such as language comprehension. |
Area under the curve (AUC) |
A metric for binary classification; values range from 0 to 1, with 0 being always wrong, 0.5 representing random chance, and 1 a perfect score.
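The AUC can be estimated directly from its probabilistic interpretation: the chance that a randomly chosen positive example is scored higher than a randomly chosen negative one. A minimal sketch with made-up illustrative scores and labels:

```python
# Pairwise (Mann-Whitney) estimate of the area under the ROC curve.
# Scores and labels below are illustrative, not from the glossary.

def auc(scores, labels):
    """Probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count a win for each positive scored above a negative; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(auc(scores, labels))  # 5 of 6 positive/negative pairs ranked correctly -> ~0.833
```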
Artificial neural network |
Computing systems that are inspired by, but not necessarily identical to, the biological neural networks that constitute the human brain.
Attribute |
Facts, details or characteristics of an entity. |
Autoencoder |
A class of artificial neural networks that learns to encode its input into a compressed representation and then reconstruct the input from that representation.
Concept mapping |
A diagram that depicts suggested relationships between concepts. |
Convolutional neural network |
A class of artificial neural networks that applies convolution operations to extract local patterns, commonly used for image and text data.
Decision tree |
A tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. |
Deep learning |
A subclass of a broader family of machine learning methods based on artificial neural networks. The designation “deep” signifies multiple layers of the neural network |
Entities |
A person, place, thing, or concept about which data can be collected. Examples in the clinical domain include diseases/disorders, signs/symptoms, procedures, medications, and anatomical sites.
F1 score |
The harmonic mean of precision and recall: F1 = 2 × (precision × recall)/(precision + recall). Values range from 0 to 1 (perfect score).
Graphics processing unit |
A specialized electronic circuit designed to perform very fast calculations needed for training artificial neural networks. |
K-nearest neighbors |
A non-parametric method used for classification and regression in pattern recognition.
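For classification, k-nearest neighbors labels a query point by majority vote among its k closest training points. A minimal sketch on toy 2-D points (illustrative data, not from the glossary):

```python
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by squared Euclidean distance to the query.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, query)), y)
        for p, y in zip(train_points, train_labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, labels, (0.5, 0.5)))  # "a"
print(knn_predict(points, labels, (5.5, 5.5)))  # "b"
```

Non-parametric here means no model is fit in advance; the training data itself is the model.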
Latent representation |
Word representations that are not directly observed but are rather inferred through a mathematical model.
Machine learning |
The scientific study of algorithms and probabilistic models that computer systems use to perform a specific task effectively without explicit instructions, relying instead on patterns and inference.
Precision
The proportion of predicted positives that are correct: precision = TP/(TP + FP), where TP is true positive and FP is false positive.
Probabilistic methods |
A nonconstructive method, primarily used in combinatorics, for proving the existence of a prescribed kind of mathematical object.
Recall
The proportion of actual positives that are correctly identified: recall = TP/(TP + FN), where TP is true positive and FN is false negative.
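The four count-based metrics above (accuracy, precision, recall, F1) can all be computed from the raw confusion-matrix counts. A minimal sketch with illustrative counts:

```python
# Compute accuracy, precision, recall, and F1 from true-positive (tp),
# true-negative (tn), false-positive (fp), and false-negative (fn) counts.
# The counts below are illustrative, not from the glossary.

def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=8, tn=5, fp=2, fn=1)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.812 0.8 0.889 0.842
```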
Recurrent neural network |
A class of artificial neural networks in which connections between nodes form cycles, allowing the network to process sequential data.
Rule-based system |
Systems involving human-crafted or curated rule sets. |
Semantic representation |
Ways in which the meaning of a word or sentence is interpreted. |
Supervised learning |
Machine learning method that infers a function from labeled training data consisting of a set of training examples. |
Support vector machine |
Supervised learning models with associated learning algorithms that analyze data for classification and regression analysis.
Tensor
A mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space. |
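In machine learning practice, a tensor is usually handled as a multi-dimensional array whose rank is its number of axes. A minimal sketch using nested Python lists (an illustration, not how deep learning libraries store tensors internally):

```python
# Tensors of increasing rank represented as nested lists,
# with a helper that reports the rank (number of axes).

def rank(x):
    """Number of axes: 0 for a scalar, 1 for a vector, 2 for a matrix, ..."""
    r = 0
    while isinstance(x, list):
        r += 1
        x = x[0]
    return r

scalar = 3.0
vector = [1.0, 2.0, 3.0]
matrix = [[1.0, 2.0], [3.0, 4.0]]
cube = [[[1.0], [2.0]], [[3.0], [4.0]]]  # rank-3 tensor, shape (2, 2, 1)
print(rank(scalar), rank(vector), rank(matrix), rank(cube))  # 0 1 2 3
```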
Transfer learning |
A machine learning technique where a model trained on one task is re-purposed on a second related task. |
Unsupervised learning |
Self-organized learning (e.g., Hebbian learning) that helps find previously unknown patterns in a data set without pre-existing labels.
Word embedding |
The collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers. |
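Because words are mapped to real-valued vectors, semantic similarity can be measured geometrically, typically with cosine similarity. A minimal sketch with hand-made 3-D toy vectors (real embeddings are learned from large corpora and have hundreds of dimensions):

```python
import math

# Toy word embeddings: hand-made illustrative vectors, not trained values.
embeddings = {
    "doctor": [0.9, 0.8, 0.1],
    "nurse":  [0.8, 0.9, 0.2],
    "banana": [0.1, 0.0, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words should be closer in embedding space than unrelated ones.
print(cosine(embeddings["doctor"], embeddings["nurse"]) >
      cosine(embeddings["doctor"], embeddings["banana"]))  # True
```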