Table 1. Glossary of key terms in artificial intelligence.

| Term | Definition |
| --- | --- |
| Artificial Intelligence (AI) | The ability of computer systems to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. |
| Machine learning (ML) | A subset of AI that focuses on the development of algorithms and statistical models that enable computer systems to improve their performance on a specific task through experience, without being explicitly programmed. |
| Deep Learning (DL) | A subset of machine learning based on artificial neural networks with multiple layers, capable of learning complex patterns in large amounts of data. |
| Natural Language Processing (NLP) | A field of AI that focuses on the interaction between computers and humans using natural language, enabling machines to understand, interpret, and generate human language. |
| Computer vision | A field of AI that enables computers to gain high-level understanding from digital images or videos, aiming to automate tasks that the human visual system can do. |
| Robotics | The branch of technology that deals with the design, construction, operation, and use of robots, often incorporating AI for decision-making and task execution. |
| Deep neural networks (DNNs) | Artificial neural networks with multiple layers between the input and output layers, capable of modeling complex non-linear relationships. |
| Convolutional Neural Networks (CNNs) | A class of deep neural networks most commonly applied to analyze visual imagery, designed to automatically and adaptively learn spatial hierarchies of features. |
| Recurrent Neural Networks (RNNs) | A class of neural networks where connections between nodes form a directed graph along a temporal sequence, allowing the network to exhibit temporal dynamic behavior. |
| Support vector machines (SVMs) | Supervised learning models used for classification and regression analysis, effective in high-dimensional spaces. |
| Principal Component Analysis (PCA) | A statistical procedure that uses orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables (see the dimension-reduction sketch following this table). |
| t-Distributed Stochastic Neighbor Embedding (t-SNE) | A machine learning algorithm for visualization that reduces dimensionality based on similarity of datapoints. |
| Uniform Manifold Approximation and Projection (UMAP) | A dimension reduction technique that can be used for visualization similarly to t-SNE, but also for general non-linear dimension reduction. |
| Explainable Artificial Intelligence (XAI) | AI systems that can provide human-understandable explanations for their decisions or predictions. |
| Gradient-weighted Class Activation Mapping (Grad-CAM) | A technique for producing visual explanations for decisions made by convolutional neural networks. |
| Local Interpretable Model-agnostic Explanations (LIME) | A technique that explains the predictions of any classifier in an interpretable and faithful manner. |
| Area Under the Receiver Operating Characteristic Curve (AUC-ROC) | A performance measurement for classification problems across threshold settings, representing the degree of separability between classes (see the worked example following this table). |
| Precision-recall curves | A graphical plot that illustrates the trade-off between precision and recall for different thresholds in a binary classifier system. |
| F1 scores | The harmonic mean of precision and recall, providing a single score that balances both metrics. |
| Calibration plots | Graphical representations of the agreement between predicted probabilities and observed frequencies, used to assess the calibration of probabilistic predictions. |
| Decision curve analysis | A method for evaluating and comparing prediction models that accounts for the clinical consequences of using a model. |
| SHapley Additive exPlanations (SHAP) | A game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. |
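To make the evaluation-metric entries concrete, the following is a minimal sketch, assuming scikit-learn is available; the labels and scores (`y_true`, `y_score`) are invented purely for illustration and do not come from any study data.

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, precision_recall_curve)

# Hypothetical ground-truth labels and predicted probabilities for a binary classifier.
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.65, 0.30, 0.70, 0.05])
y_pred = (y_score >= 0.5).astype(int)  # apply a single decision threshold of 0.5

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall
auc = roc_auc_score(y_true, y_score)         # threshold-independent class separability

# Points of the precision-recall curve across all possible thresholds.
prec, rec, thresholds = precision_recall_curve(y_true, y_score)

print(f"precision={precision:.2f}  recall={recall:.2f}  F1={f1:.2f}  AUC-ROC={auc:.2f}")
```

Only precision, recall, and the F1 score depend on the chosen 0.5 threshold; AUC-ROC and the precision-recall curve are computed from the raw scores across all thresholds.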
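Likewise, a minimal sketch of the dimension-reduction terms (PCA and t-SNE), assuming scikit-learn and its bundled Iris dataset; UMAP exposes the same fit_transform-style interface through the separate umap-learn package.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)  # 150 samples with 4 correlated features

# PCA: linear, orthogonal projection onto the directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: non-linear embedding that preserves local neighborhood similarity.
X_tsne = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)  # both (150, 2), ready for a 2-D scatter plot
```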