Supplementary Table 1.
Term | Definition |
Artificial intelligence | Refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and natural language understanding. Machine learning, deep learning, and natural language processing all fall under its umbrella. |
Machine learning | A subset of artificial intelligence that focuses on enabling computers to learn from data. Instead of being explicitly programmed, ML algorithms use statistical techniques to improve their performance on a specific task over time. Depending on the learning technique used, ML approaches are commonly classified into categories such as supervised, unsupervised, and reinforcement learning. |
Artificial neural network | An artificial neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected nodes (neurons) organized in layers, allowing the network to learn complex patterns and relationships from data. Artificial neural networks serve as the foundation for more advanced machine learning techniques, such as deep learning, in which networks with many hidden layers automatically learn hierarchical representations of data. |
Deep learning | Deep learning is an advanced subfield of machine learning that involves neural networks with many layers (deep neural networks). It has been particularly successful in tasks such as image and speech recognition, and it allows systems to automatically learn hierarchical representations of data. |
Natural language processing | Natural Language Processing (NLP) is a branch of AI that focuses on the interaction between computers and human languages. NLP enables computers to understand, interpret, and generate human language, facilitating communication between machines and humans. |
Large language models | Large language models are advanced AI models trained on massive text datasets to understand and generate human-like language. They are applied to a wide range of NLP tasks and can exhibit a high level of language understanding and generation. |
GPT | Generative Pre-trained Transformer (GPT) is a family of large language models developed by OpenAI. It utilizes a transformer architecture and is pre-trained on diverse datasets, allowing it to perform various natural language processing tasks, such as text completion, translation, and summarization. ChatGPT is a variant of the GPT models fine-tuned specifically for conversational dialogue. GPT models underlie a range of LLM applications, including tools used in scientific research. |
Bard | Bard is a conversational chatbot based on a large language model developed by Google, trained on a massive dataset of text and code. It has since been powered by a newer AI model family called Gemini, which is multimodal and can also work with images. Bard offers the core capabilities of an LLM, such as natural language understanding, translation, and summarization. |
Prompt | In the context of artificial intelligence and large language models, a prompt is the input text or query provided to the model to guide its response. The prompt sets the initial context for the model's generation process and can have a significant impact on the output. |
AI, Artificial Intelligence; GPT, Generative Pre-trained Transformer; LLM, Large Language Model; ML, Machine Learning; NLP, Natural Language Processing.
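
As an informal illustration of the "interconnected nodes organized in layers" described in the artificial neural network and deep learning entries, the short Python sketch below builds a tiny feed-forward network with one hidden layer using NumPy. All layer sizes, weight values, and variable names are illustrative assumptions introduced here for clarity, not values taken from the table.

# A minimal sketch of a feed-forward artificial neural network:
# 4 inputs -> 3 hidden neurons -> 1 output neuron (sizes chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialised connection weights between the layers (illustrative only).
W_hidden = rng.normal(size=(4, 3))   # input layer -> hidden layer
W_output = rng.normal(size=(3, 1))   # hidden layer -> output layer

def relu(x):
    # Rectified linear unit, a commonly used activation function.
    return np.maximum(0, x)

def forward(inputs):
    # Propagate the inputs through the hidden layer to the output layer.
    hidden = relu(inputs @ W_hidden)  # hidden-layer activations
    return hidden @ W_output          # network output

example_input = np.array([0.2, 0.5, 0.1, 0.9])
print(forward(example_input))

In a trained network the weights would be learned from data rather than drawn at random; deep learning refers to the same idea with many stacked hidden layers.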