| Technique | Description | Advantages | Typical applications |
| --- | --- | --- | --- |
| Zero-shot | Directly provides the task description, without examples. | Simple to use; no additional data needed. | Simple classification and generation tasks (e.g., text mining of MOF synthesis27) |
| Few-shot | Provides a few examples to guide the model. | Improves the model's understanding of the task. | Moderately complex tasks (e.g., property prediction from SMILES21) |
| CoT | Guides the model to reason step by step. | Suitable for complex reasoning tasks. | Math problems and logical reasoning (e.g., calculating chemical equilibrium constants45) |
| APE | Automatically generates and optimizes prompts using the model's own capabilities. | Reduces manual effort; may produce more effective prompts than human-designed ones. | Tasks requiring efficient prompt design |
| ReAct | Solves tasks through interleaved dynamic reasoning and external actions. | Suitable for multistep reasoning and tasks involving external interaction; improves transparency. | Complex question answering; tasks requiring external knowledge (e.g., prediction and generation of MOFs50) |
| RAG | Combines retrieval from external knowledge bases with generation to produce accurate answers. | Improves accuracy and reliability; handles tasks requiring external knowledge. | Open-domain question answering; fact-based tasks (e.g., transforming words in battery research13) |
| Meta-prompting | Uses a meta-prompt to guide the model in generating specific subprompts or task decompositions. | Enhances the model's ability to understand and execute complex tasks; highly flexible. | Complex task decomposition; multistep reasoning tasks (e.g., autonomous chemical research1) |

Short, illustrative code sketches of these techniques follow below.
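
To make the first three rows concrete, the sketch below builds zero-shot, few-shot, and CoT variants of the same classification prompt. The task wording, the example sentences, and the variable names are hypothetical and chosen for illustration; the cited works do not use this exact phrasing.

```python
# Hypothetical prompts contrasting zero-shot, few-shot, and CoT on one task.

TASK = "Classify whether the sentence describes a MOF synthesis procedure (yes/no)."
SENTENCE = "ZIF-8 was obtained by mixing zinc nitrate with 2-methylimidazole in methanol."

# Zero-shot: task description only, no examples.
zero_shot = f"{TASK}\nSentence: {SENTENCE}\nAnswer:"

# Few-shot: a few labeled examples precede the query to guide the model.
EXAMPLES = (
    "Sentence: MIL-53 was crystallized hydrothermally at 220 C for 72 h.\nAnswer: yes\n"
    "Sentence: The conference was postponed until next spring.\nAnswer: no\n"
)
few_shot = f"{TASK}\n{EXAMPLES}Sentence: {SENTENCE}\nAnswer:"

# CoT: the model is asked to reason step by step before committing to an answer.
cot = (
    f"{TASK}\nSentence: {SENTENCE}\n"
    "Think step by step about the reagents and procedure, then answer yes or no."
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot), ("CoT", cot)]:
    print(f"--- {name} ---\n{prompt}\n")
```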
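APE's generate-then-select loop can be sketched in a few lines. Here `llm` is a placeholder for whatever completion client is available, and the two-item SMILES development set is invented for illustration; a real run would use a larger labeled set and more candidates.

```python
# APE sketch: the model drafts candidate instructions, each candidate is
# scored on a small labeled development set, and the best one is kept.

dev_set = [("CC(=O)O", "acetic acid"), ("c1ccccc1", "benzene")]  # (SMILES, name), illustrative

def llm(prompt: str) -> str:
    """Placeholder for a real completion call; wire in an actual client."""
    raise NotImplementedError

def score(instruction: str) -> float:
    """Fraction of dev examples answered correctly under this instruction."""
    hits = sum(
        target.lower() in llm(f"{instruction}\nSMILES: {smiles}\nAnswer:").lower()
        for smiles, target in dev_set
    )
    return hits / len(dev_set)

def ape(task_description: str, n_candidates: int = 8) -> str:
    """Ask the model itself to propose instructions, then select by dev-set score."""
    candidates = [
        llm(f"Propose one concise instruction for this task: {task_description}")
        for _ in range(n_candidates)
    ]
    return max(candidates, key=score)
```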
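The ReAct row describes an interleaved Thought/Action/Observation loop; a minimal version is sketched below. The `search_tool` stub and the scripted model replies are placeholders so the loop can run end to end; in practice the model itself emits the Thought, Action, and Final lines.

```python
import re

def search_tool(query: str) -> str:
    """Stand-in for an external action such as a database or literature lookup."""
    return "(stub) ZIF-8 is built from zinc ions and 2-methylimidazole linkers."

# Scripted replies standing in for a real model, so the loop runs as written.
_SCRIPT = iter([
    "I should look up the linker used in ZIF-8. Action: search[ZIF-8 linker]",
    "The observation names the linker. Final: 2-methylimidazole",
])

def llm(transcript: str) -> str:
    return next(_SCRIPT)  # replace with a real model call

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                     # model reasons about the next move
        transcript += f"Thought: {step}\n"
        action = re.search(r"Action: search\[(.*?)\]", step)
        if action:                                 # model requested an external action
            transcript += f"Observation: {search_tool(action.group(1))}\n"
        elif "Final:" in step:                     # model committed to an answer
            return step.split("Final:", 1)[1].strip()
    return "no answer within the step budget"

print(react("Which organic linker does ZIF-8 use?"))
```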
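For RAG, the essential pattern is retrieve-then-generate. The toy corpus and word-overlap retriever below are stand-ins for a document store with embedding-based search; only the prompt assembly is shown, with the generation call left to whichever model is in use.

```python
# Toy retrieve-then-generate: a real system would use vector embeddings over
# a literature or database corpus rather than crude word overlap.
corpus = [
    "LiFePO4 cathodes offer high thermal stability and long cycle life.",
    "ZIF-8 forms by coordination of zinc ions with 2-methylimidazole.",
    "Graphite anodes intercalate lithium ions during charging.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))[:k]

def rag_prompt(question: str) -> str:
    """Ground the model's answer in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(rag_prompt("Which cathode material shows high thermal stability?"))
```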
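Finally, meta-prompting reduces to a prompt that asks the model to write the subprompts itself. The planner/solver split below is a hypothetical minimal form, with `llm` again a placeholder for a real client.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

def meta_prompt(task: str) -> str:
    """Meta-prompt: the model acts as a planner that writes solver subprompts."""
    return (
        "Decompose the task below into numbered subtasks, each phrased as a "
        "standalone prompt for a solver model. One subtask per line.\n"
        f"Task: {task}"
    )

def solve(task: str) -> list[str]:
    plan = llm(meta_prompt(task))                  # planner generates the subprompts
    return [llm(sub) for sub in plan.splitlines() if sub.strip()]  # solver runs each
```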