Overview
In-context learning is the ability of large language models to learn new tasks from examples provided within the input prompt, without any updates to the model's parameters.
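The idea can be sketched with a minimal few-shot prompt builder: the "training" happens entirely inside the prompt, and no weights change. The helper name, example sentences, and labels below are illustrative, not from any particular library.

```python
# Minimal sketch of in-context (few-shot) learning: labeled examples are
# placed directly in the prompt, and the model is expected to continue the
# pattern for the new query. No parameter updates occur at any point.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and a new query into one prompt string."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # The final entry leaves "Output:" blank for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was wonderful", "positive"),
    ("I hated every minute", "negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise")
print(prompt)
```

The resulting string would then be sent to a language model, which infers the sentiment-labeling task purely from the two demonstrations in the prompt.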
More in Artificial Intelligence
Neural Scaling Laws (Models & Architecture)
Empirical relationships describing how AI model performance improves predictably with increases in model size, training data volume, and computational resources.

Bayesian Reasoning (Reasoning & Planning)
A statistical approach to AI that uses Bayes' theorem to update probability estimates as new evidence becomes available.

AI Pipeline (Infrastructure & Operations)
A sequence of data processing and model execution steps that automate the flow from raw data to AI-driven outputs.

AI Ethics (Foundations & Theory)
The branch of ethics examining moral issues surrounding the development, deployment, and impact of artificial intelligence on society.

AI Safety (Safety & Governance)
The interdisciplinary field dedicated to making AI systems safe, robust, and beneficial while minimizing risks of unintended consequences.

Ontology (Foundations & Theory)
A formal representation of knowledge as a set of concepts, categories, and relationships within a specific domain.

Symbolic AI (Foundations & Theory)
An approach to AI that uses human-readable symbols and rules to represent problems and derive solutions through logical reasoning.

AI Robustness (Safety & Governance)
The ability of an AI system to maintain performance under varying conditions, adversarial attacks, or noisy input data.