Overview
Safety mechanisms and constraints implemented around AI systems to prevent harmful, biased, or policy-violating outputs while preserving useful functionality.
More in Artificial Intelligence
AI Inference
Training & Inference
The process of using a trained AI model to make predictions or decisions on new, unseen data.
Heuristic Search
Reasoning & Planning
Problem-solving techniques that use practical rules of thumb to find satisfactory solutions when exhaustive search is impractical.
Edge AI
Foundations & Theory
Artificial intelligence computation performed locally on edge devices rather than in centralised cloud data centres.
Bayesian Reasoning
Reasoning & Planning
A statistical approach to AI that uses Bayes' theorem to update probability estimates as new evidence becomes available.
Neural Architecture Search
Models & Architecture
An automated technique for designing optimal neural network architectures using search algorithms.
Strong AI
Foundations & Theory
A theoretical form of AI that would have consciousness, self-awareness, and the ability to truly understand rather than merely simulate understanding.
AI Chip
Infrastructure & Operations
A semiconductor designed specifically for AI and machine learning computations, optimised for parallel processing and matrix operations.
Model Distillation
Models & Architecture
A technique in which a smaller, simpler model is trained to replicate the behaviour of a larger, more complex model.
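The Bayesian Reasoning entry above describes updating a probability estimate with Bayes' theorem as new evidence arrives. A minimal sketch of one such update, using hypothetical spam-filter numbers chosen purely for illustration:

```python
# Illustrative Bayesian update (hypothetical numbers): estimating the
# probability that an email is spam after observing the word "offer".

prior = 0.20              # P(spam): prior belief that any email is spam
p_word_given_spam = 0.60  # P("offer" | spam)
p_word_given_ham = 0.05   # P("offer" | not spam)

# Total probability of observing the word: P("offer")
evidence = p_word_given_spam * prior + p_word_given_ham * (1 - prior)

# Bayes' theorem: P(spam | "offer") = P("offer" | spam) * P(spam) / P("offer")
posterior = p_word_given_spam * prior / evidence

print(round(posterior, 3))
```

Observing the word raises the spam estimate from 0.20 to 0.75; running the same update on each new piece of evidence, with the previous posterior as the new prior, is the core loop of Bayesian reasoning.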