Overview
The application of models trained in one language to perform tasks in another language, leveraging shared multilingual representations learned during pre-training.
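The idea above can be sketched with a toy example: if translation pairs in two languages map to the same shared representation, a classifier trained only on one language works zero-shot on the other. Everything below is illustrative, assuming a hand-built "multilingual" embedding table and a simple nearest-centroid classifier rather than a real pre-trained model.

```python
# Toy shared embedding space: English/Spanish translation pairs
# share one vector (illustrative numbers, not learned weights).
EMBED = {
    "good":  (0.9, 0.1), "bueno":    (0.9, 0.1),
    "great": (0.8, 0.2), "genial":   (0.8, 0.2),
    "bad":   (0.1, 0.9), "malo":     (0.1, 0.9),
    "awful": (0.2, 0.8), "horrible": (0.2, 0.8),
}

def embed(sentence):
    """Average the vectors of the known words in a sentence."""
    vecs = [EMBED[w] for w in sentence.split() if w in EMBED]
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

def train_centroids(examples):
    """Fit a nearest-centroid classifier on (sentence, label) pairs."""
    sums, counts = {}, {}
    for sent, label in examples:
        v = embed(sent)
        s = sums.get(label, (0.0, 0.0))
        sums[label] = (s[0] + v[0], s[1] + v[1])
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s[0] / counts[lab], s[1] / counts[lab])
            for lab, s in sums.items()}

def predict(centroids, sentence):
    """Assign the label whose centroid is closest in the shared space."""
    v = embed(sentence)
    return min(centroids,
               key=lambda lab: sum((v[i] - centroids[lab][i]) ** 2
                                   for i in range(2)))

# Train on English examples only...
centroids = train_centroids([
    ("good great", "pos"),
    ("bad awful", "neg"),
])

# ...then classify Spanish zero-shot via the shared representations.
print(predict(centroids, "bueno genial"))   # "pos"
print(predict(centroids, "malo horrible"))  # "neg"
```

In practice the shared space comes from multilingual pre-training (e.g. a multilingual encoder) rather than a lookup table, but the transfer mechanism is the same: the downstream model never sees the target language during training.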
More in Natural Language Processing
Information Extraction
Parsing & Structure: The process of automatically extracting structured information from unstructured or semi-structured text sources.
Word2Vec
Semantics & Representation: A neural network model that learns distributed word representations by predicting surrounding context words.
Context Window
Semantics & Representation: The maximum amount of text a language model can consider at once when generating a response.
Hallucination Detection
Semantics & Representation: Techniques for identifying when AI language models generate plausible but factually incorrect or unsupported content.
Speech Synthesis
Speech & Audio: The artificial production of human speech from text, also known as text-to-speech.
Structured Output
Semantics & Representation: The generation of machine-readable formatted responses such as JSON, XML, or code from language models, enabling reliable integration with downstream software systems.
Text Generation
Generation & Translation: The process of producing coherent and contextually relevant text using AI language models.
GPT
Semantics & Representation: Generative Pre-trained Transformer, a family of autoregressive language models that generate text by predicting the next token.
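The autoregressive generation named in the GPT entry above can be sketched in a few lines. This is a minimal illustration, assuming a hand-built bigram table in place of a trained model's next-token distribution; the numbers and vocabulary are invented for the example.

```python
# Toy next-token distribution: maps each token to candidate
# successors with probabilities (illustrative, not learned).
NEXT = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.6, "dog": 0.4},
    "a":   {"dog": 0.7, "cat": 0.3},
    "cat": {"sat": 0.9, "</s>": 0.1},
    "dog": {"ran": 0.9, "</s>": 0.1},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(max_tokens=10):
    """Greedy autoregressive decoding: repeatedly append the
    most probable next token until an end-of-sequence token."""
    tokens = ["<s>"]
    for _ in range(max_tokens):
        probs = NEXT[tokens[-1]]
        nxt = max(probs, key=probs.get)
        if nxt == "</s>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # "the cat sat"
```

A real GPT-style model replaces the lookup table with a transformer that conditions on the entire generated prefix, and usually samples from the distribution instead of taking the argmax, but the token-by-token loop is the same.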