Overview
Text summarization is the process of creating a concise and coherent summary of a longer text document while preserving its key information.
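As a minimal illustration of the extractive flavour of this task, the sketch below scores sentences by the frequency of their words and keeps the top-scoring ones. The function name and the frequency-based scoring scheme are illustrative assumptions, not a standard library API; production systems typically use neural abstractive models instead.

```python
from collections import Counter
import re

def summarize(text, num_sentences=2):
    """Toy extractive summarizer: keep the sentences whose words are most frequent."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Rank sentence indices by the total frequency of their words (highest first).
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r'\w+', sentences[i].lower())),
    )
    # Keep the top sentences, restored to their original order for coherence.
    keep = sorted(ranked[:num_sentences])
    return ' '.join(sentences[i] for i in keep)
```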
More in Natural Language Processing
Top-K Sampling
Generation & Translation
A text generation strategy that restricts the model to sampling from the K most probable next tokens.
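A minimal sketch of the idea, assuming a toy vocabulary given as a dict mapping token to raw logit (a real model would produce a score vector over the full vocabulary):

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample one token from the k highest-scoring candidates."""
    # Keep only the k most probable candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Renormalise with a softmax over the surviving candidates
    # (subtracting the max for numerical stability).
    m = max(score for _, score in top)
    weights = [math.exp(score - m) for _, score in top]
    tokens = [token for token, _ in top]
    return rng.choices(tokens, weights=weights, k=1)[0]
```

With `k=1` this degenerates to greedy decoding; larger `k` trades determinism for diversity while still excluding the low-probability tail.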
GloVe
Semantics & Representation
Global Vectors for Word Representation — an unsupervised learning algorithm for obtaining word vector representations from aggregated word co-occurrence statistics.
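The statistics GloVe starts from can be sketched in a few lines. This builds only the windowed co-occurrence counts that the algorithm factorises, not the fitted vectors themselves; the function name and window convention are illustrative.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=2):
    """Count how often word pairs appear within `window` tokens of each other.

    GloVe fits word vectors so their dot products approximate the log of
    these counts; this sketch stops at the statistics-gathering step.
    """
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look back at the preceding context window and record both directions.
        for j in range(max(0, i - window), i):
            counts[(tokens[j], word)] += 1
            counts[(word, tokens[j])] += 1
    return counts
```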
Structured Output
Semantics & Representation
The generation of machine-readable formatted responses such as JSON, XML, or code from language models, enabling reliable integration with downstream software systems.
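One pattern this enables is parsing and validating a model's JSON reply before handing it to downstream code. The function name and the `name`/`priority` schema below are purely illustrative assumptions:

```python
import json

def parse_model_reply(reply, required_keys=("name", "priority")):
    """Parse a model's JSON reply and check it has the fields we expect.

    Returns the parsed dict, or None if the reply is not usable
    (the caller can then re-prompt the model).
    """
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None  # free-text reply, not structured output
    if not isinstance(data, dict) or not all(k in data for k in required_keys):
        return None  # structurally valid JSON but missing required fields
    return data
```

Rejecting malformed replies up front, rather than letting them crash a downstream consumer, is what makes structured output reliable to integrate.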
GPT
Semantics & Representation
Generative Pre-trained Transformer — a family of autoregressive language models that generate text by predicting the next token.
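The autoregressive loop itself is independent of any particular model. In this sketch, `next_token_fn` is a hypothetical stand-in for a trained network: any callable that maps the sequence so far to one more token.

```python
def generate(next_token_fn, prompt, max_new_tokens=5):
    """Autoregressive decoding: repeatedly predict the next token and append it.

    `next_token_fn` stands in for a trained model; each prediction is
    conditioned on everything generated so far.
    """
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(next_token_fn(tokens))
    return tokens
```

For example, driving the loop with a toy bigram table `{"a": "b", "b": "a"}` from the prompt `["a"]` simply alternates the two tokens.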
Question Answering
Generation & Translation
An NLP task in which a system automatically answers questions posed in natural language, based on a given context.
Text-to-Speech
Speech & Audio
Technology that converts written text into natural-sounding spoken audio using neural networks, enabling voice interfaces, accessibility tools, and content narration.
Seq2Seq Model
Core NLP
A neural network architecture that maps an input sequence to an output sequence, used in translation and summarisation.
Long-Context Modelling
Semantics & Representation
Techniques and architectures that enable language models to process and reason over extremely long input sequences, from tens of thousands to millions of tokens.