1 Understanding foundation models
Foundation models mark a shift from building task-specific forecasters to reusing a single, large, pre-trained model across many scenarios. Trained on massive, diverse datasets and equipped with millions to billions of parameters, they can be adapted to new problems via fine-tuning or used directly in a zero-shot manner. In time series, this enables one model to forecast across heterogeneous frequencies and patterns (trends, seasonality, holidays) and to perform related tasks such as anomaly detection and classification. The chapter motivates this paradigm through its growing presence in everyday applications and frames the book’s hands-on focus: defining what foundation models are, clarifying where they excel or struggle, and applying them to practical forecasting problems.
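To make the zero-shot idea concrete, the sketch below loads a publicly available pre-trained forecaster and predicts a short series without any training or fine-tuning. It assumes the open-source chronos-forecasting package, its ChronosPipeline API, and the "amazon/chronos-t5-small" checkpoint; these names and call signatures are assumptions for illustration and may differ from the models covered later in the book, and the series values are made up.

```python
import torch
from chronos import ChronosPipeline

# Load a pre-trained time series foundation model (assumed checkpoint name).
pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-small")

# Eight past observations of a single series (illustrative values only).
history = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

# Zero-shot forecast: the model predicts the next 4 steps directly.
samples = pipeline.predict(history, prediction_length=4)  # (series, samples, horizon)
point_forecast = samples.median(dim=1).values
print(point_forecast)
```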
At the architectural core of most foundation models is the Transformer. Time series inputs are tokenized and mapped into embeddings, with positional encodings added to preserve temporal order. The encoder’s self-attention (often multi-headed) learns rich dependencies across time, while the decoder uses masked attention to prevent peeking into the future and cross-attends to encoder outputs to generate forecasts autoregressively across the horizon. A final projection layer returns predictions to the target scale. Understanding this flow—and the associated hyperparameters—helps practitioners fine-tune models effectively and diagnose when certain designs (for example, handling exogenous variables or multivariate targets) may or may not fit a use case.
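A minimal PyTorch sketch of this flow is shown below. The class name, layer sizes, and the simplified learned positional encoding are illustrative assumptions rather than the architecture of any particular foundation model.

```python
import torch
import torch.nn as nn

class TinyTimeSeriesTransformer(nn.Module):
    """Illustrative encoder-decoder forecaster; all sizes are assumptions."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                         # map each value to an embedding
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # simplified learned positional encoding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.project = nn.Linear(d_model, 1)                       # back to the target scale

    def forward(self, context, decoder_in):
        # context: (batch, context_len, 1); decoder_in: (batch, horizon, 1)
        src = self.embed(context) + self.pos[:, :context.size(1)]
        tgt = self.embed(decoder_in) + self.pos[:, :decoder_in.size(1)]
        mask = nn.Transformer.generate_square_subsequent_mask(decoder_in.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)            # masked decoder + cross-attention
        return self.project(out)

model = TinyTimeSeriesTransformer()
context = torch.randn(8, 24, 1)          # 8 series, 24 past observations each
decoder_in = torch.randn(8, 12, 1)       # shifted targets during training
print(model(context, decoder_in).shape)  # torch.Size([8, 12, 1])
```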
The chapter closes by weighing benefits against trade-offs. Benefits include simpler, out-of-the-box pipelines, usefulness with limited data, lower expertise barriers, and reuse across tasks and datasets. Trade-offs involve privacy and governance concerns when using proprietary services, limited control over a model’s built-in capabilities and horizons, the possibility that a specialized model outperforms a general one, and substantial compute and storage needs (mitigable via APIs). Ultimately, adopting a foundation model is an empirical decision guided by performance and cost. The book proceeds to build intuition with a small model, then evaluates leading time series foundation models and LLM-based approaches on real data, culminating in a comparative capstone against classical statistical methods.
Result of performing linear regression on two different datasets. While the algorithm that builds the linear model stays the same, the resulting model differs markedly depending on the dataset used.
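The same point can be reproduced in a few lines: fitting one algorithm to two synthetic datasets (values chosen purely for illustration) yields two very different models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
x = np.arange(50).reshape(-1, 1)

# Two synthetic datasets with different underlying trends.
y_a = 2.0 * x.ravel() + rng.normal(0, 3, size=50)
y_b = -0.5 * x.ravel() + 40 + rng.normal(0, 3, size=50)

model_a = LinearRegression().fit(x, y_a)
model_b = LinearRegression().fit(x, y_b)
print(model_a.coef_, model_a.intercept_)   # slope close to 2
print(model_b.coef_, model_b.intercept_)   # slope close to -0.5
```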
Simplified Transformer architecture from a time series perspective. The raw series enters at the bottom left of the figure and flows through an embedding layer and positional encoding before entering the decoder. The decoder then produces the output one value at a time until the entire horizon is predicted.
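The one-value-at-a-time generation can be sketched as a simple loop. It assumes a seq2seq forecaster with the same (context, decoder inputs) interface as the architecture sketch above; the function name and the start value of 0.0 are illustrative choices.

```python
import torch

@torch.no_grad()
def autoregressive_forecast(model, context, horizon, start_value=0.0):
    # Greedy decoding: feed each new prediction back into the decoder
    # until the whole horizon has been produced.
    decoder_in = torch.full((context.size(0), 1, 1), start_value)
    for _ in range(horizon):
        out = model(context, decoder_in)   # (batch, steps so far, 1)
        next_value = out[:, -1:, :]        # keep only the newest prediction
        decoder_in = torch.cat([decoder_in, next_value], dim=1)
    return decoder_in[:, 1:, :]            # drop the start value
```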
Visualizing the result of feeding a time series through an embedding layer. The input is first tokenized, and an embedding is learned. The result is an abstract representation of the input built by the model.
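A short sketch of this step, assuming patch-style tokenization with an arbitrary patch length of 4 and an embedding size of 64:

```python
import torch
import torch.nn as nn

# Split the raw series into fixed-size patches (4 values per token here)
# and project each patch into a 64-dimensional embedding.
series = torch.arange(16, dtype=torch.float32)   # 16 raw observations
patches = series.reshape(1, 4, 4)                # (batch, num_tokens, patch_len)
embed = nn.Linear(4, 64)                         # learned embedding layer
tokens = embed(patches)                          # abstract representation of the input
print(tokens.shape)                              # torch.Size([1, 4, 64])
```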
Visualizing positional encoding. Note that the positional encoding matrix must have the same size as the embedding, and that sine is used for even-indexed dimensions while cosine is used for odd-indexed ones. The input sequence length runs along the vertical axis of this figure.
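The classic sinusoidal encoding can be generated as below; the sequence length and embedding size are arbitrary choices for illustration.

```python
import torch

def sinusoidal_positional_encoding(seq_len, d_model):
    # Sine fills the even-indexed dimensions, cosine the odd-indexed ones;
    # the result has the same (seq_len, d_model) shape as the embedding.
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * (-torch.log(torch.tensor(10000.0)) / d_model)
    )
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

print(sinusoidal_positional_encoding(seq_len=4, d_model=64).shape)  # torch.Size([4, 64])
```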
The encoder is actually a stack of many encoder blocks that all share the same architecture. Each block is made of a self-attention mechanism and a feed-forward layer.
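In PyTorch, such a stack can be sketched with the built-in encoder modules; the layer sizes and sequence length below are arbitrary.

```python
import torch
import torch.nn as nn

# One encoder block = multi-headed self-attention + feed-forward layer;
# stacking several identical blocks gives the full encoder.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=64, nhead=4, dim_feedforward=128, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)

tokens = torch.randn(1, 12, 64)   # (batch, sequence length, embedding size)
print(encoder(tokens).shape)      # torch.Size([1, 12, 64])
```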
Visualizing the self-attention mechanism. This is where the model learns relationships between the current token (dark circle) and past tokens (light circles) in the same input sequence. In this case, the model assigns more importance (depicted by thicker connecting lines) to closer data points than to those farther away.
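A bare-bones scaled dot-product self-attention sketch, with random tensors standing in for the learned projection matrices. Note that in the encoder every token can attend to every other token; it is the decoder, shown next, that adds a mask so only past tokens are visible.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # Scaled dot-product attention: every token scores the other tokens,
    # and larger softmax weights (thicker lines in the figure) mean more influence.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)
    weights = F.softmax(scores, dim=-1)
    return weights @ v, weights

x = torch.randn(1, 5, 8)                        # 5 tokens with 8-dimensional embeddings
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out, weights = self_attention(x, w_q, w_k, w_v)
print(weights[0].round(decimals=2))             # importance assigned to each token
```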
Visualizing the decoder. Like the encoder, the decoder is actually a stack of many decoder blocks. Each is composed of a masked multi-headed attention layer, followed by a normalization layer, a multi-headed attention layer, another normalization layer, a feed-forward layer, and a final normalization layer. The normalization layers keep the model stable during training.
Visualizing the decoder in detail. We see that the output of the encoder is fed to the second attention layer inside the decoder. This is how the decoder can generate predictions using information learned by the encoder.
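This wiring can be sketched with PyTorch's built-in decoder modules; the layer sizes, sequence lengths, and random tensors below are arbitrary stand-ins for real embeddings and encoder outputs.

```python
import torch
import torch.nn as nn

# One decoder block: masked self-attention, cross-attention over the encoder
# output ("memory"), and a feed-forward layer, each followed by normalization.
decoder_layer = nn.TransformerDecoderLayer(
    d_model=64, nhead=4, dim_feedforward=128, batch_first=True
)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=4)

memory = torch.randn(1, 12, 64)                            # encoder output
target = torch.randn(1, 6, 64)                             # embedded decoder inputs
mask = nn.Transformer.generate_square_subsequent_mask(6)   # prevents peeking into the future
print(decoder(target, memory, tgt_mask=mask).shape)        # torch.Size([1, 6, 64])
```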
Summary
- A foundation model is a very large machine learning model trained on massive amounts of data that can be applied to a wide variety of tasks.
- Most foundation models are powered by derivatives of the Transformer architecture.
- Advantages of using foundation models include simpler forecasting pipelines, a lower entry barrier to forecasting, and the ability to forecast even when few data points are available.
- Drawbacks of using foundation models include privacy concerns and the fact that we do not control the model's capabilities. A foundation model may also not be the best solution to a given problem.
- Some forecasting foundation models were designed with time series in mind, while others repurpose available large language models for time series tasks.