pypots.forecasting

pypots.forecasting.transformer

The implementation of Transformer for the partially-observed time-series forecasting task.

Refer to the paper Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Notes

This implementation is inspired by https://github.com/WenjieDu/SAITS

class pypots.forecasting.transformer.Transformer(n_steps, n_features, n_pred_steps, n_pred_features, n_encoder_layers, n_decoder_layers, d_model, n_heads, d_k, d_v, d_ffn, dropout=0, attn_dropout=0, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the Transformer model [47].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • n_encoder_layers (int) – The number of layers in the Transformer encoder.

  • n_decoder_layers (int) – The number of layers in the Transformer decoder.

  • d_model (int) – The dimension of the model’s backbone. It is the input dimension of the multi-head self-attention layers.

  • n_heads (int) – The number of heads in the multi-head self-attention mechanism. d_model must be divisible by n_heads, and the result should be equal to d_k.

  • d_k (int) – The dimension of the keys (K) and the queries (Q) in the self-attention mechanism. d_k should equal d_model divided by n_heads. Although d_k could be computed directly from the given d_model and n_heads, we want it to be given explicitly together with d_v by users, to ensure they are aware of both values and to avoid potential mistakes (see the usage sketch after this parameter list).

  • d_v (int) – The dimension of the values (V) in the self-attention mechanism.

  • d_ffn (int) – The dimension of the hidden layer in the feed-forward network (FFN).

  • dropout (float) – The dropout rate for all fully-connected layers in the model.

  • attn_dropout (float) – The dropout rate for the attention mechanism.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
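Below is a minimal usage sketch with randomly generated placeholder data and illustrative (untuned) hyperparameters. Note that, depending on the PyPOTS version, the training set for forecasting models may also need a ground-truth ‘X_pred’ entry in addition to the ‘X’ key documented under fit() below.

    import numpy as np
    from pypots.forecasting import Transformer

    # Toy data: 100 samples, 24 observed steps, 7 features; forecast the next 12 steps.
    train_set = {
        "X": np.random.randn(100, 24, 7),       # observed history (may contain NaNs)
        "X_pred": np.random.randn(100, 12, 7),  # ground-truth future values; some
                                                # PyPOTS versions require this key
    }

    model = Transformer(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        n_encoder_layers=2, n_decoder_layers=2,
        d_model=64, n_heads=4,
        d_k=16, d_v=16,  # d_k = d_model // n_heads = 64 // 4, given explicitly
        d_ffn=128,
        epochs=10,
    )
    model.fit(train_set)

    results = model.predict({"X": np.random.randn(20, 24, 7)})
    print(results["forecasting"].shape)  # expected: (20, 12, 7)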

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).
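As a short illustration of the checkpointing workflow (a sketch; the path is hypothetical), combining load() with the save() method documented further below:

    # Save a trained model (a .pypots extension is appended automatically),
    # then restore it later on the same type of device (GPU/CPU).
    model.save("checkpoints/transformer_forecaster", overwrite=True)
    model.load("checkpoints/transformer_forecaster.pypots")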

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.timellm

The implementation of Time-LLM for the partially-observed time-series forecasting task.

Refer to the paper Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. In the 12th International Conference on Learning Representations, 2024.

Notes

This implementation is inspired by the official one https://github.com/KimMeen/Time-LLM

class pypots.forecasting.timellm.TimeLLM(n_steps, n_features, n_pred_steps, n_pred_features, term, llm_model_type, n_layers, patch_size, patch_stride, d_llm, d_model, d_ffn, n_heads, dropout, domain_prompt_content, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the TimeLLM forecasting model [4].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • term (str) – The forecasting term, which can be either ‘long’ or ‘short’.

  • llm_model_type (str) – The type of the LLM model. It can be one of [“LLaMA”, “GPT2”, “BERT”].

  • n_layers (int) – The number of layers in the TimeLLM model.

  • patch_size (int) – The length of the patch for the TimeLLM model.

  • patch_stride (int) – The stride for the patching process in the TimeLLM model.

  • d_llm (int) – The dimension of the LLM model. Given llm_model_type, it should be 4096 for LLaMA, 768 for GPT2 and BERT.

  • d_model (int) – The dimension of the model.

  • d_ffn (int) – The dimension of the feed-forward network.

  • n_heads (int) – The number of heads in each layer of TimeLLM.

  • dropout (float) – The dropout rate for the model.

  • domain_prompt_content (str) – The prompt content for the domain knowledge.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
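A minimal construction sketch follows; hyperparameter values are illustrative, and the key constraint is that d_llm must match the chosen backbone (768 for GPT2 or BERT, 4096 for LLaMA), as noted above. The prompt string is a hypothetical example.

    from pypots.forecasting import TimeLLM

    model = TimeLLM(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        term="short",
        llm_model_type="GPT2",
        n_layers=2,
        patch_size=8, patch_stride=4,
        d_llm=768,  # must match llm_model_type: 768 for GPT2/BERT, 4096 for LLaMA
        d_model=64, d_ffn=128, n_heads=4,
        dropout=0.1,
        domain_prompt_content="Hourly readings from a hypothetical sensor network.",
        epochs=10,
    )
    # model.fit(...) and model.predict(...) follow the interface documented below.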

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.gpt4ts

The implementation of GPT4TS for the partially-observed time-series forecasting task.

Refer to the paper Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, and Rong Jin. One Fits All: Power General Time Series Analysis by Pretrained LM. In Advances in Neural Information Processing Systems, 2023.

Notes

This implementation is inspired by the official one https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All

class pypots.forecasting.gpt4ts.GPT4TS(n_steps, n_features, n_pred_steps, n_pred_features, term, patch_size, patch_stride, n_layers, train_gpt_mlp, d_ffn, dropout, embed='fixed', freq='h', batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the GPT4TS forecasting model [14].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • term (str) – The forecasting term, which can be either ‘long’ or ‘short’.

  • patch_size (int) – The size of the patch for the patching mechanism.

  • patch_stride (int) – The stride for the patching mechanism.

  • n_layers (int) – The number of hidden layers to use in GPT2.

  • train_gpt_mlp (bool) – Whether to train the MLP in GPT2 during tuning.

  • d_ffn (int) – The hidden size of the feed-forward network.

  • dropout (float) – The dropout rate for the model.

  • embed (str) – The embedding method for the model.

  • freq (str) – The frequency of the time-series data.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
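A construction sketch under the same toy setting as above; the values are illustrative and have not been checked against the GPT2 backbone’s native dimensions.

    from pypots.forecasting import GPT4TS

    model = GPT4TS(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        term="short",
        patch_size=8, patch_stride=4,
        n_layers=3,           # number of GPT2 hidden layers to use
        train_gpt_mlp=False,  # keep the GPT2 MLPs frozen during tuning
        d_ffn=128,
        dropout=0.1,
        embed="fixed", freq="h",
        epochs=10,
    )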

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.moment

The package including the modules of MOMENT.

Refer to the paper Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, and Artur Dubrawski. “MOMENT: A Family of Open Time-series Foundation Models”. In ICML, 2024.

Notes

This implementation is inspired by the official one https://github.com/moment-timeseries-foundation-model/moment-research

class pypots.forecasting.moment.MOMENT(n_steps, n_features, n_pred_steps, n_pred_features, term, patch_size, patch_stride, transformer_backbone, transformer_type, n_layers, d_ffn, d_model, dropout, head_dropout, finetuning_mode, revin_affine, add_positional_embedding, value_embedding_bias, orth_gain, mask_ratio=0.3, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the MOMENT forecasting model [5].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • term (str) – The forecasting term, which can be either ‘long’ or ‘short’.

  • patch_size (int) – The patch length for patch embedding.

  • patch_stride (int) – The stride for patch embedding.

  • transformer_backbone (str) – The backbone of the transformer model. It has to be one of [“t5-small”,”t5-base”,”t5-large”,”t5-3b”,”t5-11b”, “google/flan-t5-small”,”google/flan-t5-base”,”google/flan-t5-large”,”google/flan-t5-xl”,”google/flan-t5-xxl”].

  • transformer_type (str) – The type of the transformer model. It has to be one of [“encoder_only”,”decoder_only”,”encoder_decoder”].

  • n_layers (int) – The number of layers in the transformer backbone.

  • d_ffn (int) – The hidden size of the feed-forward network.

  • d_model (int) – The hidden size of the model backbone.

  • dropout (float) – The dropout rate for the model.

  • head_dropout (float) – The dropout rate for the head of the model.

  • finetuning_mode (str) – The fine-tuning mode for the model. It has to be one of [“linear-probing”,”end-to-end”,”zero-shot”].

  • revin_affine (bool) – Whether to use affine transformation in the RevIn module.

  • add_positional_embedding (bool) – Whether to add positional embedding in the model.

  • value_embedding_bias (bool) – Whether to add bias in the value embedding.

  • orth_gain (float) – The gain for the orthogonal initialization.

  • mask_ratio (float) – The ratio of the mask for the model.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
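A hedged construction sketch follows. The values are illustrative only; d_model and d_ffn are chosen here to be consistent with the t5-small backbone (hidden size 512, feed-forward size 2048), and orth_gain is set to a commonly used value, not a documented default.

    from pypots.forecasting import MOMENT

    model = MOMENT(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        term="short",
        patch_size=8, patch_stride=8,
        transformer_backbone="t5-small",   # one of the T5/FLAN-T5 variants listed above
        transformer_type="encoder_only",   # or "decoder_only" / "encoder_decoder"
        n_layers=2,
        d_ffn=2048, d_model=512,           # matching t5-small's hidden sizes
        dropout=0.1, head_dropout=0.1,
        finetuning_mode="linear-probing",  # or "end-to-end" / "zero-shot"
        revin_affine=False,
        add_positional_embedding=True,
        value_embedding_bias=False,
        orth_gain=1.41,
        mask_ratio=0.3,
        epochs=10,
    )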

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.tefn

The implementation of TEFN for the partially-observed time-series forecasting task.

Refer to the paper Tianxiang Zhan, Yuanpeng He, Yong Deng, Zhen Li, Wenjie Du, and Qingsong Wen. Time Evidence Fusion Network: Multi-source View in Long-Term Time Series Forecasting. arXiv preprint, 2024.

Notes

This implementation is adapted from the official one https://github.com/ztxtech/Time-Evidence-Fusion-Network

class pypots.forecasting.tefn.TEFN(n_steps, n_features, n_pred_steps, n_pred_features, n_fod=2, apply_nonstationary_norm=False, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the TEFN forecasting model [7].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • n_fod (int) – The number of frames of discernment (FOD) in the TEFN model.

  • apply_nonstationary_norm (bool) – Whether to apply the non-stationary normalization to the input data.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
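Since TEFN exposes only a small set of model-specific parameters, a construction sketch is short; the values below are illustrative.

    from pypots.forecasting import TEFN

    model = TEFN(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        n_fod=2,                         # number of frames of discernment
        apply_nonstationary_norm=False,
        epochs=10,
    )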

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.segrnn

The package of the partially-observed time-series forecasting model SegRNN.

Refer to the paper Shengsheng Lin, Weiwei Lin, Wentai Wu, Feiyu Zhao, Ruichao Mo, and Haotong Zhang. SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting. arXiv preprint, 2023.

Notes

This implementation is inspired by https://github.com/thuml/Time-Series-Library

class pypots.forecasting.segrnn.SegRNN(n_steps, n_features, n_pred_steps, n_pred_features, seg_len, d_model, dropout=0, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<pypots.optim.adam.Adam object>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the SegRNN model [25].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • seg_len (int) – The segment length for the input of the RNN, typically chosen so that it divides n_steps evenly.

  • d_model (int) – The dimension of the RNN cell.

  • dropout (float) – The dropout rate of the output layer of SegRNN.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Optimizer) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
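A construction sketch follows; the values are illustrative, with seg_len picked so that it divides n_steps evenly (24 steps = 4 segments of length 6).

    from pypots.forecasting import SegRNN

    model = SegRNN(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        seg_len=6,   # 24 steps split into 4 segments of length 6
        d_model=64,
        dropout=0.1,
        epochs=10,
    )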

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None

pypots.forecasting.moderntcn

The package of the partially-observed time-series forecasting model ModernTCN.

Refer to the paper Donghao Luo and Xue Wang. ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis. In The Twelfth International Conference on Learning Representations, 2024.

Notes

This implementation is inspired by the official one https://github.com/luodhhh/ModernTCN

class pypots.forecasting.moderntcn.ModernTCN(n_steps, n_features, n_pred_steps, n_pred_features, patch_size, patch_stride, downsampling_ratio, ffn_ratio, num_blocks, large_size, small_size, dims, small_kernel_merged=False, backbone_dropout=0.1, head_dropout=0.1, use_multi_scale=True, individual=False, apply_nonstationary_norm=False, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<pypots.optim.adam.Adam object>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the ModernTCN model [11].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • patch_size (int) – The size of the patch for the patching mechanism.

  • patch_stride (int) – The stride for the patching mechanism.

  • downsampling_ratio (float) – The downsampling ratio for the downsampling mechanism.

  • ffn_ratio (float) – The ratio for the feed-forward neural network in the model.

  • num_blocks (list) – The number of blocks for the model. It should be a list of integers.

  • large_size (list) – The size of the large kernel. It should be a list of odd integers.

  • small_size (list) – The size of the small kernel. It should be a list of odd integers.

  • dims (list) – The dimensions for the model. It should be a list of integers.

  • small_kernel_merged (bool) – Whether the small kernel is merged.

  • backbone_dropout (float) – The dropout rate for the backbone of the model.

  • head_dropout (float) – The dropout rate for the head of the model.

  • use_multi_scale (bool) – Whether to use multi-scale fusing.

  • individual (bool) – Whether to make a linear layer for each variate/channel/feature individually.

  • apply_nonstationary_norm (bool) – Whether to apply non-stationary normalization.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it as the default None disables early stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Optimizer) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy to save model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy only saves the best model after training finishes. The “better” strategy saves the model during training whenever it performs better than in previous epochs. The “all” strategy saves the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.
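Since several ModernTCN arguments are per-stage lists, a hedged sketch may clarify their shapes; all values below are illustrative, assuming a two-stage backbone.

    from pypots.forecasting import ModernTCN

    model = ModernTCN(
        n_steps=24, n_features=7, n_pred_steps=12, n_pred_features=7,
        patch_size=8, patch_stride=4,
        downsampling_ratio=2,
        ffn_ratio=2,
        num_blocks=[1, 1],    # one block in each of the two stages
        large_size=[13, 13],  # odd large-kernel sizes, one per stage
        small_size=[5, 5],    # odd small-kernel sizes, one per stage
        dims=[32, 64],        # channel dimensions, one per stage
        backbone_dropout=0.1,
        head_dropout=0.1,
        epochs=10,
    )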

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that may contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_pred_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], which is the time-series data for processing. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply indicates that the saved model comes from the PyPOTS framework, so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
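Example

A minimal usage sketch with toy data and illustrative hyperparameters (the random array below merely stands in for a real partially-observed dataset, and the dataset keys follow the fit() and predict() documentation above):

    import numpy as np
    from pypots.forecasting.transformer import Transformer

    # Toy data: 64 samples, 24 steps, 7 features, with simulated missingness.
    X = np.random.randn(64, 24, 7)
    X[X < -1.5] = np.nan

    model = Transformer(
        n_steps=24,
        n_features=7,
        n_pred_steps=8,
        n_pred_features=7,
        n_encoder_layers=2,
        n_decoder_layers=2,
        d_model=64,
        n_heads=4,
        d_k=16,  # d_model / n_heads, given explicitly as the docs require
        d_v=16,
        d_ffn=128,
        epochs=5,
    )
    model.fit(train_set={"X": X})

    # predict() returns a dict keyed by 'forecasting';
    # forecast() returns the array directly.
    results = model.predict(test_set={"X": X})
    forecasting = results["forecasting"]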

pypots.forecasting.fits

The implementation of FITS for the partially-observed time-series forecasting task.

Refer to the paper Zhijian Xu, Ailing Zeng, and Qiang Xu. FITS: Modeling Time Series with 10k parameters. In The Twelfth International Conference on Learning Representations, 2024.

Notes

This implementation is inspired by the official one https://github.com/VEWOXIC/FITS

class pypots.forecasting.fits.FITS(n_steps, n_features, n_pred_steps, n_pred_features, cut_freq, individual=False, apply_nonstationary_norm=False, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the FITS forecasting model [8].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • cut_freq (int) – The cut-off frequency for the Fourier transformation.

  • individual (bool) – Whether to use individual Fourier transformation for each feature.

  • apply_nonstationary_norm (bool) – Whether to apply the non-stationary normalization to the input data.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it default as None will disable the early-stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, parallel training is only supported on CUDA devices). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy for saving model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy will automatically save only the best model once training is finished. The “better” strategy will automatically save the model during training whenever it performs better than in previous epochs. The “all” strategy will save the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. the time-series data to process. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply marks the saved model as coming from the PyPOTS framework so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
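Example

A minimal usage sketch; the cut_freq value is purely illustrative, since a good cut-off depends on which frequencies dominate the data:

    import numpy as np
    from pypots.forecasting.fits import FITS

    # Toy data: [n_samples, n_steps, n_features] with simulated missingness.
    X = np.random.randn(32, 96, 7)
    X[X < -1.5] = np.nan

    model = FITS(
        n_steps=96,
        n_features=7,
        n_pred_steps=24,
        n_pred_features=7,
        cut_freq=20,       # keep only the 20 lowest-frequency components
        individual=False,  # share one frequency-domain mapping across features
        epochs=5,
    )
    model.fit(train_set={"X": X})
    forecasting = model.predict(test_set={"X": X})["forecasting"]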

pypots.forecasting.timesnet

The package of the partially-observed time-series forecasting model TimesNet.

Refer to the paper Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In ICLR, 2023.

Notes

This implementation is inspired by the official one https://github.com/thuml/Time-Series-Library

class pypots.forecasting.timesnet.TimesNet(n_steps, n_features, n_pred_steps, n_pred_features, n_layers, top_k, d_model, d_ffn, n_kernels, dropout=0, apply_nonstationary_norm=False, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<pypots.optim.adam.Adam object>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the TimesNet model [18].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • n_layers (int) – The number of layers in the TimesNet model.

  • top_k (int) – The number of top-k amplitude values to be selected to obtain the most significant frequencies.

  • d_model (int) – The dimension of the model.

  • d_ffn (int) – The dimension of the feed-forward network.

  • n_kernels (int) – The number of 2D kernels (2D convolutional layers) to use in the submodule InceptionBlockV1.

  • dropout (float) – The dropout rate for the model.

  • apply_nonstationary_norm (bool) – Whether to apply non-stationary normalization to the input data for TimesNet. Please refer to [27] for details about non-stationary normalization, which is not part of the original TimesNet paper. Hence, we make it optional and disable it by default here.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it default as None will disable the early-stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Optimizer) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, parallel training is only supported on CUDA devices). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy for saving model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy will automatically save only the best model once training is finished. The “better” strategy will automatically save the model during training whenever it performs better than in previous epochs. The “all” strategy will save the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. the time-series data to process. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply marks the saved model as coming from the PyPOTS framework so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
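Example

A minimal usage sketch with toy data; all hyperparameter values are illustrative only:

    import numpy as np
    from pypots.forecasting.timesnet import TimesNet

    X = np.random.randn(32, 48, 5)
    X[X < -1.5] = np.nan  # simulate missing values

    model = TimesNet(
        n_steps=48,
        n_features=5,
        n_pred_steps=12,
        n_pred_features=5,
        n_layers=2,
        top_k=3,      # number of dominant frequencies modeled as 2D variations
        d_model=32,
        d_ffn=64,
        n_kernels=4,  # 2D conv kernels inside each InceptionBlockV1
        dropout=0.1,
        epochs=5,
    )
    model.fit(train_set={"X": X})
    results = model.predict(test_set={"X": X})
    forecasting = results["forecasting"]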

pypots.forecasting.timemixer

The implementation of TimeMixer for the partially-observed time-series forecasting task.

Refer to the paper Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y. Zhang, and Jun Zhou. “TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting”. In ICLR 2024.

Notes

This implementation is inspired by the official one https://github.com/kwuking/TimeMixer

class pypots.forecasting.timemixer.TimeMixer(n_steps, n_features, n_pred_steps, n_pred_features, term, n_layers, d_model, d_ffn, top_k, dropout=0, channel_independence=False, decomp_method='moving_avg', moving_avg=5, downsampling_layers=3, downsampling_window=2, use_norm=False, batch_size=32, epochs=100, patience=None, training_loss=<class 'pypots.nn.modules.loss.MSE'>, validation_metric=<class 'pypots.nn.modules.loss.MSE'>, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the TimeMixer forecasting model [9].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • term (str) – The forecasting term, which can be either ‘long’ or ‘short’.

  • n_layers (int) – The number of layers in the TimeMixer model.

  • d_model (int) – The dimension of the model.

  • d_ffn (int) – The dimension of the feed-forward network.

  • top_k (int) – The number of top-k amplitude values to be selected to obtain the most significant frequencies.

  • dropout (float) – The dropout rate for the model.

  • channel_independence (bool) – Whether to use channel independence in the model.

  • decomp_method (str) – The decomposition method for the model. It has to be one of [‘moving_avg’, ‘dft_decomp’].

  • moving_avg (int) – The window size for moving average decomposition.

  • downsampling_layers (int) – The number of downsampling layers in the model.

  • downsampling_window (int) – The window size for downsampling.

  • use_norm (bool) – Whether to apply RevIN to the input data for TimeMixer.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it default as None will disable the early-stopping.

  • training_loss (Union[Criterion, type]) – The customized loss function designed by users for training the model. If not given, will use the default loss as claimed in the original paper.

  • validation_metric (Union[Criterion, type]) – The customized metric function designed by users for validating the model. If not given, will use the default MSE metric.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, parallel training is only supported on CUDA devices). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy for saving model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy will automatically save only the best model once training is finished. The “better” strategy will automatically save the model during training whenever it performs better than in previous epochs. The “all” strategy will save the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.

fit(train_set, val_set=None, file_type='hdf5')

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', **kwargs)

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. the time-series data to process. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply marks the saved model as coming from the PyPOTS framework so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
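Example

A minimal usage sketch. Here n_steps is chosen divisible by downsampling_window ** downsampling_layers (96 / 2**3 = 12) so that every downsampled scale has an integer length; all values are illustrative:

    import numpy as np
    from pypots.forecasting.timemixer import TimeMixer

    X = np.random.randn(32, 96, 7)
    X[X < -1.5] = np.nan

    model = TimeMixer(
        n_steps=96,
        n_features=7,
        n_pred_steps=24,
        n_pred_features=7,
        term="short",                # or "long", depending on the horizon
        n_layers=2,
        d_model=32,
        d_ffn=64,
        top_k=5,
        decomp_method="moving_avg",  # or "dft_decomp"
        moving_avg=5,
        downsampling_layers=3,
        downsampling_window=2,
        epochs=5,
    )
    model.fit(train_set={"X": X})
    forecasting = model.predict(test_set={"X": X})["forecasting"]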

pypots.forecasting.csdi

The implementation of CSDI for the partially-observed time-series forecasting task.

Refer to the paper Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In NeurIPS, 2021.

Notes

This implementation is inspired by the official one https://github.com/ermongroup/CSDI

class pypots.forecasting.csdi.CSDI(n_steps, n_features, n_pred_steps, n_pred_features, n_layers, n_heads, n_channels, d_time_embedding, d_feature_embedding, d_diffusion_embedding, n_diffusion_steps=50, target_strategy='random', is_unconditional=False, schedule='quad', beta_start=0.0001, beta_end=0.5, batch_size=32, epochs=100, patience=None, optimizer=<class 'pypots.optim.adam.Adam'>, num_workers=0, device=None, saving_path=None, model_saving_strategy='best', verbose=True)[source]

Bases: BaseNNForecaster

The PyTorch implementation of the CSDI model [34].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • n_pred_steps (int) – The number of steps in the forecasting time series.

  • n_pred_features (int) – The number of features in the forecasting time series.

  • n_layers (int) – The number of layers in the CSDI model.

  • n_heads (int) – The number of heads in the multi-head attention mechanism.

  • n_channels (int) – The number of residual channels.

  • d_time_embedding (int) – The dimension number of the time (temporal) embedding.

  • d_feature_embedding (int) – The dimension number of the feature embedding.

  • d_diffusion_embedding (int) – The dimension number of the diffusion embedding.

  • is_unconditional (bool) – Whether the model is unconditional or conditional.

  • target_strategy (str) – The strategy for selecting the target for the diffusion process. It has to be one of [“mix”, “random”].

  • n_diffusion_steps (int) – The number of diffusion steps, i.e. T in the original paper.

  • schedule (str) – The schedule for other noise levels. It has to be one of [“quad”, “linear”].

  • beta_start (float) – The minimum noise level.

  • beta_end (float) – The maximum noise level.

  • batch_size (int) – The batch size for training and evaluating the model.

  • epochs (int) – The number of epochs for training the model.

  • patience (Optional[int]) – The patience for the early-stopping mechanism. Given a positive integer, the training process will be stopped when the model does not perform better after that number of epochs. Leaving it default as None will disable the early-stopping.

  • optimizer (Union[Optimizer, type]) – The optimizer for model training. If not given, will use a default Adam optimizer.

  • num_workers (int) – The number of subprocesses to use for data loading. 0 means data loading will be in the main process, i.e. there won’t be subprocesses.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, parallel training is only supported on CUDA devices). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.

  • saving_path (Optional[str]) – The path for automatically saving model checkpoints and tensorboard files (i.e. loss values recorded during training into a tensorboard file). Will not save if not given.

  • model_saving_strategy (Optional[str]) – The strategy for saving model checkpoints. It has to be one of [None, “best”, “better”, “all”]. No model will be saved when it is set as None. The “best” strategy will automatically save only the best model once training is finished. The “better” strategy will automatically save the model during training whenever it performs better than in previous epochs. The “all” strategy will save the model after every training epoch.

  • verbose (bool) – Whether to print out the training logs during the training process.

fit(train_set, val_set=None, file_type='hdf5', n_sampling_times=1)[source]

Train the forecaster on the given data.

Parameters:
  • train_set (Union[dict, str]) – The dataset for model training, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for training that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • val_set (Union[dict, str, None]) – The dataset for model validation, which should be a dictionary including the key ‘X’, or a path string locating a data file. If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. time-series data for validation that can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if train_set and val_set are path strings.

Return type:

None

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

predict(test_set, file_type='hdf5', n_sampling_times=1)[source]

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. the time-series data to process. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

  • n_sampling_times (int) – The number of sampling times for the model to sample from the diffusion process.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply marks the saved model as coming from the PyPOTS framework so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
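Example

A minimal usage sketch; because CSDI is a generative diffusion model, predict() can draw multiple samples per input via n_sampling_times. All hyperparameter values are illustrative:

    import numpy as np
    from pypots.forecasting.csdi import CSDI

    X = np.random.randn(32, 48, 5)
    X[X < -1.5] = np.nan

    model = CSDI(
        n_steps=48,
        n_features=5,
        n_pred_steps=12,
        n_pred_features=5,
        n_layers=2,
        n_heads=4,
        n_channels=32,
        d_time_embedding=64,
        d_feature_embedding=16,
        d_diffusion_embedding=64,
        n_diffusion_steps=50,
        epochs=5,
    )
    model.fit(train_set={"X": X})

    # Draw 10 forecast samples per input series from the diffusion process.
    results = model.predict(test_set={"X": X}, n_sampling_times=10)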

pypots.forecasting.bttf

The package of the partially-observed time-series forecasting model BTTF.

Refer to the paper Xinyu Chen and Lijun Sun. Bayesian Temporal Factorization for Multidimensional Time Series Prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2021.

Notes

This numpy implementation is the same as the official one from https://github.com/xinychen/transdim.

class pypots.forecasting.bttf.BTTF(n_steps, n_features, pred_step, rank, time_lags, burn_iter, gibbs_iter, gamma=10, multi_step=1, device=None)[source]

Bases: BaseForecaster

The implementation of the BTTF model [38].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • pred_step (int) – The number of time steps to forecast.

  • rank (int) – The rank of the low-rank tensor.

  • time_lags (list) – The time lags.

  • burn_iter (int) – The number of burn-in iterations.

  • gibbs_iter (int) – The number of Gibbs iterations.

  • gamma (int) – The parameter for the prior distribution in the paper.

  • multi_step (int) – The number of time steps to forecast at each iteration.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, will try to use CUDA devices first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. [‘cuda:0’, ‘cuda:1’] or [torch.device(‘cuda:0’), torch.device(‘cuda:1’)], the model will be trained in parallel on the multiple devices (so far, parallel training is only supported on CUDA devices). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.

Notes

1. n_steps must be larger than pred_step;

2. n_steps - pred_step must be larger than max(time_lags).

fit(train_set, val_set=None, file_type='hdf5')[source]

Train the forecaster on the given data.

Return type:

None

Warning

BTTF does not need to run fit(). Please run forecast() directly.

predict(test_set, file_type='hdf5', **kwargs)[source]

Make predictions for the input data with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The test dataset for the model to process, which should be a dictionary including the key ‘X’, or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, n_steps, n_features], i.e. the time-series data to process. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key ‘X’.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

The dictionary containing the forecasting results under the key ‘forecasting’, and latent variables if necessary.

Return type:

result_dict

forecast(test_set, file_type='hdf5', **kwargs)

Forecast the future of the input with the trained model.

Parameters:
  • test_set (Union[dict, str]) – The data samples for testing, which should be array-like with shape [n_samples, n_steps, n_features], or a path string locating a data file, e.g. an h5 file.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, n_pred_steps, n_features]

load(path)

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

save(saving_path, overwrite=False)

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it simply marks the saved model as coming from the PyPOTS framework so it can be easily distinguished.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

to(device)

Move the model to the given device.

Parameters:

device (Union[str, device]) – The device to move the model to. It can be a string or a torch.device object.

Return type:

None
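Example

A minimal usage sketch honoring the two constraints noted above (n_steps > pred_step and n_steps - pred_step > max(time_lags)); per the warning on fit(), no training call is needed. All values are illustrative:

    import numpy as np
    from pypots.forecasting.bttf import BTTF

    n_steps, n_features, pred_step = 120, 10, 4
    X = np.random.randn(8, n_steps, n_features)

    model = BTTF(
        n_steps=n_steps,
        n_features=n_features,
        pred_step=pred_step,   # n_steps > pred_step
        rank=10,
        time_lags=[1, 2, 3],   # max(time_lags) < n_steps - pred_step
        burn_iter=20,
        gibbs_iter=20,
        multi_step=1,
    )

    # BTTF runs its Bayesian (Gibbs) sampling directly at forecast time.
    results = model.predict(test_set={"X": X})
    forecasting = results["forecasting"]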