pypots.forecasting package#

pypots.forecasting.bttf#

The package of the partially-observed time-series forecasting model BTTF.

Refer to the paper “Chen, X., & Sun, L. (2021). Bayesian Temporal Factorization for Multidimensional Time Series Prediction. IEEE transactions on pattern analysis and machine intelligence.”

class pypots.forecasting.bttf.BTTF(n_steps, n_features, pred_step, rank, time_lags, burn_iter, gibbs_iter, multi_step=1, device=None)[source]#

Bases: BaseForecaster

The implementation of the BTTF model [14].

Parameters:
  • n_steps (int) – The number of time steps in the time-series data sample.

  • n_features (int) – The number of features in the time-series data sample.

  • pred_step (int) – The number of time steps to forecast.

  • rank (int) – The rank of the low-rank tensor factorization.

  • time_lags (list) – The time lags used by the autoregressive process on the temporal factors.

  • burn_iter (int) – The number of burn-in iterations for Gibbs sampling.

  • gibbs_iter (int) – The number of Gibbs sampling iterations.

  • multi_step (int, default = 1) – The number of time steps to forecast at each iteration.

  • device (Union[str, device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, the model will try to use CUDA devices first (the default CUDA device if there are multiple), then fall back to CPUs, considering CUDA and CPU are so far the main devices for training ML models. If given a list of devices, e.g. ['cuda:0', 'cuda:1'] or [torch.device('cuda:0'), torch.device('cuda:1')], the model will be trained in parallel on the multiple devices (so far only parallel training on CUDA devices is supported). Other devices like Google TPU and the Apple Silicon accelerator MPS may be added in the future.

Notes

1). n_steps must be larger than pred_step;

2). n_steps - pred_step must be larger than max(time_lags);
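The two constraints above can be checked before constructing the model. A minimal sketch (the helper name check_bttf_config and the example values are illustrative, not part of PyPOTS):

```python
def check_bttf_config(n_steps, pred_step, time_lags):
    """Validate the shape constraints BTTF places on its arguments."""
    if n_steps <= pred_step:
        raise ValueError(
            f"n_steps ({n_steps}) must be larger than pred_step ({pred_step})"
        )
    if n_steps - pred_step <= max(time_lags):
        raise ValueError(
            f"n_steps - pred_step ({n_steps - pred_step}) must be larger than "
            f"max(time_lags) ({max(time_lags)})"
        )


# Example: 48 observed steps, forecast 6, lags up to 24 -> both constraints hold
check_bttf_config(n_steps=48, pred_step=6, time_lags=[1, 2, 24])
```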

References

fit(train_set, val_set=None, file_type='h5py')[source]#

Train the forecaster on the given data.

Return type:

None

Warning

BTTF does not need to run fit(). Please run predict() directly.

predict(test_set, file_type='h5py')[source]#

Make predictions for the input data with the trained model.

Parameters:
  • test_set (dict or str) – The dataset for model inference, should be a dictionary including the key 'X', or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, sequence length (time steps), n_features], which is the time-series data to forecast from and can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key 'X'.

  • file_type (str) – The type of the given file if test_set is a path string.

Returns:

result_dict – Prediction results in a Python dictionary for the given samples. The dictionary only includes the keys for tasks the model supports; for a forecasting model like BTTF, that is the key 'forecasting'.

Return type:

dict
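A typical call can be sketched with synthetic data as below. The shapes, missing rate, and hyperparameter values are illustrative only, and the BTTF calls themselves are commented out so the snippet does not require a running PyPOTS installation:

```python
import numpy as np

# Synthetic partially-observed data: 16 samples, 48 time steps, 3 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 48, 3))
X[rng.random(X.shape) < 0.1] = np.nan  # ~10% missing values

# predict() accepts a dict with the key 'X' (or a path to an h5 file).
test_set = {"X": X}

# With pypots installed, forecasting then looks like:
# from pypots.forecasting import BTTF
# model = BTTF(n_steps=48, n_features=3, pred_step=6, rank=10,
#              time_lags=[1, 2, 24], burn_iter=30, gibbs_iter=10)
# results = model.predict(test_set)
# forecast = results["forecasting"]  # expected shape: [16, 6, 3]

print(test_set["X"].shape)
```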

forecast(X, file_type='h5py')[source]#

Forecast the future of the input data with the trained model.

Warning

The method forecast is deprecated. Please use predict() instead.

Parameters:
  • X (Union[dict, str]) – Time-series data containing missing values. Shape [n_samples, sequence length (time steps), n_features].

  • file_type (str) – The type of the given file if X is a path string.

Returns:

Forecasting results.

Return type:

array-like, shape [n_samples, pred_step, n_features]

load(path)#

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

load_model(path)#

Load the saved model from a disk file.

Parameters:

path (str) – The local path to a disk file saving the trained model.

Return type:

None

Notes

If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).

Warning

The method load_model is deprecated. Please use load() instead.

save(saving_path, overwrite=False)#

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it only marks the saved model as coming from the PyPOTS framework so it can be easily recognized.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None
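The extension handling described above can be sketched as follows. resolve_saving_path is an illustrative helper, not the actual PyPOTS implementation:

```python
import os


def resolve_saving_path(saving_path):
    """Append a .pypots extension if the filename lacks one, and make sure
    the target directory exists (mirroring the documented save() behavior)."""
    if not saving_path.endswith(".pypots"):
        saving_path += ".pypots"
    os.makedirs(os.path.dirname(saving_path) or ".", exist_ok=True)
    return saving_path


print(resolve_saving_path("checkpoints/bttf_model"))         # checkpoints/bttf_model.pypots
print(resolve_saving_path("checkpoints/bttf_model.pypots"))  # checkpoints/bttf_model.pypots
```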

save_model(saving_path, overwrite=False)#

Save the model with current parameters to a disk file.

A .pypots extension will be appended to the filename if it does not already have one. Note that the extension is not strictly necessary; it only marks the saved model as coming from the PyPOTS framework so it can be easily recognized.

Parameters:
  • saving_path (str) – The given path to save the model. The directory will be created if it does not exist.

  • overwrite (bool) – Whether to overwrite the model file if the path already exists.

Return type:

None

Warning

The method save_model is deprecated. Please use save() instead.