pypots.forecasting package¶
pypots.forecasting.bttf¶
The package of the partially-observed time-series forecasting model BTTF.
Notes
This numpy implementation is the same as the official one from https://github.com/xinychen/transdim.
- class pypots.forecasting.bttf.BTTF(n_steps, n_features, pred_step, rank, time_lags, burn_iter, gibbs_iter, multi_step=1, device=None)[source]¶
Bases: BaseForecaster
The implementation of the BTTF model [31].
- Parameters:
n_steps (int,) – The number of time steps in the time-series data sample.
n_features (int,) – The number of features in the time-series data sample.
pred_step (int,) – The number of time steps to forecast.
rank (int,) – The rank of the low-rank tensor.
time_lags (list,) – The time lags.
burn_iter (int,) – The number of burn-in iterations.
gibbs_iter (int,) – The number of Gibbs iterations.
multi_step (int, default = 1,) – The number of time steps to forecast at each iteration.
device (Union[str, torch.device, list, None]) – The device for the model to run on. It can be a string, a torch.device object, or a list of them. If not given, CUDA devices will be tried first (the default CUDA device if there are multiple), then CPUs, considering CUDA and CPU are so far the main devices for people to train ML models. If given a list of devices, e.g. ['cuda:0', 'cuda:1'] or [torch.device('cuda:0'), torch.device('cuda:1')], the model will be trained in parallel on the multiple devices (so far only parallel training on CUDA devices is supported). Other devices like Google TPU and Apple Silicon accelerator MPS may be added in the future.
Notes
1) n_steps must be larger than pred_step; 2) n_steps - pred_step must be larger than max(time_lags).
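These two constraints can be checked before constructing the model. A minimal stdlib sketch (the helper name is hypothetical, not part of the PyPOTS API):

```python
def check_bttf_config(n_steps, pred_step, time_lags):
    """Validate the two BTTF hyperparameter constraints noted above
    (hypothetical helper, not part of PyPOTS)."""
    # 1) n_steps must be larger than pred_step
    if n_steps <= pred_step:
        return False
    # 2) n_steps - pred_step must be larger than max(time_lags)
    if n_steps - pred_step <= max(time_lags):
        return False
    return True

print(check_bttf_config(n_steps=24, pred_step=6, time_lags=[1, 2, 12]))  # True
print(check_bttf_config(n_steps=24, pred_step=6, time_lags=[1, 2, 24]))  # False: 24 - 6 = 18 <= 24
```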
- fit(train_set, val_set=None, file_type='hdf5')[source]¶
Train the forecaster on the given data.
- Return type:
None
Warning
BTTF does not need to run fit(). Please call forecast() directly.
- predict(test_set, file_type='hdf5')[source]¶
Make predictions for the input data with the trained model.
- Parameters:
test_set (Union[dict, str]) – The dataset for model testing; it should be a dictionary including the key 'X', or a path string locating a data file supported by PyPOTS (e.g. an h5 file). If it is a dict, X should be array-like with shape [n_samples, sequence length (n_steps), n_features], i.e. the time-series data for testing, which can contain missing values. If it is a path string, the path should point to a data file, e.g. an h5 file, which contains key-value pairs like a dict and has to include the key 'X'.
file_type (str) – The type of the given file if test_set is a path string.
- Returns:
Prediction results for the given samples in a Python dictionary. Possible keys are 'imputation', 'classification', 'clustering', and 'forecasting'; only the keys whose tasks are supported by the model will be returned.
- Return type:
result_dict
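As an illustrative sketch (not PyPOTS code), a caller can pull the forecasting output from the returned dictionary; the key names come from the description above, and the helper name is hypothetical:

```python
def get_forecasting(result_dict):
    """Extract the forecasting output from a predict() result dictionary
    (hypothetical helper). Only keys for tasks the model supports are
    present; BTTF is a forecaster, so 'forecasting' is the one to read."""
    if "forecasting" not in result_dict:
        raise KeyError("model did not return forecasting results")
    return result_dict["forecasting"]

# Mock result, standing in for BTTF.predict(...) output;
# the value is assumed to have shape [n_samples, n_pred_steps, n_features].
mock = {"forecasting": [[[0.0, 1.0]]]}
print(get_forecasting(mock))
```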
- forecast(test_set, file_type='hdf5')[source]¶
Forecast the future of the input with the trained model.
- Parameters:
test_set (Union[dict, str]) – The dataset to forecast on, either a dictionary including the key 'X' or a path string locating a data file supported by PyPOTS (e.g. an h5 file).
file_type (str) – The type of the given file if test_set is a path string.
- Returns:
Forecasting results.
- Return type:
array-like, shape [n_samples, n_pred_steps, n_features]
- load(path)¶
Load the saved model from a disk file.
Notes
If the training environment and the deploying/test environment use the same type of device (GPU/CPU), you can load the model directly with torch.load(model_path).
- save(saving_path, overwrite=False)¶
Save the model with current parameters to a disk file.
A .pypots extension will be appended to the filename if it does not already have one. Note that such an extension is not required; it only indicates that the saved model comes from the PyPOTS framework, so people can distinguish it.
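The extension rule described above can be sketched with the standard library; this is an assumption-laden illustration (the helper name is hypothetical, not the PyPOTS implementation):

```python
from pathlib import Path

def with_pypots_extension(saving_path):
    """Append '.pypots' only when the filename does not already
    carry that extension (hypothetical helper, not PyPOTS code)."""
    p = Path(saving_path)
    if p.suffix != ".pypots":
        p = p.with_name(p.name + ".pypots")
    return str(p)

print(with_pypots_extension("bttf_run1"))         # bttf_run1.pypots
print(with_pypots_extension("bttf_run1.pypots"))  # bttf_run1.pypots (unchanged)
```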