pypots.nn package¶
pypots.nn.functional¶
- pypots.nn.functional.nonstationary_norm(X, missing_mask=None)[source]¶
Normalization from Non-stationary Transformer. Please refer to [20] for more details.
- Parameters:
X (torch.Tensor) – Input data to be normalized. Shape: (n_samples, n_steps (seq_len), n_features).
missing_mask (torch.Tensor, optional) – The missing mask, which has the same shape as X; 1 indicates observed and 0 indicates missing.
- Return type:
Tuple[Tensor, Tensor, Tensor]
- Returns:
X_enc (torch.Tensor) – Normalized data. Shape: (n_samples, n_steps (seq_len), n_features).
means (torch.Tensor) – Mean values for de-normalization. Shape: (n_samples, n_features) or (n_samples, 1, n_features).
stdev (torch.Tensor) – Standard deviation values for de-normalization. Shape: (n_samples, n_features) or (n_samples, 1, n_features).
- pypots.nn.functional.nonstationary_denorm(X, means, stdev)[source]¶
De-Normalization from Non-stationary Transformer. Please refer to [20] for more details.
- Parameters:
X (torch.Tensor) – Input data to be de-normalized. Shape: (n_samples, n_steps (seq_len), n_features).
means (torch.Tensor) – Mean values for de-normalization. Shape: (n_samples, n_features) or (n_samples, 1, n_features).
stdev (torch.Tensor) – Standard deviation values for de-normalization. Shape: (n_samples, n_features) or (n_samples, 1, n_features).
- Returns:
X_denorm – De-normalized data. Shape: (n_samples, n_steps (seq_len), n_features).
- Return type:
Tensor
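A minimal round-trip sketch of the two functions above, assuming only the shapes documented here; the random data and the 20% masking rate are illustrative.
```python
import torch
from pypots.nn.functional import nonstationary_norm, nonstationary_denorm

X = torch.randn(8, 24, 5)                           # (n_samples, n_steps, n_features)
missing_mask = (torch.rand_like(X) > 0.2).float()   # 1 = observed, 0 = missing
X = X * missing_mask                                # zero out the "missing" entries

X_enc, means, stdev = nonstationary_norm(X, missing_mask)   # normalize over observed values
X_denorm = nonstationary_denorm(X_enc, means, stdev)        # undo the normalization
```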
pypots.nn.modules.autoformer¶
The package including the modules of Autoformer.
Notes
This implementation is inspired by the official one https://github.com/thuml/Autoformer
- class pypots.nn.modules.autoformer.AutoCorrelation(factor=1, attention_dropout=0.1)[source]¶
- AutoCorrelation mechanism with the following two phases:
1. period-based dependencies discovery
2. time delay aggregation
This block can seamlessly replace the self-attention family of mechanisms.
- time_delay_agg_training(values, corr)[source]¶
Sped-up version of autocorrelation (a batch-normalization-style design), used in the training phase.
- time_delay_agg_inference(values, corr)[source]¶
Sped-up version of autocorrelation (a batch-normalization-style design), used in the inference phase.
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
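A standalone sketch of phase 1 (period-based dependency discovery) in the Wiener–Khinchin form used by Autoformer; the tensor layout and the factor value below are illustrative assumptions, not the exact internals of AutoCorrelation.
```python
import torch

batch, n_heads, length, d = 8, 4, 96, 16
q = torch.randn(batch, n_heads, length, d)   # illustrative layout, not the class's exact one
k = torch.randn(batch, n_heads, length, d)

# autocorrelation across the time axis, computed efficiently in the frequency domain
q_fft = torch.fft.rfft(q, dim=2)
k_fft = torch.fft.rfft(k, dim=2)
corr = torch.fft.irfft(q_fft * torch.conj(k_fft), n=length, dim=2)  # one score per time delay

# phase 2 then keeps only the top factor * log(length) delays for aggregation
factor = 1
top_k = int(factor * torch.log(torch.tensor(float(length))))
weights, delays = torch.topk(corr.mean(dim=(1, 3)), top_k, dim=-1)  # strongest delays per sample
```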
- class pypots.nn.modules.autoformer.SeasonalLayerNorm(n_channels)[source]¶
A special designed layer normalization for the seasonal part.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.autoformer.MovingAvgBlock(kernel_size, stride)[source]¶
The moving average block to highlight the trend of time series.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.autoformer.SeriesDecompositionBlock(kernel_size)[source]¶
Series decomposition block
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
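A short usage sketch of the decomposition block, assuming it returns the (seasonal, trend) pair in that order and preserves the input shape; the kernel size 25 matches the default moving-average window of the encoder layer below.
```python
import torch
from pypots.nn.modules.autoformer import SeriesDecompositionBlock

decomp = SeriesDecompositionBlock(kernel_size=25)
x = torch.randn(8, 96, 7)        # [batch_size, n_steps, n_features]
seasonal, trend = decomp(x)      # assumed ordering: residual (seasonal) first, moving average (trend) second
```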
- class pypots.nn.modules.autoformer.AutoformerEncoderLayer(attn_opt, d_model, n_heads, d_ffn, moving_avg=25, dropout=0.1, activation='relu')[source]¶
Autoformer encoder layer with the progressive decomposition architecture.
- forward(x, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.autoformer.AutoformerDecoderLayer(self_attn_opt, cross_attn_opt, d_model, n_heads, d_out, d_ff=None, moving_avg=25, dropout=0.1, activation='relu')[source]¶
Autoformer decoder layer with the progressive decomposition architecture
- forward(x, cross, x_mask=None, cross_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.autoformer.AutoformerEncoder(n_layers, d_model, n_heads, d_ffn, factor, moving_avg_window_size, dropout, activation='relu')[source]¶
- forward(x, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.brits¶
The package including the modules of BRITS.
Notes
This implementation is inspired by the official one https://github.com/caow13/BRITS. The bugs in the original implementation are fixed here.
- class pypots.nn.modules.brits.BackboneRITS(n_steps, n_features, rnn_hidden_size)[source]¶
The RITS model: Recurrent Imputation for Time Series.
- Attributes:
n_steps – sequence length (number of time steps)
n_features – number of features (input dimensions)
rnn_hidden_size – the hidden size of the RNN cell
rnn_cell – the LSTM cell to model temporal data
temp_decay_h – the temporal decay module to decay RNN hidden state
temp_decay_x – the temporal decay module to decay data in the raw feature space
hist_reg – the temporal-regression module to project RNN hidden state into the raw feature space
feat_reg – the feature-regression module
combining_weight – the module used to generate the weight to combine history regression and feature regression
- Parameters:
- forward(inputs, direction)[source]¶
- Parameters:
- Return type:
- Returns:
imputed_data – Input data with missing parts imputed. Shape of [batch size, sequence length, feature number].
estimations – Reconstructed data. Shape of [batch size, sequence length, feature number].
hidden_states (tensor) – RNN hidden states. Shape of [batch size, RNN hidden size].
reconstruction_loss – reconstruction loss
- class pypots.nn.modules.brits.BackboneBRITS(n_steps, n_features, rnn_hidden_size)[source]¶
The BRITS model: Bidirectional RITS. BRITS consists of two RITS components, which process the time-series data from two directions (forward and backward) respectively.
- Attributes:
n_steps – sequence length (number of time steps)
n_features – number of features (input dimensions)
rnn_hidden_size – the hidden size of the RNN cell
rits_f (RITS object) – the forward RITS model
rits_b (RITS object) – the backward RITS model
- forward(inputs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, ...]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.brits.FeatureRegression(input_size)[source]¶
The module used to capture the correlation between features for imputation in BRITS.
- Attributes:
W (tensor) – The weights (parameters) of the module.
b (tensor) – The bias of the module.
m (buffer, tensor) – The mask matrix, a square matrix whose diagonal entries are all zeros and whose remaining entries are all ones. It is applied to the weight matrix to mask out the estimation contributions from features themselves, which helps enhance the imputation performance of the network.
- Parameters:
input_size – the feature dimension of the input
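A standalone sketch of the diagonal masking idea described above (names and shapes are illustrative, not the module's real attributes): the zero diagonal in m prevents each feature from being regressed on itself.
```python
import torch
import torch.nn.functional as F

n_features = 4
W = torch.randn(n_features, n_features)                         # learnable weights
b = torch.zeros(n_features)                                      # learnable bias
m = torch.ones(n_features, n_features) - torch.eye(n_features)   # zeros on the diagonal, ones elsewhere

x = torch.randn(2, n_features)    # a batch of feature vectors
z_h = F.linear(x, W * m, b)       # each feature is estimated only from the other features
```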
pypots.nn.modules.crli¶
The package including the modules of CRLI.
Refer to the paper Qianli Ma, Chuxin Chen, Sen Li, and Garrison W. Cottrell. Learning Representations for Incomplete Time Series Clustering. In AAAI, 35(10):8837–8846, May 2021.
Notes
This implementation is inspired by the official one https://github.com/qianlima-lab/CRLI
- class pypots.nn.modules.crli.BackboneCRLI(n_steps, n_features, n_generator_layers, rnn_hidden_size, decoder_fcn_output_dims, rnn_cell_type='GRU')[source]¶
- forward(X, missing_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, ...]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crli.CrliGenerator(n_layers, n_features, d_hidden, cell_type)[source]¶
- forward(X, missing_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crli.CrliDecoder(n_steps, d_input, d_output, fcn_output_dims=None)[source]¶
- forward(generator_fb_hidden_states)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crli.CrliDiscriminator(cell_type, d_input)[source]¶
- forward(X, missing_mask, imputation_latent)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tensor
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.crossformer¶
The package including the modules of Crossformer.
Refer to the paper Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The 11th ICLR, 2023.
Notes
This implementation is inspired by the official one https://github.com/Thinklab-SJTU/Crossformer
- class pypots.nn.modules.crossformer.CrossformerEncoder(attn_layers)[source]¶
- forward(x, src_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crossformer.CrossformerDecoder(layers)[source]¶
- forward(x, cross)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crossformer.TwoStageAttentionLayer(seg_num, factor, d_model, n_heads, d_k, d_v, d_ff=None, dropout=0.1, attn_dropout=0.1)[source]¶
The Two-Stage Attention (TSA) layer. Input/output shape: [batch_size, Data_dim (D), Seg_num (L), d_model].
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crossformer.ScaleBlock(win_size, d_model, n_heads, d_ff, depth, dropout, seg_num, factor)[source]¶
- forward(x, attn_mask=None, tau=None, delta=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.crossformer.CrossformerDecoderLayer(self_attention, cross_attention, seg_len, d_model, d_ff=None, dropout=0.1)[source]¶
- forward(x, cross)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.csdi¶
The package including the modules of CSDI.
Refer to the paper Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In NeurIPS, 2021.
Notes
This implementation is inspired by the official one https://github.com/ermongroup/CSDI.
- class pypots.nn.modules.csdi.BackboneCSDI(n_layers, n_heads, n_channels, d_target, d_time_embedding, d_feature_embedding, d_diffusion_embedding, is_unconditional, n_diffusion_steps, schedule, beta_start, beta_end)[source]¶
- forward(observed_data, cond_mask, side_info, n_sampling_times)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.csdi.CsdiDiffusionEmbedding(n_diffusion_steps, d_embedding=128, d_projection=None)[source]¶
- forward(diffusion_step)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.csdi.CsdiDiffusionModel(n_diffusion_steps, d_diffusion_embedding, d_input, d_side, n_channels, n_heads, n_layers)[source]¶
- forward(x, cond_info, diffusion_step)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.csdi.CsdiResidualBlock(d_side, n_channels, diffusion_embedding_dim, nheads)[source]¶
- forward(x, cond_info, diffusion_emb)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.dlinear¶
The package including the modules of DLinear.
Refer to the paper Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In AAAI, volume 37, pages 11121–11128, Jun. 2023.
Notes
This implementation is inspired by the official one https://github.com/cure-lab/LTSF-Linear
- class pypots.nn.modules.dlinear.BackboneDLinear(n_steps, n_features, individual=False, d_model=None)[source]¶
- forward(seasonal_init, trend_init)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
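A minimal standalone sketch of the DLinear idea (not BackboneDLinear's exact code): the seasonal and trend components produced by a series decomposition are each mapped by a single linear layer along the time dimension; the 24-step horizon is illustrative.
```python
import torch
import torch.nn as nn

n_steps, n_pred_steps, n_features = 96, 24, 7
seasonal_init = torch.randn(8, n_steps, n_features)   # outputs of a series decomposition
trend_init = torch.randn(8, n_steps, n_features)

linear_seasonal = nn.Linear(n_steps, n_pred_steps)
linear_trend = nn.Linear(n_steps, n_pred_steps)

# apply the linear maps over time on the [batch, features, steps] view, then restore the layout
seasonal_out = linear_seasonal(seasonal_init.permute(0, 2, 1))
trend_out = linear_trend(trend_init.permute(0, 2, 1))
prediction = (seasonal_out + trend_out).permute(0, 2, 1)   # [8, n_pred_steps, n_features]
```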
pypots.nn.modules.etsformer¶
The package including the modules of ETSformer.
Refer to the paper Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, and Steven Hoi. ETSformer: Exponential smoothing transformers for time-series forecasting. In ICLR, 2023.
Notes
This implementation is inspired by the official one https://github.com/salesforce/ETSformer
- class pypots.nn.modules.etsformer.ETSformerEncoder(layers)[source]¶
- forward(res, level, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.etsformer.ETSformerEncoderLayer(d_model, n_heads, d_out, seq_len, pred_len, k, d_ffn=None, dropout=0.1, activation='sigmoid', layer_norm_eps=1e-05)[source]¶
- forward(res, level, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.etsformer.ETSformerDecoder(layers)[source]¶
- forward(growths, seasons)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.etsformer.ETSformerDecoderLayer(d_model, n_heads, d_out, pred_len, dropout=0.1)[source]¶
- forward(growth, season)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.fedformer¶
The package including the modules of FEDformer.
Notes
This implementation is inspired by the official one https://github.com/MAZiqing/FEDformer
- class pypots.nn.modules.fedformer.FEDformerEncoder(n_steps, n_layers, d_model, n_heads, d_ffn, moving_avg_window_size, dropout, version='Fourier', modes=32, mode_select='random', activation='relu')[source]¶
- forward(X, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.fedformer.FEDformerDecoder(n_steps, n_pred_steps, n_layers, n_heads, d_model, d_ffn, d_output, moving_avg_window_size, dropout, version='Fourier', modes=32, mode_select='random', activation='relu')[source]¶
- forward(X, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.fedformer.MultiWaveletTransform(ich=1, k=8, alpha=16, c=128, nCZ=1, L=0, base='legendre', attention_dropout=0.1)[source]¶
1D multiwavelet block.
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, None]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.fedformer.MultiWaveletCross(in_channels, out_channels, seq_len_q, seq_len_kv, modes, c=64, k=8, ich=512, L=0, base='legendre', mode_select_method='random', initializer=None, activation='tanh', **kwargs)[source]¶
1D Multiwavelet Cross Attention layer.
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, None]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.fedformer.FourierBlock(in_channels, out_channels, seq_len, modes=0, mode_select_method='random')[source]¶
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, None]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.fedformer.FourierCrossAttention(in_channels, out_channels, seq_len_q, seq_len_kv, modes=64, mode_select_method='random', activation='tanh', policy=0, num_heads=8)[source]¶
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, None]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.gpvae¶
The package including the modules of GP-VAE.
Notes
This implementation is inspired by the official one https://github.com/ratschlab/GP-VAE
- class pypots.nn.modules.gpvae.BackboneGPVAE(input_dim, time_length, latent_dim, encoder_sizes=(64, 64), decoder_sizes=(64, 64), beta=1, M=1, K=1, kernel='cauchy', sigma=1.0, length_scale=7.0, kernel_scales=1, window_size=24)[source]¶
The GP-VAE model with a Gaussian process prior.
- Parameters:
input_dim (int) – the feature dimension of the input
time_length (int) – the length of each time series
latent_dim (int) – the feature dimension of the latent embedding
encoder_sizes (tuple) – the tuple of the network size in encoder
decoder_sizes (tuple) – the tuple of the network size in decoder
beta (float) – the weight of the KL divergence
M (int) – the number of Monte Carlo samples for ELBO estimation
K (int) – the number of importance weights for IWAE model
kernel (str) – the Gaussian Process kernel [“cauchy”, “diffusion”, “rbf”, “matern”]
sigma (float) – the scale parameter for a kernel function
length_scale (float) – the length scale parameter for a kernel function
kernel_scales (int) – the number of different length scales over latent space dimensions
- forward(X, missing_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.grud¶
The package including the modules of GRU-D.
Notes
This implementation is inspired by the official one https://github.com/PeterChe1990/GRU-D
- class pypots.nn.modules.grud.TemporalDecay(input_size, output_size, diag=False)[source]¶
The module used to generate the temporal decay factor gamma in the GRU-D model. Please refer to the original paper [38] for more details.
- Attributes:
W (tensor) – The weights (parameters) of the module.
b (tensor) – The bias of the module.
- Parameters:
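A standalone sketch of the temporal decay factor gamma from the GRU-D paper, gamma = exp(-max(0, W·delta + b)); variable names and sizes are illustrative, not the module's actual attributes.
```python
import torch
import torch.nn.functional as F

n_features, output_size = 5, 8
W = torch.randn(output_size, n_features)   # learnable weights
b = torch.zeros(output_size)               # learnable bias

delta = torch.rand(2, n_features)                   # time gaps since each feature was last observed
gamma = torch.exp(-F.relu(F.linear(delta, W, b)))   # in (0, 1]; larger gaps decay the state more
```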
pypots.nn.modules.informer¶
The package including the modules of Informer.
Notes
This implementation is inspired by the official one https://github.com/zhouhaoyi/Informer2020
- class pypots.nn.modules.informer.ProbAttention(mask_flag=True, factor=5, attention_dropout=0.1, scale=None)[source]¶
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.informer.ConvLayer(c_in)[source]¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.informer.InformerEncoderLayer(attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]¶
- forward(x, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.informer.InformerDecoderLayer(self_attention, cross_attention, d_model, d_ff=None, dropout=0.1, activation='relu')[source]¶
- forward(x, cross, x_mask=None, cross_mask=None, tau=None, delta=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.informer.InformerEncoder(attn_layers, conv_layers=None, norm_layer=None)[source]¶
- forward(x, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.informer.InformerDecoder(layers, norm_layer=None, projection=None)[source]¶
- forward(x, cross, x_mask=None, cross_mask=None, trend=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.mrnn¶
The package including the modules of M-RNN.
Notes
This implementation is inspired by the official one https://github.com/jsyoon0823/MRNN and https://github.com/WenjieDu/SAITS
- class pypots.nn.modules.mrnn.BackboneMRNN(n_steps, n_features, rnn_hidden_size)[source]¶
- forward(inputs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.mrnn.MrnnFcnRegression(feature_num)[source]¶
The fully-connected regression layer of M-RNN.
- forward(x, missing_mask, target)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.patchtst¶
The package including the modules of PatchTST.
Refer to the paper Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In ICLR, 2023.
Notes
This implementation is inspired by the official one https://github.com/yuqinie98/PatchTST
- class pypots.nn.modules.patchtst.PatchtstEncoder(n_layers, d_model, n_heads, d_k, d_v, d_ffn, dropout, attn_dropout)[source]¶
- forward(x, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.patchtst.PatchEmbedding(d_model, patch_len, stride, padding, dropout)[source]¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
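A standalone sketch of the patching idea behind PatchEmbedding (illustrative only, not the class's exact tensor layout): each univariate series is sliced into overlapping patches that are then projected to d_model-dimensional tokens.
```python
import torch
import torch.nn as nn

batch, n_steps, patch_len, stride, d_model = 8, 96, 16, 8, 64
x = torch.randn(batch, n_steps)                                # one variable per series

patches = x.unfold(dimension=1, size=patch_len, step=stride)   # [batch, n_patches, patch_len]
proj = nn.Linear(patch_len, d_model)
tokens = proj(patches)                                         # [batch, n_patches, d_model]
```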
- class pypots.nn.modules.patchtst.RegressionHead(n_features, d_model, d_output, head_dropout, y_range=None)[source]¶
pypots.nn.modules.raindrop¶
The package including the modules of Raindrop.
Refer to the paper Xiang Zhang, Marko Zeman, Theodoros Tsiligkaridis, and Marinka Zitnik. Graph-guided network for irregularly sampled multivariate time series. In ICLR, 2022.
Notes
This implementation is inspired by the official one https://github.com/mims-harvard/Raindrop.
- class pypots.nn.modules.raindrop.BackboneRaindrop(n_features, n_layers, d_model, n_heads, d_ffn, n_classes, dropout=0.3, max_len=215, d_static=9, d_pe=16, aggregation='mean', sensor_wise_mask=False, static=False)[source]¶
pypots.nn.modules.saits¶
The package including the modules of SAITS.
Refer to the paper Wenjie Du, David Cote, and Yan Liu. SAITS: Self-Attention-based Imputation for Time Series. Expert Systems with Applications, 219:119619, 2023.
Notes
This implementation is inspired by the official one https://github.com/WenjieDu/SAITS
- class pypots.nn.modules.saits.BackboneSAITS(n_steps, n_features, n_layers, d_model, n_heads, d_k, d_v, d_ffn, dropout, attn_dropout)[source]¶
- forward(X, missing_mask, attn_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.saits.SaitsEmbedding(d_in, d_out, with_pos, n_max_steps=1000, dropout=0)[source]¶
The embedding method from the SAITS paper [1].
- Parameters:
- forward(X, missing_mask=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
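A usage sketch of SaitsEmbedding. The d_in = 2 * n_features choice assumes the module concatenates X with its missing mask before projecting to d_out, as in the SAITS paper; verify against your installed version.
```python
import torch
from pypots.nn.modules.saits import SaitsEmbedding

n_steps, n_features, d_model = 24, 10, 64
embed = SaitsEmbedding(d_in=n_features * 2, d_out=d_model, with_pos=True)

X = torch.randn(8, n_steps, n_features)
missing_mask = (torch.rand_like(X) > 0.3).float()
enc_input = embed(X, missing_mask)   # expected shape: [8, n_steps, d_model]
```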
- class pypots.nn.modules.saits.SaitsLoss(ORT_weight, MIT_weight, loss_calc_func=<function calc_mae>)[source]¶
- forward(reconstruction, X_ori, missing_mask, indicating_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.timesnet¶
The package including the modules of TimesNet.
Refer to the paper Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In ICLR, 2023.
Notes
This implementation is inspired by the official one https://github.com/thuml/Time-Series-Library
- class pypots.nn.modules.timesnet.BackboneTimesNet(n_layers, n_steps, n_pred_steps, top_k, d_model, d_ffn, n_kernels)[source]¶
- forward(X)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tensor
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.timesnet.InceptionBlockV1(in_channels, out_channels, num_kernels=6, init_weight=True)[source]¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.timesnet.TimesBlock(seq_len, pred_len, top_k, d_model, d_ffn, num_kernels)[source]¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
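A standalone sketch of TimesNet-style period discovery (illustrative, not TimesBlock's exact code): the top-k frequencies by FFT amplitude are turned into candidate periods used to fold the 1D series into a 2D tensor.
```python
import torch

x = torch.randn(8, 96, 7)                        # [batch, n_steps, n_features]
xf = torch.fft.rfft(x, dim=1)
amplitude = xf.abs().mean(dim=0).mean(dim=-1)    # average spectrum over batch and features
amplitude[0] = 0                                 # ignore the DC component

top_k = 2
_, top_freqs = torch.topk(amplitude, top_k)
periods = x.shape[1] // top_freqs                # candidate periods for the 2D reshaping
```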
pypots.nn.modules.transformer¶
The package including the modules of Transformer.
Refer to the papers Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017, and Wenjie Du, David Cote, and Yan Liu. SAITS: Self-Attention-based Imputation for Time Series. Expert Systems with Applications, 219:119619, 2023.
Notes
This implementation is inspired by https://github.com/WenjieDu/SAITS
- class pypots.nn.modules.transformer.ScaledDotProductAttention(temperature, attn_dropout=0.1)[source]¶
Scaled dot-product attention.
- Parameters:
- forward(q, k, v, attn_mask=None, **kwargs)[source]¶
Forward processing of the scaled dot-product attention.
- Parameters:
q (Tensor) – Query tensor.
k (Tensor) – Key tensor.
v (Tensor) – Value tensor.
attn_mask (Optional[Tensor]) – Masking tensor for the attention map. The shape should be [batch_size, n_heads, n_steps, n_steps]. 0 in attn_mask means values at the corresponding positions in the attention map will be masked out.
- Return type:
Tuple[Tensor, Tensor]
- Returns:
output – The result of Value multiplied with the scaled dot-product attention map.
attn – The scaled dot-product attention map.
- class pypots.nn.modules.transformer.MultiHeadAttention(attn_opt, d_model, n_heads, d_k, d_v)[source]¶
Transformer multi-head attention module.
- Parameters:
attn_opt (AttentionOperator) – The attention operator, e.g. the self-attention proposed in Transformer.
d_model (int) – The dimension of the input tensor.
n_heads (int) – The number of heads in multi-head attention.
d_k (int) – The dimension of the key and query tensor.
d_v (int) – The dimension of the value tensor.
- forward(q, k, v, attn_mask, **kwargs)[source]¶
Forward processing of the multi-head attention module.
- Parameters:
q (Tensor) – Query tensor.
k (Tensor) – Key tensor.
v (Tensor) – Value tensor.
attn_mask (Optional[Tensor]) – Masking tensor for the attention map. The shape should be [batch_size, n_heads, n_steps, n_steps]. 0 in attn_mask means values at the corresponding positions in the attention map will be masked out.
- Return type:
Tuple[Tensor, Tensor]
- Returns:
v – The output of the multi-head attention layer.
attn_weights – The attention map.
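A usage sketch combining the two classes above; the temperature sqrt(d_k) follows the standard scaled dot-product formulation, and the batch/step sizes are illustrative.
```python
import torch
from pypots.nn.modules.transformer import MultiHeadAttention, ScaledDotProductAttention

d_model, n_heads, d_k, d_v = 64, 4, 16, 16
attn_opt = ScaledDotProductAttention(temperature=d_k ** 0.5, attn_dropout=0.1)
mha = MultiHeadAttention(attn_opt, d_model, n_heads, d_k, d_v)

x = torch.randn(8, 24, d_model)                       # [batch_size, n_steps, d_model]
output, attn_weights = mha(x, x, x, attn_mask=None)   # self-attention over the time steps
```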
- class pypots.nn.modules.transformer.PositionalEncoding(d_hid, n_positions=1000)[source]¶
The original positional-encoding module for Transformer.
- Parameters:
- class pypots.nn.modules.transformer.TransformerEncoderLayer(attn_opt, d_model, n_heads, d_k, d_v, d_ffn, dropout=0.1)[source]¶
Transformer encoder layer.
- Parameters:
attn_opt (AttentionOperator) – The attention operator for the multi-head attention module in the encoder layer.
d_model (int) – The dimension of the input tensor.
n_heads (int) – The number of heads in multi-head attention.
d_k (int) – The dimension of the key and query tensor.
d_v (int) – The dimension of the value tensor.
d_ffn (int) – The dimension of the hidden layer.
dropout (float) – The dropout rate.
- class pypots.nn.modules.transformer.TransformerDecoderLayer(slf_attn_opt, enc_attn_opt, d_model, n_heads, d_k, d_v, d_ffn, dropout=0.1)[source]¶
Transformer decoder layer.
- Parameters:
slf_attn_opt (AttentionOperator) – The attention operator for the multi-head attention module in the decoder layer.
enc_attn_opt (AttentionOperator) – The attention operator for the encoding multi-head attention module in the decoder layer.
d_model (int) – The dimension of the input tensor.
n_heads (int) – The number of heads in multi-head attention.
d_k (int) – The dimension of the key and query tensor.
d_v (int) – The dimension of the value tensor.
d_ffn (int) – The dimension of the hidden layer.
dropout (float) – The dropout rate.
- forward(dec_input, enc_output, slf_attn_mask=None, dec_enc_attn_mask=None, **kwargs)[source]¶
Forward processing of the decoder layer.
- Parameters:
dec_input (Tensor) – Input tensor.
enc_output (Tensor) – Output tensor from the encoder.
slf_attn_mask (Optional[Tensor]) – Masking tensor for the self-attention module. The shape should be [batch_size, n_heads, n_steps, n_steps].
dec_enc_attn_mask (Optional[Tensor]) – Masking tensor for the encoding attention module. The shape should be [batch_size, n_heads, n_steps, n_steps].
- Return type:
Tuple[Tensor, Tensor, Tensor]
- Returns:
dec_output – Output tensor.
dec_slf_attn – The self-attention map.
dec_enc_attn – The encoding attention map.
- class pypots.nn.modules.transformer.PositionWiseFeedForward(d_in, d_hid, dropout=0.1)[source]¶
Position-wise feed forward network (FFN) in Transformer.
- Parameters:
- class pypots.nn.modules.transformer.TransformerEncoder(n_layers, d_model, n_heads, d_k, d_v, d_ffn, dropout, attn_dropout)[source]¶
Transformer encoder.
- Parameters:
n_layers (int) – The number of layers in the encoder.
d_model (int) – The dimension of the module manipulation space. The input tensor will be projected to a space with d_model dimensions.
n_heads (int) – The number of heads in multi-head attention.
d_k (int) – The dimension of the key and query tensor.
d_v (int) – The dimension of the value tensor.
d_ffn (int) – The dimension of the hidden layer in the feed-forward network.
dropout (float) – The dropout rate.
attn_dropout (float) – The dropout rate for the attention map.
- class pypots.nn.modules.transformer.TransformerDecoder(n_steps, n_features, n_layers, d_model, n_heads, d_k, d_v, d_ffn, dropout, attn_dropout)[source]¶
Transformer decoder.
- Parameters:
n_steps (int) – The number of time steps in the input tensor.
n_features (int) – The number of features in the input tensor.
n_layers (int) – The number of layers in the decoder.
d_model (int) – The dimension of the module manipulation space. The input tensor will be projected to a space with d_model dimensions.
n_heads (int) – The number of heads in multi-head attention.
d_k (int) – The dimension of the key and query tensor.
d_v (int) – The dimension of the value tensor.
d_ffn (int) – The dimension of the hidden layer in the feed-forward network.
dropout (float) – The dropout rate.
attn_dropout (float) – The dropout rate for the attention map.
- forward(trg_seq, enc_output, trg_mask=None, src_mask=None, return_attn_weights=False)[source]¶
Forward processing of the decoder.
- Parameters:
trg_seq (Tensor) – Input tensor.
enc_output (Tensor) – Output tensor from the encoder.
trg_mask (Optional[Tensor]) – Masking tensor for the self-attention module.
src_mask (Optional[Tensor]) – Masking tensor for the encoding attention module.
return_attn_weights (bool) – Whether to return the attention map.
- Return type:
- Returns:
dec_output – Output tensor.
dec_slf_attn_collector – A list containing the self-attention map from each decoder layer.
dec_enc_attn_collector – A list containing the encoding attention map from each decoder layer.
pypots.nn.modules.usgan¶
The package including the modules of USGAN.
- class pypots.nn.modules.usgan.BackboneUSGAN(n_steps, n_features, rnn_hidden_size, lambda_mse, hint_rate=0.7, dropout_rate=0.0)[source]¶
USGAN model
- forward(inputs, training_object='generator', training=True)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, ...]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
pypots.nn.modules.vader¶
The package including the modules of VaDER.
Notes
This implementation is inspired by the official one https://github.com/johanndejong/VaDER
- class pypots.nn.modules.vader.BackboneVaDER(n_steps, d_input, n_clusters, d_rnn_hidden, d_mu_stddev, eps=1e-09, alpha=1.0)[source]¶
- Parameters:
- forward(X, missing_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.vader.PeepholeLSTMCell(input_size, hidden_size, bias=True)[source]¶
Notes
This implementation is adapted from https://gist.github.com/Kaixhin/57901e91e5c5a8bac3eb0cbbdd3aba81
- forward(X, hx=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.vader.ImplicitImputation(d_input)[source]¶
- forward(X, missing_mask)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tensor
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class pypots.nn.modules.vader.GMMLayer(d_hidden, n_clusters)[source]¶
- forward()[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
- Return type:
Tuple[Tensor, Tensor, Tensor]
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.