markovflow.likelihoods.multistage_likelihood
Module containing the MultiStageLikelihood
MultiStageLikelihood
Bases: gpflow.likelihoods.MultiLatentLikelihood
The Multistage Likelihood as described in:

@inproceedings{seeger2016bayesian,
    title={Bayesian intermittent demand forecasting for large inventories},
    author={Seeger, Matthias W and Salinas, David and Flunkert, Valentin},
    booktitle={Advances in Neural Information Processing Systems},
    pages={4646--4654},
    year={2016}
}
This relates scalar data Y to latent variables F = [F0, F1, F2] through the log-conditional density:

    log p(Y|F) = δ(Y=0) * log σ(F0)
               + δ(Y=1) * (log(1 - σ(F0)) + log σ(F1))
               + δ(Y>1) * (log(1 - σ(F0)) + log(1 - σ(F1)) + log Poisson(Y-2|λ(F2)))

The stages act sequentially: σ(F0) is the probability that Y = 0; given Y > 0, σ(F1) is the probability that Y = 1; given Y > 1, the excess Y - 2 is Poisson-distributed with rate λ(F2).
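As a concrete illustration, the density above can be evaluated directly. This is a minimal NumPy/SciPy sketch, not the library's TensorFlow implementation; in particular, the softplus choice for the rate link λ is an assumption:

```python
import numpy as np
from scipy.stats import poisson

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multistage_log_prob(F, Y):
    """Evaluate log p(Y|F) for F of shape [..., 3] and Y of shape [..., 1]."""
    F0, F1, F2 = F[..., 0], F[..., 1], F[..., 2]
    y = Y[..., 0]
    lam = np.logaddexp(0.0, F2)  # softplus rate link λ(F2): an assumption
    log_p0 = np.log(sigmoid(F0))
    log_p1 = np.log(1 - sigmoid(F0)) + np.log(sigmoid(F1))
    log_p2 = (np.log(1 - sigmoid(F0)) + np.log(1 - sigmoid(F1))
              + poisson.logpmf(np.maximum(y - 2, 0), lam))
    # δ(Y=0), δ(Y=1), δ(Y>1) select exactly one branch per data point
    return np.where(y == 0, log_p0, np.where(y == 1, log_p1, log_p2))
```

Because the three stages partition the outcomes, the probabilities exp(log p(Y=y|F)) sum to one over y for any fixed F.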
A base class for likelihoods, which specifies an observation model connecting the latent functions (‘F’) to the data (‘Y’).
All of the members of this class are expected to obey some shape conventions, as specified by latent_dim and observation_dim.
If we’re operating on an array of function values ‘F’, then the last dimension represents multiple functions (preceding dimensions could represent different data points, or different random samples, for example). Similarly, the last dimension of Y represents a single data point. We check that the dimensions are as this object expects.
The return shape of every function in this class is the broadcasted shape of the arguments, excluding the last dimension of each argument.
latent_dim – the dimension of the vector F of latent functions for a single data point
observation_dim – the dimension of the observation vector Y for a single data point
_split_f
Splits the input tensor F into 3 tensors along the last dimension.

:param F: tensor of shape […, 3]
:return: tuple of 3 tensors, each of shape […, 1]
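In NumPy terms (the library itself operates on TensorFlow tensors), the split is simply:

```python
import numpy as np

def split_f(F):
    """Split F of shape [..., 3] into a tuple of three [..., 1] arrays."""
    # mirrors tf.split(F, 3, axis=-1)
    return tuple(np.split(F, 3, axis=-1))

F = np.arange(6.0).reshape(2, 3)
F0, F1, F2 = split_f(F)  # each has shape (2, 1)
```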
_log_prob
Return the log-conditional density:

    log p(Y|F) = δ(Y=0) * log σ(F0)
               + δ(Y=1) * (log(1 - σ(F0)) + log σ(F1))
               + δ(Y>1) * (log(1 - σ(F0)) + log(1 - σ(F1)) + log Poisson(Y-2|λ(F2)))
F – tensor of shape […, 3]
Y – tensor of shape […, 1]
tensor of shape […]
_variational_expectations
Returns E_{q(F)} log p(Y|F) under the factored distribution q(F) = ∏ₖ q(Fₖ) = ∏ₖ 𝓝(Fmuₖ, Fvarₖ):

    E_{q(F)} log p(Y|F) = δ(Y=0) * E_{q(F0)} log σ(F0)
                        + δ(Y=1) * (E_{q(F0)} log(1 - σ(F0)) + E_{q(F1)} log σ(F1))
                        + δ(Y>1) * (E_{q(F0)} log(1 - σ(F0)) + E_{q(F1)} log(1 - σ(F1))
                                    + E_{q(F2)} log Poisson(Y-2|λ(F2)))
Fmu – mean function evaluation Tensor, with shape […, latent_dim]
Fvar – variance of function evaluation Tensor, with shape […, latent_dim]
Y – observation Tensor, with shape […, observation_dim]
variational expectations, with shape […]
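Because q(F) factorizes over the three latents and each term of the log-density involves a single Fₖ, every expectation reduces to a one-dimensional Gaussian integral, which Gauss-Hermite quadrature handles well. A sketch of that building block (the function names here are illustrative, not MarkovFlow's API):

```python
import numpy as np

def gauss_hermite_expectation(g, mean, var, n_points=20):
    """Approximate E_{N(mean, var)}[g(f)] by Gauss-Hermite quadrature.

    With the substitution f = mean + sqrt(2*var)*x, the Gaussian integral
    becomes (1/√π) Σᵢ wᵢ g(mean + √(2·var)·xᵢ).
    """
    x, w = np.polynomial.hermite.hermgauss(n_points)
    f = mean + np.sqrt(2.0 * var) * x
    return np.sum(w * g(f)) / np.sqrt(np.pi)

# e.g. E_{q(F1)}[log σ(F1)] with q(F1) = N(0.5, 0.2)
log_sigmoid = lambda f: -np.logaddexp(0.0, -f)
expectation = gauss_hermite_expectation(log_sigmoid, 0.5, 0.2)
```

Quadrature with n points is exact for polynomials up to degree 2n - 1, so the Gaussian mean and second moment are recovered exactly.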
_predict_log_density
Here, we implement a default Gauss-Hermite quadrature routine, but some likelihoods (Gaussian, Poisson) will implement specific cases.

:param Fmu: mean function evaluation Tensor, with shape […, latent_dim]
:param Fvar: variance of function evaluation Tensor, with shape […, latent_dim]
:param Y: observation Tensor, with shape […, observation_dim]
:returns: log predictive density, with shape […]
_predict_mean_and_var
Here, we implement a default Gauss-Hermite quadrature routine, but some likelihoods (e.g. Gaussian) will implement specific cases.
mean and variance of Y, both with shape […, observation_dim]
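For intuition about what the quadrature has to integrate: conditioned on a single point F, the mean and variance of Y follow in closed form from the three stages. The helper below is illustrative, not part of the class, and the softplus rate link is an assumption:

```python
import numpy as np

def conditional_mean_and_var(F):
    """Mean and variance of Y given one F = [F0, F1, F2].

    P(Y=0) = σ(F0); P(Y=1) = (1-σ(F0))σ(F1); otherwise Y = 2 + K with
    K ~ Poisson(λ(F2)). Softplus is assumed for the rate link λ.
    """
    F0, F1, F2 = F
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    p1 = (1 - sig(F0)) * sig(F1)
    p_tail = (1 - sig(F0)) * (1 - sig(F1))
    lam = np.logaddexp(0.0, F2)  # softplus rate (assumption)
    mean = p1 + p_tail * (2.0 + lam)
    # E[(2+K)^2] = 4 + 4λ + (λ + λ²), since E[K] = λ and E[K²] = λ + λ²
    second_moment = p1 + p_tail * (4.0 + 5.0 * lam + lam ** 2)
    return mean, second_moment - mean ** 2
```

The marginal predictive moments then additionally integrate these conditional moments over the Gaussian on F.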
sample_y
Given values of the latent processes F, samples observations from P(Y|F).

:param F_samples: tensor of shape batch_shape + [3]
:return: tensor of shape batch_shape + [1]
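Sampling follows the three stages directly: first a Bernoulli draw for Y = 0, then a Bernoulli draw for Y = 1, then a shifted Poisson. A NumPy sketch under the same softplus-rate assumption (not the library's TensorFlow code):

```python
import numpy as np

def sample_y(F_samples, rng=np.random.default_rng(0)):
    """Draw Y ~ P(Y|F) for F_samples of shape batch_shape + [3]."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    F0, F1, F2 = F_samples[..., 0], F_samples[..., 1], F_samples[..., 2]
    lam = np.logaddexp(0.0, F2)                    # softplus rate link (assumption)
    is_zero = rng.random(F0.shape) < sigmoid(F0)   # stage 1: Y = 0?
    is_one = rng.random(F1.shape) < sigmoid(F1)    # stage 2: Y = 1?
    tail = 2 + rng.poisson(lam)                    # stage 3: Y = 2 + Poisson
    y = np.where(is_zero, 0, np.where(is_one, 1, tail))
    return y[..., None].astype(np.float64)         # batch_shape + [1]
```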