markovflow.models.models

Module containing base classes for models.

Note

Markovflow models are intended to work with eager mode in TensorFlow. Therefore models (and their collaborating objects) should typically avoid performing any computation in their __init__ methods. Because models and other objects are typically initialised outside of an optimisation loop, performing computation in the constructor means that this computation is performed ‘too early’, and optimisation is not possible.

Module Contents

class MarkovFlowModel(name=None)[source]

Bases: tf.Module, abc.ABC

Abstract class representing Markovflow models that depend on input data.

All Markovflow models are TensorFlow Modules, so it is possible to obtain trainable variables via the trainable_variables attribute. You can combine this with the loss() method to train the model. For example:

optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.01)
for i in range(iterations):
    model.optimization_step(optimizer)
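
Equivalently, the training step can be written explicitly in terms of loss() and trainable_variables. The following is a minimal sketch (not part of the API above), assuming model and iterations are defined elsewhere and using standard TensorFlow 2 APIs:

import tensorflow as tf

optimizer = tf.optimizers.Adam(learning_rate=0.01)
for _ in range(iterations):
    with tf.GradientTape() as tape:
        objective = model.loss()  # scalar training objective
    gradients = tape.gradient(objective, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))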

Call the predict_f() method to predict marginal function values at future time points. For example:

mean, variance = model.predict_f(validation_data_tensor)

Note

Markovflow models that extend this class must implement the loss() method and posterior attribute.

abstract loss() → tf.Tensor[source]

Obtain the loss, which you can use to train the model. It should always return a scalar.

Raises

NotImplementedError – Must be implemented in derived classes.

property posterior → markovflow.posterior.PosteriorProcess[source]

Return a posterior process from the model, which can be used for inference.

Raises

NotImplementedError – Must be implemented in derived classes.

predict_state(new_time_points: tf.Tensor) → Tuple[tf.Tensor, tf.Tensor][source]

Predict state at new_time_points. Note these time points should be sorted.

Parameters

new_time_points – Time points to generate observations for, with shape batch_shape + [num_new_time_points,].

Returns

Predicted mean and covariance for the new time points, with respective shapes batch_shape + [num_new_time_points, state_dim] and batch_shape + [num_new_time_points, state_dim, state_dim].

predict_f(new_time_points: tf.Tensor, full_output_cov: bool = False) → Tuple[tf.Tensor, tf.Tensor][source]

Predict marginal function values at new_time_points. Note these time points should be sorted.

Parameters
  • new_time_points – Time points to generate observations for, with shape batch_shape + [num_new_time_points].

  • full_output_cov – Either full output covariance (True) or marginal variances (False).

Returns

Predicted mean and covariance for the new time points, with respective shapes batch_shape + [num_new_time_points, output_dim] and either batch_shape + [num_new_time_points, output_dim, output_dim] or batch_shape + [num_new_time_points, output_dim].
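
For instance, a sketch of the two output shapes (the trained model, the sorted query points of shape [num_new_time_points] = [7], and output_dim = 1 are hypothetical):

mean, cov = model.predict_f(new_time_points)
# mean: [7, 1], cov: [7, 1]  (marginal variances)

mean, cov = model.predict_f(new_time_points, full_output_cov=True)
# mean: [7, 1], cov: [7, 1, 1]  (full output covariance)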

class MarkovFlowSparseModel(name=None)[source]

Bases: tf.Module, abc.ABC

Abstract class representing Markovflow models that do not need to store the training data (\(X, Y\)) in the model to approximate the posterior predictions \(p(f^* \mid X, Y, x^*)\).

This currently applies only to sparse variational models.

The optimization_step method should typically be used to train the model. For example:

input_data = (tf.constant(time_points), tf.constant(observations))
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.01)
for i in range(iterations):
    model.optimization_step(input_data, optimizer)
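
Equivalently, the step can be written explicitly in terms of loss(input_data) and trainable_variables. A minimal sketch, assuming time_points, observations, model and iterations are defined elsewhere:

import tensorflow as tf

input_data = (tf.constant(time_points), tf.constant(observations))
optimizer = tf.optimizers.Adam(learning_rate=0.01)
for _ in range(iterations):
    with tf.GradientTape() as tape:
        objective = model.loss(input_data)  # scalar loss for this data
    gradients = tape.gradient(objective, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))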

Call the predict_f() method to predict marginal function values at future time points. For example:

mean, variance = model.predict_f(validation_data_tensor)

Note

Markovflow models that extend this class must implement the loss() method and posterior attribute.

abstract loss(input_data: Tuple[tf.Tensor, tf.Tensor]) → tf.Tensor[source]

Obtain the loss, which can be used to train the model.

Parameters

input_data

A tuple of time points and observations containing the data at which to calculate the loss for training the model:

  • A tensor of inputs with shape batch_shape + [num_data]

  • A tensor of observations with shape batch_shape + [num_data, observation_dim]

Raises

NotImplementedError – Must be implemented in derived classes.

property posterior → markovflow.posterior.PosteriorProcess[source]

Obtain a posterior process from the model, which can be used for inference.

Raises

NotImplementedError – Must be implemented in derived classes.

predict_state(new_time_points: tf.Tensor) → Tuple[tf.Tensor, tf.Tensor][source]

Predict state at new_time_points. Note these time points should be sorted.

Parameters

new_time_points – Time points to generate observations for, with shape batch_shape + [num_new_time_points,].

Returns

Predicted mean and covariance for the new time points, with respective shapes batch_shape + [num_new_time_points, state_dim] and batch_shape + [num_new_time_points, state_dim, state_dim].

predict_f(new_time_points: tf.Tensor, full_output_cov: bool = False) → Tuple[tf.Tensor, tf.Tensor][source]

Predict marginal function values at new_time_points. Note these time points should be sorted.

Parameters
  • new_time_points – Time points to generate observations for, with shape batch_shape + [num_new_time_points].

  • full_output_cov – Either full output covariance (True) or marginal variances (False).

Returns

Predicted mean and covariance for the new time points, with respective shapes batch_shape + [num_new_time_points, output_dim] and either batch_shape + [num_new_time_points, output_dim, output_dim] or batch_shape + [num_new_time_points, output_dim].

predict_log_density(input_data: Tuple[tf.Tensor, tf.Tensor], full_output_cov: bool = False) → tf.Tensor[source]

Compute the log density of the data. That is:

\[\log \int p(y_i = Y_i \mid F_i)\, q(F_i)\, \mathrm{d}F_i\]

Parameters
  • input_data

    A tuple of time points and observations containing the data at which to evaluate the log density:

    • A tensor of inputs with shape batch_shape + [num_data]

    • A tensor of observations with shape batch_shape + [num_data, observation_dim]

  • full_output_cov – Either full output covariance (True) or marginal variances (False).

Returns

Predicted log density at input time points, with shape batch_shape + [num_data].
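
For example, a sketch of evaluating the per-datum log density on held-out data (validation_times of shape [num_data] and validation_observations of shape [num_data, observation_dim] are hypothetical names):

validation_data = (tf.constant(validation_times), tf.constant(validation_observations))
log_density = model.predict_log_density(validation_data)
# log_density has shape batch_shape + [num_data]; a scalar held-out score can be
# obtained with tf.reduce_mean(log_density).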