pyconstruct.learners.EG

class pyconstruct.learners.EG(domain=None, model=None, *, inference='map', eta0=1.0, power_t=0.5, learning_rate='decaying', n_samples=1000, **kwargs)

Learner implementing the Exponentiated Gradient (EG) algorithm.

This learner uses multiplicative weight updates as in [1].

Parameters:
  • domain (BaseDomain) – The domain of the data.
  • inference (str in ['map', 'loss_augmented_map']) – Which type of inference to perform when learning.
  • alpha (float) – The regularization coefficient.
  • train_loss (str in ['hinge', 'logistic', 'exponential']) – The training loss. The derivative of this loss is used to rescale the margin of the examples when making an update.
  • structured_loss (function (y, y) -> float) – The structured loss to compute on the objects.
  • eta0 (float) – The initial value of the learning rate.
  • power_t (float) – The power of the iteration index when using an invscaling learning_rate.
  • learning_rate (str in ['constant', 'decaying', 'invscaling']) – The learning rate strategy. The constant strategy multiplies the updates by eta0; the invscaling strategy divides the updates by the iteration number raised to the power power_t; the decaying strategy decreases monotonically within the range [0.5, 1] with the number of samples seen. Same strategy as used in [1].
  • n_samples (int) – Estimate of the number of samples in the dataset. This parameter helps set the decaying learning rate when training is started with the partial_fit method instead of the fit method.
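As a rough illustration of the constant and invscaling strategies described above, the following helper computes the step size at iteration t (this is a hypothetical sketch, not part of the pyconstruct API; the decaying strategy is omitted since its exact schedule is defined in [1]):

```python
def learning_rate(t, strategy="invscaling", eta0=1.0, power_t=0.5):
    """Illustrative learning-rate schedules (hypothetical helper).

    t : int, 1-based iteration index.
    """
    if strategy == "constant":
        # Every update is simply scaled by eta0.
        return eta0
    if strategy == "invscaling":
        # Updates are divided by the iteration number raised to power_t.
        return eta0 / (t ** power_t)
    raise ValueError(f"unknown strategy: {strategy!r}")
```

For example, with eta0=1.0 and power_t=0.5, invscaling yields step sizes 1.0, ~0.71, ~0.58, 0.5 over the first four iterations.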

References

[1] Collins, Michael, et al. “Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks.” Journal of Machine Learning Research 9.Aug (2008): 1775-1822.

Methods

decision_function(X, Y, **kwargs)
fit(X, Y, **kwargs) Fit a model with data (X, Y).
get_params([deep]) Get parameters for this estimator.
loss(X, Y, Y_pred, **kwargs)
partial_fit(X, Y[, Y_pred, Y_phi, Y_pred_phi]) Updates the current model with a mini-batch (X, Y).
phi(X, Y, **kwargs) Computes the feature vector for the given input and output objects.
predict(X, *args, **kwargs) Computes the prediction of the current model for the given input.
score(X, Y[, Y_pred]) Compute the score as the average loss over the examples.
set_params(**params) Set the parameters of this estimator.
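The multiplicative weight update at the core of the EG algorithm can be sketched as follows. This is a generic illustration of the update rule from [1], not pyconstruct's actual implementation: each weight is scaled by the exponential of the negative gradient times the learning rate, then the weights are renormalized to sum to one.

```python
import math

def eg_update(w, grad, eta):
    """One exponentiated-gradient step (illustrative sketch).

    w    : list of non-negative weights summing to 1.
    grad : gradient of the objective at w.
    eta  : learning rate for this iteration.
    """
    # Multiplicative update: weights shrink where the gradient is
    # positive and grow where it is negative.
    w_new = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    # Renormalize so the weights remain a distribution.
    z = sum(w_new)
    return [wi / z for wi in w_new]
```

Because the update is multiplicative, weights stay non-negative throughout training, which is what distinguishes EG from additive (sub)gradient methods.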