mlpack 2.2.5
AdaDelta< DecomposableFunctionType > Class Template Reference

AdaDelta is an optimizer that uses two ideas to improve upon the two main drawbacks of the Adagrad method (see the detailed description below).

Public Member Functions

 AdaDelta (DecomposableFunctionType &function, const double rho=0.95, const double eps=1e-6, const size_t maxIterations=100000, const double tolerance=1e-5, const bool shuffle=true)
 	Construct the AdaDelta optimizer with the given function and parameters.

double Epsilon () const
 	Get the value used to initialise the mean squared gradient parameter.

double & Epsilon ()
 	Modify the value used to initialise the mean squared gradient parameter.

const DecomposableFunctionType & Function () const
 	Get the instantiated function to be optimized.

DecomposableFunctionType & Function ()
 	Modify the instantiated function.

size_t MaxIterations () const
 	Get the maximum number of iterations (0 indicates no limit).

size_t & MaxIterations ()
 	Modify the maximum number of iterations (0 indicates no limit).

double Optimize (arma::mat &iterate)
 	Optimize the given function using AdaDelta.

double Rho () const
 	Get the smoothing parameter.

double & Rho ()
 	Modify the smoothing parameter.

bool Shuffle () const
 	Get whether or not the individual functions are shuffled.

bool & Shuffle ()
 	Modify whether or not the individual functions are shuffled.

double Tolerance () const
 	Get the tolerance for termination.

double & Tolerance ()
 	Modify the tolerance for termination.
 

Detailed Description

template<typename DecomposableFunctionType>
class mlpack::optimization::AdaDelta< DecomposableFunctionType >

AdaDelta is an optimizer that uses two ideas to improve upon the two main drawbacks of the Adagrad method:

 - it accumulates squared gradients over a decaying window of recent steps rather than over the entire history, so the effective learning rate does not shrink towards zero; and
 - it corrects the units of the update using an approximation to the Hessian, so no manually tuned global learning rate is required.
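In symbols (a summary of the update rule from the paper cited below, using the paper's notation rather than mlpack's member names), step $t$ maintains decaying averages of squared gradients and squared updates and applies:

  $E[g^2]_t = \rho \, E[g^2]_{t-1} + (1 - \rho) \, g_t^2$
  $\Delta x_t = - \dfrac{\sqrt{E[\Delta x^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}} \, g_t$
  $E[\Delta x^2]_t = \rho \, E[\Delta x^2]_{t-1} + (1 - \rho) \, (\Delta x_t)^2$
  $x_{t+1} = x_t + \Delta x_t$

where $g_t$ is the stochastic gradient at step $t$, $\rho$ is the smoothing parameter (Rho()), and $\epsilon$ is the conditioning value (Epsilon()).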

For more information, see the following.

@article{Zeiler2012,
  author  = {Matthew D. Zeiler},
  title   = {{ADADELTA:} An Adaptive Learning Rate Method},
  journal = {CoRR},
  year    = {2012}
}

For AdaDelta to work, a DecomposableFunctionType template parameter is required. The given function type must implement the following methods:

  size_t NumFunctions();
  double Evaluate(const arma::mat& coordinates, const size_t i);
  void Gradient(const arma::mat& coordinates, const size_t i, arma::mat& gradient);

NumFunctions() should return the number of functions ($n$), and in the other two methods, the parameter i refers to which individual function (or gradient) is being evaluated. So, for the case of a data-dependent function, such as NCA (see mlpack::nca::NCA), NumFunctions() should return the number of points in the dataset, and Evaluate(coordinates, 0) will evaluate the objective function on the first point in the dataset (presumably, the dataset is held internally in the DecomposableFunctionType).
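As an illustration, here is a minimal sketch (hypothetical, not part of mlpack) of a class that satisfies these requirements: a decomposable least-squares objective with one term per data point.

#include <mlpack/core.hpp>

// Hypothetical example: columns of 'data' are points (the mlpack convention),
// and each point contributes one separable term to the objective.
class LeastSquaresFunction
{
 public:
  LeastSquaresFunction(const arma::mat& data, const arma::vec& responses) :
      data(data), responses(responses) { }

  // Number of separable terms (one per data point).
  size_t NumFunctions() const { return data.n_cols; }

  // Objective contribution of the i-th point at the given coordinates.
  double Evaluate(const arma::mat& coordinates, const size_t i) const
  {
    const double r = arma::dot(data.col(i), coordinates) - responses(i);
    return r * r;
  }

  // Gradient of the i-th term with respect to the coordinates.
  void Gradient(const arma::mat& coordinates,
                const size_t i,
                arma::mat& gradient) const
  {
    const double r = arma::dot(data.col(i), coordinates) - responses(i);
    gradient = 2.0 * r * data.col(i);
  }

 private:
  const arma::mat& data;       // One column per point.
  const arma::vec& responses;  // One response per point.
};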

Template Parameters
DecomposableFunctionType: Decomposable objective function type to be minimized.

Definition at line 63 of file ada_delta.hpp.

Constructor & Destructor Documentation

AdaDelta(DecomposableFunctionType& function,
         const double rho = 0.95,
         const double eps = 1e-6,
         const size_t maxIterations = 100000,
         const double tolerance = 1e-5,
         const bool shuffle = true)

Construct the AdaDelta optimizer with the given function and parameters.

The defaults here are not necessarily good for the given problem, so it is suggested that the values used be tailored to the task at hand. The maximum number of iterations refers to the maximum number of points that are processed (i.e., one iteration equals one point; one iteration does not equal one pass over the dataset).

Parameters
function: Function to be optimized (minimized).
rho: Smoothing constant.
eps: Value used to initialise the mean squared gradient parameter.
maxIterations: Maximum number of iterations allowed (0 means no limit).
tolerance: Maximum absolute tolerance to terminate the algorithm.
shuffle: If true, the function order is shuffled; otherwise, each function is visited in linear order.
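For example (a hedged sketch reusing the hypothetical LeastSquaresFunction class shown in the detailed description; the include path below may differ between mlpack versions):

#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/adadelta/ada_delta.hpp>  // Path may vary by version.

using namespace mlpack::optimization;

void OptimizeExample(const arma::mat& data, const arma::vec& responses)
{
  // LeastSquaresFunction is the hypothetical class sketched earlier.
  LeastSquaresFunction f(data, responses);

  // Illustrative, non-default values; tune them to the task at hand.
  AdaDelta<LeastSquaresFunction> optimizer(f,
      0.95,    // rho: smoothing constant.
      1e-6,    // eps: initialises the mean squared gradient parameter.
      500000,  // maxIterations: counts single-point steps, not epochs.
      1e-7,    // tolerance for termination.
      true);   // shuffle: visit the points in shuffled order.
}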

Member Function Documentation

double Epsilon ( ) const
inline

Get the value used to initialise the mean squared gradient parameter.

Definition at line 111 of file ada_delta.hpp.

double& Epsilon ( )
inline

Modify the value used to initialise the mean squared gradient parameter.

Definition at line 113 of file ada_delta.hpp.

const DecomposableFunctionType& Function ( ) const
inline

Get the instantiated function to be optimized.

Definition at line 101 of file ada_delta.hpp.

DecomposableFunctionType& Function ( )
inline

Modify the instantiated function.

Definition at line 103 of file ada_delta.hpp.

size_t MaxIterations ( ) const
inline

Get the maximum number of iterations (0 indicates no limit).

Definition at line 116 of file ada_delta.hpp.

size_t& MaxIterations ( )
inline

Modify the maximum number of iterations (0 indicates no limit).

Definition at line 118 of file ada_delta.hpp.

double Optimize ( arma::mat &  iterate)

Optimize the given function using AdaDelta.

The given starting point will be modified to store the finishing point of the algorithm, and the final objective value is returned.

Parameters
iterate: Starting point (will be modified).
Returns
Objective value of the final point.
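Continuing the hypothetical sketch from the constructor documentation (names are illustrative only), a call might look like this:

  // The starting point is overwritten in place with the final coordinates;
  // the returned value is the objective at that final point.
  arma::mat coordinates = arma::randu<arma::mat>(data.n_rows, 1);
  const double objective = optimizer.Optimize(coordinates);
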
double Rho ( ) const
inline

Get the smoothing parameter.

Definition at line 106 of file ada_delta.hpp.

double& Rho ( )
inline

Modify the smoothing parameter.

Definition at line 108 of file ada_delta.hpp.

bool Shuffle ( ) const
inline

Get whether or not the individual functions are shuffled.

Definition at line 126 of file ada_delta.hpp.

bool& Shuffle ( )
inline

Modify whether or not the individual functions are shuffled.

Definition at line 128 of file ada_delta.hpp.

double Tolerance ( ) const
inline

Get the tolerance for termination.

Definition at line 121 of file ada_delta.hpp.

double& Tolerance ( )
inline

Modify the tolerance for termination.

Definition at line 123 of file ada_delta.hpp.


The documentation for this class was generated from the following file: ada_delta.hpp