Stochastic Gradient Descent is a technique for minimizing a function which can be expressed as a sum of other functions.
SGD(DecomposableFunctionType& function, const double stepSize = 0.01, const size_t maxIterations = 100000, const double tolerance = 1e-5, const bool shuffle = true)
    Construct the SGD optimizer with the given function and parameters.

const DecomposableFunctionType& Function() const
    Get the instantiated function to be optimized.

DecomposableFunctionType& Function()
    Modify the instantiated function.

size_t MaxIterations() const
    Get the maximum number of iterations (0 indicates no limit).

size_t& MaxIterations()
    Modify the maximum number of iterations (0 indicates no limit).

double Optimize(arma::mat& iterate)
    Optimize the given function using stochastic gradient descent.

template<> double Optimize(arma::mat& parameters)

bool Shuffle() const
    Get whether or not the individual functions are shuffled.

bool& Shuffle()
    Modify whether or not the individual functions are shuffled.

double StepSize() const
    Get the step size.

double& StepSize()
    Modify the step size.

double Tolerance() const
    Get the tolerance for termination.

double& Tolerance()
    Modify the tolerance for termination.
template<typename DecomposableFunctionType>
class mlpack::optimization::SGD< DecomposableFunctionType >
Stochastic Gradient Descent is a technique for minimizing a function which can be expressed as a sum of other functions. That is, suppose we have

    f(A) = \sum_{i = 0}^{n} f_i(A)

and our task is to minimize A. Stochastic gradient descent iterates over each function f_i(A), producing the following update scheme:

    A_{j + 1} = A_j - \alpha \nabla f_i(A_j)

where \alpha is a parameter which specifies the step size, and i is chosen according to j (the iteration number). The SGD class supports either scanning through each of the n functions f_i(A) linearly, or in a random sequence. The algorithm continues until j reaches the maximum number of iterations, or until a full sequence of updates through each of the n functions f_i(A) produces an improvement within a certain tolerance \epsilon. That is,

    | f(A_{j + n}) - f(A_j) | < \epsilon

The parameter \epsilon is specified by the tolerance parameter given to the constructor; the maximum number of iterations j is specified by the maxIterations parameter.
This class is useful for data-dependent functions whose objective function can be expressed as a sum of objective functions operating on an individual point. Then, SGD considers the gradient of the objective function operating on an individual point in its update of A.
For SGD to work, a DecomposableFunctionType template parameter is required. This class must implement the following function:
size_t NumFunctions();
double Evaluate(const arma::mat& coordinates, const size_t i);
void Gradient(const arma::mat& coordinates, const size_t i, arma::mat& gradient);
NumFunctions() should return the number of functions (n), and in the other two functions, the parameter i refers to which individual function (or gradient) is being evaluated. So, for the case of a data-dependent function, such as NCA (see mlpack::nca::NCA), NumFunctions() should return the number of points in the dataset, and Evaluate(coordinates, 0) will evaluate the objective function on the first point in the dataset (presumably, the dataset is held internally in the DecomposableFunctionType).
Template Parameters:
    DecomposableFunctionType: Decomposable objective function type to be minimized.
Definition at line 76 of file sgd.hpp.