1 change: 1 addition & 0 deletions questions/149_adadelta-optimizer/description.md
@@ -0,0 +1 @@
Implement the Adadelta optimizer update step function. Your function should take the current parameter value, gradient, and moving averages as inputs, and return the updated parameter value and new moving averages. The function should handle both scalar and array inputs, and include proper input validation.
5 changes: 5 additions & 0 deletions questions/149_adadelta-optimizer/example.json
@@ -0,0 +1,5 @@
{
"input": "parameter = 1.0, grad = 0.1, u = 1.0, v = 1.0, rho = 0.95, epsilon = 1e-6",
"output": "(0.89743, 0.9505, 0.95053)",
"explanation": "The Adadelta optimizer computes updated values for the parameter, first moment (u), and second moment (v). With input values parameter=1.0, grad=0.1, u=1.0, v=1.0, and rho=0.95, the updated parameter becomes 0.89743."
}
71 changes: 71 additions & 0 deletions questions/149_adadelta-optimizer/learn.md
@@ -0,0 +1,71 @@
# Implementing Adadelta Optimizer

## Introduction
Adadelta is an extension of Adagrad that addresses two key issues: the aggressive, monotonically decreasing learning rate and the need for manual learning rate tuning. While Adagrad accumulates all past squared gradients, Adadelta restricts the influence of past gradients to a window of size w. Instead of explicitly storing w past gradients, it approximates this window efficiently with an exponential moving average governed by a decay rate ρ, so that recent gradients carry more weight than distant ones. Additionally, it corrects the units of the updates automatically, eliminating the need for a manually set learning rate.

## Learning Objectives
- Understand how Adadelta optimizer works
- Learn to implement adaptive learning rates with moving averages

## Theory
Adadelta uses two main ideas:
1. Exponential moving average of squared gradients to approximate a window of size w
2. Automatic unit correction through the ratio of the RMS of accumulated parameter updates to the RMS of accumulated gradients

The key equations are:

$u_t = \rho u_{t-1} + (1-\rho)g_t^2$ (Exponential moving average of squared gradients)

The above approximates a window of size $w \approx \dfrac{1}{1-\rho}$; for example, $\rho = 0.95$ corresponds to an effective window of roughly 20 steps.

$\Delta\theta_t = -\dfrac{\sqrt{v_{t-1} + \epsilon}}{\sqrt{u_t + \epsilon}} \cdot g_t$ (Parameter update with unit correction)

$v_t = \rho v_{t-1} + (1-\rho)\Delta\theta_t^2$ (Exponential moving average of squared parameter updates)

Where:
- $u_t$ is the exponential moving average of squared gradients (decay rate ρ)
- $v_t$ is the exponential moving average of squared parameter updates (decay rate ρ)
- $\rho$ is the decay rate (0.95 in this problem) that controls the effective window size w ≈ 1/(1-ρ)
- $\epsilon$ is a small constant for numerical stability
- $g_t$ is the gradient at time step t

The ratio $\dfrac{\sqrt{v_{t-1} + \epsilon}}{\sqrt{u_t + \epsilon}}$ serves as an adaptive learning rate that automatically handles the units of the updates, making the algorithm robust to different parameter scales. Unlike Adagrad, Adadelta does not require a manually set learning rate, which makes it especially useful when tuning hyperparameters is difficult. This automatic adaptation is achieved through the ratio of the root mean square (RMS) of recent parameter updates to the RMS of recent gradients.
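
To make the update concrete, here is a minimal NumPy sketch of a single Adadelta step that applies the three equations above to the example values from this problem; the helper name `adadelta_step` is illustrative only, and the exercise itself asks for an `adadelta_optimizer` function with the signature given below.

```python
import numpy as np

def adadelta_step(theta, grad, u, v, rho=0.95, epsilon=1e-6):
    # u_t = rho * u_{t-1} + (1 - rho) * g_t**2
    u = rho * u + (1 - rho) * grad**2
    # delta_t = -sqrt(v_{t-1} + eps) / sqrt(u_t + eps) * g_t
    delta = -np.sqrt(v + epsilon) / np.sqrt(u + epsilon) * grad
    # v_t = rho * v_{t-1} + (1 - rho) * delta_t**2
    v = rho * v + (1 - rho) * delta**2
    return theta + delta, u, v

# One step with parameter=1.0, grad=0.1, u=v=1.0 reproduces the example output
print(adadelta_step(1.0, 0.1, 1.0, 1.0))  # approximately (0.89743, 0.9505, 0.95053)
```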

Read more at:

1. Zeiler, M. D. (2012). ADADELTA: An Adaptive Learning Rate Method. [arXiv:1212.5701](https://arxiv.org/abs/1212.5701)
2. Ruder, S. (2017). An overview of gradient descent optimization algorithms. [arXiv:1609.04747](https://arxiv.org/pdf/1609.04747)

## Problem Statement
Implement the Adadelta optimizer update step function. Your function should take the current parameter value, gradient, and the two running averages as inputs, and return the updated parameter value and the updated running averages.

### Input Format
The function should accept:
- parameter: Current parameter value
- grad: Current gradient
- u: Running average of squared gradients
- v: Running average of squared parameter updates
- rho: Decay rate (default=0.95)
- epsilon: Small constant for numerical stability (default=1e-6)

### Output Format
Return tuple: (updated_parameter, updated_u, updated_v)

## Example
```python
# Example usage:
parameter = 1.0
grad = 0.1
u = 1.0
v = 1.0

new_param, new_u, new_v = adadelta_optimizer(parameter, grad, u, v)
```

## Tips
- Initialize u and v as zeros at the first update step
- Use numpy for numerical operations
- Test with both scalar and array inputs (see the sketch after this list)
- The learning rate is automatically determined by the algorithm
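
Assuming `adadelta_optimizer` has been completed with the signature above, the same call works for Python scalars and element-wise for NumPy arrays; the expected values below are taken from the worked example and the provided test cases:

```python
import numpy as np

# Scalar inputs
print(adadelta_optimizer(1.0, 0.1, 1.0, 1.0))
# -> (0.89743, 0.9505, 0.95053)

# Array inputs: each parameter is updated independently, element-wise
params = np.array([1.0, 2.0])
grads = np.array([0.1, 0.2])
u = np.array([1.0, 1.0])
v = np.array([1.0, 1.0])
print(adadelta_optimizer(params, grads, u, v))
# -> (array([0.89743, 1.79502]), array([0.9505, 0.952]), array([0.95053, 0.9521]))
```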

---
17 changes: 17 additions & 0 deletions questions/149_adadelta-optimizer/meta.json
@@ -0,0 +1,17 @@
{
"id": "149",
"title": "Adadelta Optimizer",
"difficulty": "medium",
"category": "Deep Learning",
"video": "",
"likes": "0",
"dislikes": "0",
"contributor": [
{
"profile_link": "https://github.com/mavleo96",
"name": "Vijayabharathi Murugan"
}
],
"tinygrad_difficulty": null,
"pytorch_difficulty": null
}
44 changes: 44 additions & 0 deletions questions/149_adadelta-optimizer/solution.py
@@ -0,0 +1,44 @@
import numpy as np

def adadelta_optimizer(parameter, grad, u, v, rho=0.95, epsilon=1e-6):
"""
Update parameters using the AdaDelta optimizer.
AdaDelta is an extension of AdaGrad that seeks to reduce its aggressive,
monotonically decreasing learning rate.

Args:
parameter: Current parameter value
grad: Current gradient
u: Running average of squared gradients
v: Running average of squared parameter updates
rho: Decay rate for the moving average (default=0.95)
epsilon: Small constant for numerical stability (default=1e-6)

Returns:
tuple: (updated_parameter, updated_u, updated_v)
"""
assert 0 <= rho < 1, "Rho must be between 0 and 1"
assert epsilon > 0, "Epsilon must be positive"
assert np.all(u >= 0), "u must be non-negative"
assert np.all(v >= 0), "v must be non-negative"

# Update running average of squared gradients
u = rho * u + (1 - rho) * grad**2

# Compute RMS of gradient
RMS_g = np.sqrt(u + epsilon)

# Compute RMS of parameter updates
RMS_dx = np.sqrt(v + epsilon)

# Compute parameter update
dx = -RMS_dx / RMS_g * grad

# Update running average of squared parameter updates
v = rho * v + (1 - rho) * dx**2

# Update parameters
parameter = parameter + dx

return np.round(parameter, 5), np.round(u, 5), np.round(v, 5)

21 changes: 21 additions & 0 deletions questions/149_adadelta-optimizer/starter_code.py
@@ -0,0 +1,21 @@
import numpy as np

def adadelta_optimizer(parameter, grad, u, v, rho=0.95, epsilon=1e-6):
"""
Update parameters using the AdaDelta optimizer.
AdaDelta is an extension of AdaGrad that seeks to reduce its aggressive,
monotonically decreasing learning rate.

Args:
parameter: Current parameter value
grad: Current gradient
u: Running average of squared gradients
v: Running average of squared parameter updates
rho: Decay rate for the moving average (default=0.95)
epsilon: Small constant for numerical stability (default=1e-6)

Returns:
tuple: (updated_parameter, updated_u, updated_v)
"""
# Your code here
return np.round(parameter, 5), np.round(u, 5), np.round(v, 5)
22 changes: 22 additions & 0 deletions questions/149_adadelta-optimizer/tests.json
@@ -0,0 +1,22 @@
[
{
"test": "print(adadelta_optimizer(1., 0.5, 1., 1., 0.95, 1e-6))",
"expected_output": "(0.49035, 0.9625, 0.96299)"
},
{
"test": "print(adadelta_optimizer(np.array([1., 2.]), np.array([0.1, 0.2]), np.array([1., 1.]), np.array([1., 1.]), 0.95, 1e-6))",
"expected_output": "(array([0.89743, 1.79502]), array([0.9505, 0.952]), array([0.95053, 0.9521]))"
},
{
"test": "print(adadelta_optimizer(np.array([1., 2.]), np.array([0., 0.2]), np.array([0., 1.]), np.array([0., 1.]), 0.95, 1e-6))",
"expected_output": "(array([1., 1.79502]), array([0., 0.952]), array([0., 0.9521]))"
},
{
"test": "print(adadelta_optimizer(np.array([1., 1.]), np.array([1., 1.]), np.array([10000., 1.]), np.array([1., 1.]), 0.95, 1e-6))",
"expected_output": "(array([0.98974, 0.]), array([9500.05, 1.]), array([0.95001, 1.]))"
},
{
"test": "print(adadelta_optimizer(1., 0.5, 1., 1., 0., 1e-6))",
"expected_output": "(0., 0.25, 1.0)"
}
]