Gradients
Creating a tensor with gradients:
Method 1:
import torch
a = torch.ones((2,2), requires_grad=True)
print(a)
🤙
tensor([[1., 1.],
        [1., 1.]])
Check if the tensor requires gradients:
a.requires_grad
🤙
True
Method 2:
# create a tensor
a = torch.ones((2,2))
# Requires Gradient
a.requires_grad_()
# Check if it requires Gradient
a.requires_grad
🤙
True
A tensor without gradients:
# A plain tensor that does not track gradients
no_grad = torch.ones((2,2))
no_grad.requires_grad
🤙
False
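Because gradients are not tracked here, backpropagating through anything computed from this tensor fails; a minimal sketch (the name z is just for illustration):
z = (no_grad * 2).sum()
print(z.requires_grad)  # False
# z.backward() would raise a RuntimeError, because z has no grad_fn to backpropagate through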
Manually and Automatically Calculating Gradients
What exactly does requires_grad do? It enables the calculation of gradients w.r.t. the tensor and allows those gradients to accumulate in the tensor's .grad attribute.
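For example, gradients accumulate in .grad across repeated backward() calls; a minimal sketch (the tensor w and the expressions below are just for illustration):
import torch
w = torch.ones(2, requires_grad=True)
# First backward pass: d(sum(2*w))/dw = 2 for each element
(2 * w).sum().backward()
print(w.grad)  # tensor([2., 2.])
# Second backward pass adds into the same .grad buffer
(2 * w).sum().backward()
print(w.grad)  # tensor([4., 4.])
# Zero the accumulated gradients if accumulation is not wanted
w.grad.zero_()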

# Create a tensor of size 2 that requires gradients
x = torch.ones(2, requires_grad=True)
print(x)
# Simple Linear Equation with the tensor x
y = 5 * (x + 1) ** 2
print(y)
🤙
tensor([ 1., 1.]) # x
tensor([ 20., 20.]) # y
Note: .backward() should be called only on a scalar (i.e., a 1-element tensor), or on a non-scalar tensor with a matching gradient argument passed to it.
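To illustrate the second case, you can pass a gradient tensor of the same shape to backward(), which computes a vector-Jacobian product. A hedged sketch with fresh tensors (v and u are illustrative names, separate from the x and y above):
v = torch.ones(2, requires_grad=True)
u = 5 * (v + 1) ** 2
# u.backward() alone would raise an error because u is not a scalar.
# Passing a vector of ones is equivalent to backpropagating from u.sum().
u.backward(torch.ones_like(u))
print(v.grad)  # tensor([20., 20.]), since du_i/dv_i = 10 * (v_i + 1) = 20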
Let's reduce y to a scalar instead:

o = (1/2) * torch.sum(y)
o
🤙
tensor(20.)
Calculating the First Derivative:
Recap the y equation: $y_i = 5(x_i + 1)^2$
Recap the o equation: $o = \frac{1}{2}\sum_i y_i$
Substitute y into the o equation: $o = \frac{1}{2}\sum_i 5(x_i + 1)^2$
Differentiate w.r.t. x: $\frac{\partial o}{\partial x_i} = \frac{1}{2}\left[10(x_i + 1)\right] = 5(x_i + 1) = 5(1 + 1) = 10$

With PyTorch, this entire calculation comes down to a single line:
o.backward()
# Print out the first derivative.
x.grad
🤙
tensor([10., 10.])
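As a quick sanity check (a small verification sketch, not part of the original snippet), the analytic gradient 5(x_i + 1) can be computed directly and compared against x.grad:
# Analytic first derivative: do/dx_i = 5 * (x_i + 1) = 10 when x_i = 1
manual_grad = 5 * (x.detach() + 1)
print(manual_grad)                          # tensor([10., 10.])
print(torch.allclose(manual_grad, x.grad))  # True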
Note: if x requires gradients, then every tensor created from it also requires gradients.
print(x.requires_grad)
print(y.requires_grad)
print(o.requires_grad)
🤙
True
True
True
Summary
Tensor with Gradients
- Wraps a tensor for gradient accumulation
Gradients
- Define the original equation
- Substitute the equation with x values
- Reduce to a scalar output o through a mean (here ½ × sum over the 2 elements)
- Calculate gradients with o.backward()
- Then access the gradients of the x tensor through x.grad
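A compact, runnable consolidation of the steps above (same equation and the ½ × sum reduction used earlier):
import torch
# 1. Tensor with gradient tracking
x = torch.ones(2, requires_grad=True)
# 2. Define the original equation
y = 5 * (x + 1) ** 2
# 3. Reduce to a scalar output o (half the sum, i.e. the mean of the 2 elements)
o = 0.5 * torch.sum(y)
# 4. Calculate gradients with backward()
o.backward()
# 5. Access the gradients of x
print(x.grad)  # tensor([10., 10.])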