About

In this notebook, we take a baby step into the world of deep learning using PyTorch. There are plenty of notebooks out there that teach the fundamentals of deep learning and PyTorch in depth, so the idea here is to give you a high-level introduction to both. This notebook therefore targets beginners, but it can also serve as a review for more experienced developers.

After completing this notebook, you should know the basic components involved in training a simple neural network with PyTorch. I have also left a couple of exercises towards the end to encourage further research and practice of your deep learning skills.


Author: Elvis Saravia - Twitter | LinkedIn

Complete Code Walkthrough: Blog post

Importing the libraries

As with any other programming exercise, the first step is to import the necessary libraries. Since we are going to be using Google Colab to program our neural network, we need to install and import the necessary PyTorch libraries.

!pip3 install torch torchvision
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.4.0)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (0.5.0)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision) (7.0.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.18.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.12.0)
## The usual imports
import torch
import torch.nn as nn

## print out the pytorch version used
print(torch.__version__)
1.4.0

The Neural Network

Before building and training a neural network, the first step is to process and prepare the data. In this notebook, we are going to use synthetic data (i.e., fake data) rather than any real-world dataset.

For the sake of simplicity, we are going to use the following input and output pairs converted to tensors, which is how data is typically represented in the world of deep learning. The x values represent the input of dimension (6, 1) and the y values represent the output of the same dimension. The pairs follow the linear relationship y = 2x - 1. The example is taken from this tutorial.

The objective of the neural network model that we are going to build and train is to automatically learn the patterns that characterize the relationship between the x and y values. Essentially, the model learns the relationship between inputs and outputs, which it can then use to predict the corresponding y value for any given input x.

## our data in tensor form
x = torch.tensor([[-1.0],  [0.0], [1.0], [2.0], [3.0], [4.0]], dtype=torch.float)
y = torch.tensor([[-3.0], [-1.0], [1.0], [3.0], [5.0], [7.0]], dtype=torch.float)
## print size of the input tensor
x.size()
torch.Size([6, 1])
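
As a quick sanity check, we can verify that the data indeed follows the y = 2x - 1 relationship mentioned above. This snippet is just an illustrative check, not part of the training workflow:

## verify the underlying linear relationship
print(torch.allclose(y, 2 * x - 1))
True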

The Neural Network Components

As mentioned earlier, we first define and build out the components of our neural network before training the model.

Model

Typically, when building a neural network model, we define the layers and weights that form the basic components of the model. Below we show an example of how to define a hidden layer named layer1 with input and output size (1, 1). For the purpose of this tutorial, we won't explicitly define the weights; instead, we let the built-in functions provided by PyTorch handle that part for us. By the way, the nn.Linear(...) function applies a linear transformation ($y = xA^T + b$) to the data provided as its input. We ignore the bias for now by setting bias=False.

## Neural network with 1 hidden layer
layer1 = nn.Linear(1,1, bias=False)
model = nn.Sequential(layer1)
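
Since we let PyTorch initialize the weight for us, we can peek at the randomly initialized weight of layer1 before training. The exact value will differ from run to run:

## inspect the randomly initialized weight
print(layer1.weight)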

Loss and Optimizer

The loss function, nn.MSELoss(), is in charge of letting the model know how well it has learned the relationship between the input and output. The optimizer's (in this case SGD) primary role is to minimize or lower that loss value by tuning the model's weights.

## loss function
criterion = nn.MSELoss()

## optimizer algorithm
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
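
To make the loss more concrete: MSE is simply the mean of the squared differences between the predictions and the targets. The following sketch computes it by hand and checks that it matches nn.MSELoss:

## MSE computed manually vs. with nn.MSELoss
with torch.no_grad():
    preds = model(x)
    manual_mse = ((preds - y) ** 2).mean()
    print(torch.allclose(manual_mse, criterion(preds, y)))
True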

Training the Neural Network Model

We now have all the components we need to train our model. Below is the training code.

In simple terms, we train the model by feeding it the input and output pairs for a number of rounds (i.e., epochs). After a series of forward and backward steps, the model gradually learns the relationship between the x and y values, which is evident from the decrease in the computed loss. For a more detailed explanation of this code, check out this tutorial.

## training
for i in range(150):
    model.train()

    ## forward pass: compute predictions and the loss
    output = model(x)
    loss = criterion(output, y)

    ## clear gradients accumulated from the previous step
    optimizer.zero_grad()

    ## backward pass + update model params
    loss.backward()
    optimizer.step()

    model.eval()
    print('Epoch: %d | Loss: %.4f' % (i, loss.detach().item()))
Epoch: 0 | Loss: 25.5853
Epoch: 1 | Loss: 20.6815
Epoch: 2 | Loss: 16.7388
Epoch: 3 | Loss: 13.5688
Epoch: 4 | Loss: 11.0201
Epoch: 5 | Loss: 8.9709
Epoch: 6 | Loss: 7.3234
Epoch: 7 | Loss: 5.9987
Epoch: 8 | Loss: 4.9337
Epoch: 9 | Loss: 4.0774
Epoch: 10 | Loss: 3.3889
Epoch: 11 | Loss: 2.8353
Epoch: 12 | Loss: 2.3903
Epoch: 13 | Loss: 2.0325
Epoch: 14 | Loss: 1.7448
Epoch: 15 | Loss: 1.5134
Epoch: 16 | Loss: 1.3275
Epoch: 17 | Loss: 1.1779
Epoch: 18 | Loss: 1.0577
Epoch: 19 | Loss: 0.9610
Epoch: 20 | Loss: 0.8833
Epoch: 21 | Loss: 0.8208
Epoch: 22 | Loss: 0.7706
Epoch: 23 | Loss: 0.7302
Epoch: 24 | Loss: 0.6977
Epoch: 25 | Loss: 0.6716
Epoch: 26 | Loss: 0.6506
Epoch: 27 | Loss: 0.6338
Epoch: 28 | Loss: 0.6202
Epoch: 29 | Loss: 0.6093
Epoch: 30 | Loss: 0.6005
Epoch: 31 | Loss: 0.5935
Epoch: 32 | Loss: 0.5878
Epoch: 33 | Loss: 0.5832
Epoch: 34 | Loss: 0.5796
Epoch: 35 | Loss: 0.5766
Epoch: 36 | Loss: 0.5742
Epoch: 37 | Loss: 0.5723
Epoch: 38 | Loss: 0.5708
Epoch: 39 | Loss: 0.5696
Epoch: 40 | Loss: 0.5686
Epoch: 41 | Loss: 0.5678
Epoch: 42 | Loss: 0.5671
Epoch: 43 | Loss: 0.5666
Epoch: 44 | Loss: 0.5662
Epoch: 45 | Loss: 0.5659
Epoch: 46 | Loss: 0.5656
Epoch: 47 | Loss: 0.5654
Epoch: 48 | Loss: 0.5652
Epoch: 49 | Loss: 0.5651
Epoch: 50 | Loss: 0.5650
Epoch: 51 | Loss: 0.5649
Epoch: 52 | Loss: 0.5648
Epoch: 53 | Loss: 0.5648
Epoch: 54 | Loss: 0.5647
Epoch: 55 | Loss: 0.5647
Epoch: 56 | Loss: 0.5646
Epoch: 57 | Loss: 0.5646
Epoch: 58 | Loss: 0.5646
Epoch: 59 | Loss: 0.5646
Epoch: 60 | Loss: 0.5646
Epoch: 61 | Loss: 0.5646
Epoch: 62 | Loss: 0.5645
...
Epoch: 149 | Loss: 0.5645

Testing the Model

After training the model, we can test its predictive capability by passing it an input. Below is a simple example of how you could achieve this with our model. The result we obtain aligns with the results obtained in this notebook, which inspired this entire tutorial.

## test the model
sample = torch.tensor([10.0], dtype=torch.float)
predicted = model(sample)
print(predicted.detach().item())
17.096769332885742
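
Note that the prediction is roughly 17.10 rather than the 19 that y = 2x - 1 would give for x = 10. Because we set bias=False, the model can only learn a line through the origin, and the best such fit for our data has slope sum(x*y) / sum(x^2) = 53/31 ≈ 1.7097. This also explains why the training loss plateaued at about 0.5645 instead of reaching zero. The following sketch verifies the closed-form least-squares slope:

## closed-form least-squares slope for a linear model without bias
w = (x * y).sum() / (x * x).sum()
print(w.item())        ## ~1.7097
print((w * 10).item()) ## ~17.0968, matching the model's prediction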

Final Words

Congratulations! In this tutorial, you learned how to train a simple neural network using PyTorch. You also learned about the basic components that make up a neural network model, such as the linear transformation layer, the optimizer, and the loss function. We then trained the model and tested its predictive capabilities. You are well on your way to becoming more knowledgeable about deep learning and PyTorch. I have provided a bunch of references below if you are interested in practicing and learning more.

I would like to thank Laurence Moroney for his excellent tutorial which I used as an inspiration for this tutorial.

Exercises

  • Add more examples to the input and output tensors. In addition, try changing the dimensions of the data, say by adding an extra value to each array. What needs to change to successfully train the network with the new data?
  • The model converged really fast, which means it learned the relationship between the x and y values after just a few iterations. Do you think it makes sense to continue training? How would you automate the process of stopping the training once the model loss stops changing substantially? (See the sketch after this list for one possible starting point.)
  • In our example, we used a single hidden layer. Take a look at the PyTorch documentation to figure out what you need to do to build a model with more layers. What happens if you add more hidden layers?
  • We did not discuss the learning rate (lr=0.01) and the optimizer in great detail. Check out the PyTorch documentation to learn more about what other optimizers you can use.
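
For the early-stopping exercise above, here is a minimal sketch to get you started. It is just one possible approach: the tolerance value (1e-6) and the maximum number of epochs are arbitrary choices, and you could also monitor a validation loss instead of the training loss:

## early stopping: halt once the loss stops improving
prev_loss = float('inf')
tolerance = 1e-6  ## arbitrary threshold; tune as needed

for i in range(1000):
    output = model(x)
    loss = criterion(output, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    ## stop when the change in loss between epochs is negligible
    if abs(prev_loss - loss.item()) < tolerance:
        print('Stopping early at epoch %d | Loss: %.4f' % (i, loss.item()))
        break
    prev_loss = loss.item()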