MachineX: Alphabets of PyTorch (Part 1)

Reading Time: 6 minutes

Overview

In this blog, you’ll get an introduction to deep learning with the PyTorch framework and a look at some of PyTorch’s basics.

Introduction to PyTorch

PyTorch is a Python machine learning package based on Torch, an open-source machine learning library built on the Lua programming language.

Two main features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Automatic differentiation for building and training neural networks (a minimal sketch follows this list)
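
As a minimal sketch of the second feature: if we mark a tensor with requires_grad=True, PyTorch records the operations performed on it and computes gradients automatically.

# automatic differentiation: y = x^2, so dy/dx = 2x
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()      # computes dy/dx and stores it in x.grad
print(x.grad)     # tensor(6.)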

Benefits of Using PyTorch

There are a few reasons you might prefer PyTorch to other deep learning libraries:

  1. Unlike some well-known libraries such as TensorFlow, where you have to define an entire computational graph before you can run your model, PyTorch allows you to define your graph dynamically (see the sketch after this list).
  2. It is also great for deep learning research and provides maximum flexibility and speed.
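
Here is a minimal sketch of what “dynamically” means in practice: ordinary Python control flow can change the shape of the computation on every call, and gradients still flow through whichever branch actually ran.

import torch

def forward(x):
    # plain Python control flow decides the graph structure at run time
    if x.sum() > 0:
        return x * 2
    return x - 1

x = torch.randn(3, requires_grad=True)
y = forward(x).sum()
y.backward()      # gradients flow through the branch that executed
print(x.grad)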

Basics of PyTorch

PyTorch is very similar to NumPy, as I said earlier. Let’s look at some basic operations and see how they mirror NumPy’s.

In the NumPy library, we have multi-dimensional arrays whereas in PyTorch, we have tensors. So, let’s first understand what tensors are.

Introduction to Tensors

Tensors are multidimensional arrays. And PyTorch tensors are similar to NumPy’s n-dimensional arrays. We can use these tensors on a GPU as well (this is not the case with NumPy arrays). This is a major advantage of using tensors.
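
As a small sketch of that advantage (assuming an NVIDIA GPU with CUDA support; otherwise the code simply falls back to the CPU):

import torch

# pick the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

t = torch.ones(3, 3)
t = t.to(device)      # move the tensor to the chosen device
print(t.device)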

PyTorch supports multiple types of tensors, including:

  • FloatTensor: 32-bit float
  • DoubleTensor: 64-bit float
  • HalfTensor: 16-bit float
  • IntTensor: 32-bit int
  • LongTensor: 64-bit int
Let’s start by importing both libraries:

# importing libraries
import numpy as np
import torch
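
As a quick sketch, you can request any of the tensor types listed above either through the dtype argument or through the corresponding tensor class:

# choosing a tensor type via dtype or the tensor class
a = torch.tensor([1, 2, 3], dtype=torch.float32)   # FloatTensor
b = torch.tensor([1, 2, 3], dtype=torch.int64)     # LongTensor
c = torch.FloatTensor([1, 2, 3])                   # equivalent to dtype=torch.float32
print(a.dtype, b.dtype, c.dtype)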

Now, let’s see how we can assign a variable in NumPy as well as PyTorch:

# initializing a numpy array
a = np.array(1)

# initializing a tensor
b = torch.tensor(1)

print(a)
print(b)

Running this prints 1 for the NumPy array and tensor(1) for the tensor. Let’s quickly check the type of both these variables:

type(a), type(b)

As we can see, the first variable (a) is a NumPy array whereas the second variable (b) is a torch tensor.

Now let’s look at some mathematical operations on these tensors and compare them with NumPy’s mathematical operations.

Some Mathematical Operations

We will initialize two arrays and then perform mathematical operations like addition, subtraction, multiplication, and division on them:

# initializing two arrays
a = np.array(2)
b = np.array(1)
print(a,b)

These are the two NumPy arrays we have initialized. Now let’s see how we can perform mathematical operations on these arrays:

# addition
print(a+b)

# subtraction
print(b-a)

# multiplication
print(a*b)

# division
print(a/b)

Let’s now see how we can do the same using PyTorch on tensors. So, first, let’s initialize two tensors:

# initializing two tensors
a = torch.tensor(2)
b = torch.tensor(1)
print(a,b)

Next, perform the operations which we saw in NumPy:

# addition
print(a+b)

# subtraction
print(b-a)

# multiplication
print(a*b)

# division
print(a/b)

Did you see the similarities? The code for the above mathematical operations is exactly the same in NumPy and PyTorch. (One note on division: in recent PyTorch versions, dividing two integer tensors performs true division and returns a floating-point result, just as NumPy does for a/b.)

Next, let’s see how to initialize a matrix as well as perform matrix operations in PyTorch (along with, you guessed it, its NumPy counterpart!).

Matrix Initialization

Let’s say we want a matrix of shape 3x3 having all zeros. Take a moment to think – how can we do that using NumPy?

# matrix of zeros
a = np.zeros((3,3))
print(a)
print(a.shape)

Fairly straightforward. We just have to use the zeros() function of NumPy and pass the desired shape ((3,3) in our case), and we get a matrix consisting of all zeros. Let’s now see how we can do this in PyTorch:

# matrix of zeros
a = torch.zeros((3,3))
print(a)
print(a.shape)

Similar to NumPy, PyTorch also has the zeros() function, which takes the shape as input and returns a matrix of zeros of the specified shape.
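
Zeros are not the only built-in initializer; as a small sketch, torch.ones() and torch.eye() work the same way (with np.ones() and np.eye() as the NumPy counterparts):

# matrix of ones and a 3x3 identity matrix
print(torch.ones((3,3)))
print(torch.eye(3))

Now, while building a neural network, we randomly initialize the weights for the model. So, let’s see how we can initialize a matrix with random numbers: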

# setting the random seed for numpy
np.random.seed(42)
# matrix of random numbers
a = np.random.randn(3,3)
a

We have specified the random seed at the beginning here so that the same random numbers are generated every time we run the code. The random.randn() function returns random numbers that follow a standard normal distribution.

But let’s not get waylaid by the statistics part of things. We’ll focus on how we can initialize a similar matrix of random numbers using PyTorch:

# setting the random seed for pytorch
torch.manual_seed(42)
# matrix of random numbers
a = torch.randn(3,3)
a

This is where even more similarities with NumPy crop up. PyTorch also has a function called randn() that returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

Note that we have set the random seed here as well just to reproduce the results every time you run this code. So far, we have seen how to initialize a matrix using PyTorch. Next, let’s see how to perform matrix operations in PyTorch.

Matrix Operations

We will first initialize two matrices in NumPy:

# setting the random seed for numpy and initializing two matrices
np.random.seed(42)
a = np.random.randn(3,3)
b = np.random.randn(3,3)

Next, let’s perform basic operations on them using NumPy:

# matrix addition
print(np.add(a,b), '\n')

# matrix subtraction
print(np.subtract(a,b), '\n')

# matrix multiplication
print(np.dot(a,b), '\n')

# matrix division
print(np.divide(a,b))

Matrix transpose is another technique that is very useful while creating a neural network from scratch. So let’s see how we take the transpose of a matrix in NumPy:

# original matrix
print(a, '\n')

# matrix transpose
print(np.transpose(a))

The transpose() function of NumPy automatically returns the transpose of a matrix. How does this happen in PyTorch? Let’s find out:

# setting the random seed for pytorch and initializing two tensors
torch.manual_seed(42)
a = torch.randn(3,3)
b = torch.randn(3,3)
# matrix addition
print(torch.add(a,b), '\n')

# matrix subtraction
print(torch.sub(a,b), '\n')

# matrix multiplication
print(torch.mm(a,b), '\n')

# matrix division
print(torch.div(a,b))
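
One distinction worth a quick sketch before moving on: a*b multiplies element-wise (like np.multiply), while torch.mm() performs true matrix multiplication:

# element-wise product vs. matrix product
print(a * b, '\n')       # each entry of a times the matching entry of b
print(torch.mm(a, b))    # rows of a dotted with columns of b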

Note that the .mm() function of PyTorch is similar to the dot product in NumPy. This function will be helpful when we create our model from scratch in PyTorch. Calculating transpose is also similar to NumPy:

# original matrix
print(a, '\n')

# matrix transpose
print(torch.t(a))

Next, we will look at some other common operations like concatenating and reshaping tensors. From this point forward, I will not be comparing PyTorch against NumPy, as you should have a good idea by now of how similar the code is.

Concatenating Tensors

Let’s say we have two tensors as shown below:

# initializing two tensors
a = torch.tensor([[1,2],[3,4]])
b = torch.tensor([[5,6],[7,8]])
print(a, '\n')
print(b)

What if we want to concatenate these tensors vertically? We can use the below code:

# concatenating vertically
torch.cat((a,b))

As you can see, the second tensor has been stacked below the first tensor. We can concatenate the tensors horizontally as well by setting the dim parameter to 1:

# concatenating horizontally
torch.cat((a,b),dim=1)
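
We also promised a look at reshaping, so here is a minimal sketch using reshape() (view() behaves the same way for contiguous tensors):

# reshaping tensors
print(a.reshape(1, 4), '\n')   # 2x2 -> 1x4
print(a.reshape(-1))           # -1 lets PyTorch infer the size: a flat tensor of 4 elements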

So these were some of the basic operations, which show how similar Torch tensors are to NumPy’s arrays.

In the next part, we will cover some common modules like autograd and the neural network module, so stay tuned!

Happy learning! 🙂

Written by 

Shubham Goyal is a Data Scientist at Knoldus Inc. He is also an artificial intelligence researcher interested in problems across different domains, and a regular contributor to the community through blogs and webinars on machine learning and artificial intelligence. He has also written a few research papers on machine learning. He is, moreover, a conference speaker and an official author at Towards Data Science.
