To understand the single-layer perceptron, it is important to first understand artificial neural networks (ANN). An artificial neural network is an information processing system whose mechanism is inspired by the functioning of biological neural circuits. An artificial neural network consists of many interconnected computing units. The schematic diagram of an artificial neural network is as follows.

The figure shows that the hidden units communicate with the external layer, while the input and output units communicate only through the hidden layers of the network.
The connection pattern between the nodes, the total number of layers, the level of nodes between inputs and outputs, and the number of neurons per layer define the architecture of a neural network.
There are two types of architecture, which differ in the functionality of the artificial neural network, as follows −
- Single Layer Perceptron
- Multi-Layer Perceptron
Single Layer Perceptron
The single-layer perceptron is the first proposed neural model. The contents of the neuron’s local memory consist of a vector of weights. The computation of a single-layer perceptron is performed by summing each input value multiplied by the corresponding element of the weight vector. This sum is then fed into the activation function, whose result is the value shown at the output.

A single layer perceptron (SLP) is a feed-forward network based on a threshold transfer function. The SLP is the simplest type of artificial neural network and can only classify linearly separable cases with a binary target (1, 0).
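As a minimal sketch of this computation (the input values, weights, bias, and threshold below are illustrative, not taken from the article's example), the output is simply the threshold transfer function applied to the weighted sum of the inputs:

import numpy as np

def slp_output(x, w, b, threshold=0.0):
    # Weighted sum of the inputs plus a bias term
    weighted_sum = np.dot(w, x) + b
    # Threshold transfer function: fire only when the sum reaches the threshold
    return 1 if weighted_sum >= threshold else 0

# Illustrative values only
x = np.array([0.5, 1.0, -0.2])
w = np.array([0.4, -0.6, 0.9])
print(slp_output(x, w, b=0.1))  # 0, since 0.4*0.5 - 0.6*1.0 + 0.9*(-0.2) + 0.1 = -0.48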
Activation Function and its Significance
Activation functions are the decision-making units of neural networks. They calculate the net output of a neural node. The Heaviside step function is one of the most common activation functions in neural networks. It produces binary output, which is why it is also called the binary step function. The function produces 1 (or true) when the input passes the threshold limit, and 0 (or false) when it does not. This makes it very useful for binary classification problems.
An activation function is a non-linear transformation applied to the input before sending it to the next layer of neurons or finalizing it as output. A neuron’s activation function dictates whether it should be turned on or off. Nonlinear functions usually transform a neuron’s output to a number between 0 and 1 or between -1 and 1. The purpose of the activation function is to introduce non-linearity into the output of a neuron.
The Heaviside step function is typically only useful within single-layer perceptrons, an early type of neural network that can be used for classification in cases where the input data is linearly separable.
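As an illustration, a Heaviside step activation can be written as below; the threshold of 0 is only a common convention, and any predetermined value works the same way:

import numpy as np

def heaviside(value, threshold=0.0):
    # Binary step: 1 (true) when the input reaches the threshold, otherwise 0 (false)
    return np.where(value >= threshold, 1, 0)

print(heaviside(np.array([-2.0, 0.0, 3.5])))  # [0 1 1]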

Algorithm
The single-layer perceptron does not have any a priori knowledge, so the initial weights are assigned randomly. The SLP sums all the weighted inputs, and if the sum is above the threshold (some predetermined value), the SLP is said to be activated (output = 1).

The input values are presented to the perceptron, and if the predicted output is the same as the desired output, then the performance is considered satisfactory and no changes to the weights are made. However, if the output does not match the desired output, then the weights need to be changed to reduce the error.
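This error-correction step can be sketched as follows; the names w, x, d, y and alpha (weight vector, input, desired output, predicted output, learning rate) are illustrative choices, not the article's:

import numpy as np

def update_weights(w, x, d, y, alpha):
    # Move the weights only when the prediction y differs from the desired output d
    return w + alpha * (d - y) * x

w = np.array([0.2, -0.5])
x = np.array([1.0, 1.0])
print(update_weights(w, x, d=1, y=0, alpha=0.1))  # [ 0.3 -0.4]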

Because the SLP is a linear classifier, if the cases are not linearly separable the learning process will never reach a point where all the cases are classified properly. The most famous example of the perceptron's inability to solve problems with linearly non-separable cases is the XOR problem.
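A small illustration of the contrast (the AND weights below are one hand-picked choice that happens to work; nothing comparable exists for XOR):

import numpy as np

def step(z):
    return 1 if z >= 0 else 0

# AND is linearly separable: a single weight/bias choice classifies all four cases.
w_and, b_and = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, step(np.dot(w_and, x) + b_and))   # -> 0, 0, 0, 1

# XOR targets are (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0.
# No single line w.x + b = 0 can put (0,1) and (1,0) on one side and
# (0,0) and (1,1) on the other, so no perceptron weights solve XOR.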

The complete code for the implementation of a single-layer perceptron is given below −
import numpy as np
import pandas as pd

data = pd.read_csv('iris.csv')
data.columns = ['Sepal_len_cm', 'Sepal_wid_cm', 'Petal_len_cm', 'Petal_wid_cm', 'Type']
# Encode the string class labels as the numeric targets -1, 0 and 1,
# matching the thresholds used below (assumes 'Type' holds three classes)
data['Type'] = data['Type'].astype('category').cat.codes - 1

# Hyperbolic tangent is used as the activation function
def activation_func(value):
    # A sigmoid alternative would be: 1 / (1 + np.exp(-value))
    return (np.exp(value) - np.exp(-value)) / (np.exp(value) + np.exp(-value))
def perceptron_train(in_data, labels, alpha):
    X = np.array(in_data)
    y = np.array(labels)
    weights = np.random.random(X.shape[1])
    original = weights
    bias = np.random.random_sample()  # initialised but not used further in this implementation
    for key in range(X.shape[0]):
        # Weighted sum of the inputs passed through the activation function
        a = activation_func(np.matmul(np.transpose(weights), X[key]))
        yn = 0
        if a >= 0.7:
            yn = 1
        elif a <= -0.7:
            yn = -1
        # Error-correction rule: weights move by alpha * (desired - predicted) * input
        weights = weights + alpha * (y[key] - yn) * X[key]
        print('Iteration ' + str(key) + ': ' + str(weights))
    print('Difference: ' + str(weights - original))
    return weights
# Testing and Score
def perceptron_test(in_data, label_shape, weights):
    X = np.array(in_data)
    y = np.zeros(label_shape)
    for key in range(X.shape[0]):  # iterate over the test samples
        a = activation_func((weights * X[key]).sum())
        y[key] = 0
        if a >= 0.7:
            y[key] = 1
        elif a <= -0.7:
            y[key] = -1
    return y
def score(result, labels):
    difference = result - np.array(labels)
    correct_ctr = 0
    for elem in range(difference.shape[0]):
        if difference[elem] == 0:
            correct_ctr += 1
    accuracy = correct_ctr * 100 / difference.size
    print('Score=' + str(accuracy))
# Main code
# Randomly split the data: roughly 70% for training, 30% for testing
divider = np.random.rand(len(data)) < 0.70
d_train = data[divider]
d_test = data[~divider]

# Dividing d_train into data and labels/targets
d_train_y = d_train['Type']
d_train_X = d_train.drop(['Type'], axis=1)

# Dividing d_test into data and labels/targets
d_test_y = d_test['Type']
d_test_X = d_test.drop(['Type'], axis=1)

# Learning rate
alpha = 0.001

# Train
weights = perceptron_train(d_train_X, d_train_y, alpha)

# Test
result_test = perceptron_test(d_test_X, d_test_y.shape, weights)

# Calculate score
score(result_test, d_test_y)
Output
The above code prints the weight vector after each training iteration, followed by the classification score on the test set; the exact values vary with the random initialization and the random train/test split.
