Master Backpropagation with TensorFlow in Python 3


Are you ready to delve into the fascinating world of neural networks and understand the magic behind their learning process? In this comprehensive guide, we’re going to demystify one of the core concepts of neural networks: backpropagation. But we won’t stop at theory – we’ll implement backpropagation using Python 3 and TensorFlow. By the end of this journey, you’ll not only understand the inner workings of neural networks but also have the practical skills to implement backpropagation and train your models effectively.

Backpropagation Unveiled

Before we jump into the code and start building our backpropagation algorithm, let’s first understand what backpropagation is and why it’s crucial in the realm of deep learning.

Backpropagation is the cornerstone of training neural networks, allowing them to learn from data. It stands for “backward propagation of errors.” In essence, it’s a process that helps neural networks adjust their internal parameters to make more accurate predictions.

Imagine teaching a neural network to recognize handwritten digits. Backpropagation starts with an untrained network, and as it makes predictions, it evaluates the difference between its predictions and the actual values. This difference, or error, is then “propagated” backward through the network.

Backpropagation adjusts the network’s internal parameters to minimize this error, making future predictions more accurate. It’s the key to making neural networks learn, improve, and excel in tasks ranging from image recognition to natural language processing.

Setting Up Your Environment

Before we get into the nitty-gritty of backpropagation, let’s ensure your Python environment is ready for action. If you haven’t already, make sure you have Python 3 installed. Additionally, we’ll be using TensorFlow, so you need to have it installed:

pip install tensorflow

With your environment set up, we can now dive into the exciting world of backpropagation.

Understanding the Basics

To grasp backpropagation, you need to understand a few fundamental concepts:

Neural Networks and Neurons

Think of a neural network as a computational model inspired by the human brain’s structure. It consists of layers of interconnected nodes, or neurons. Each neuron processes input data and produces an output.

Loss Function

The loss function is a crucial part of backpropagation. It quantifies how far the network’s predictions are from the true values. The goal of training is to minimize this function.
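For instance, sparse categorical cross-entropy (the loss we use later in this tutorial) is just the negative log of the probability the network assigned to the true class. A minimal sketch with made-up probabilities:

```python
import numpy as np

# Hypothetical softmax output for one sample over 3 classes,
# plus the integer label of the true class (illustrative values).
probs = np.array([0.1, 0.7, 0.2])
true_class = 1

# Sparse categorical cross-entropy: -log(probability of the true class)
loss = -np.log(probs[true_class])
print(round(loss, 4))  # ≈ 0.3567, i.e. -ln(0.7)
```

The more confident the network is in the correct class, the closer this value gets to zero.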

Optimization Algorithm

The optimization algorithm (e.g., stochastic gradient descent) is responsible for minimizing the loss function. It adjusts the network’s parameters to make predictions as accurate as possible.
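The core update rule behind gradient descent is simple: nudge each parameter a small step against its gradient. A toy sketch on a one-parameter loss (the function and learning rate here are illustrative):

```python
# Gradient descent on a toy loss L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
# The minimum is at w = 3; each step moves w against the gradient.
w = 0.0
learning_rate = 0.1
for _ in range(50):
    grad = 2 * (w - 3)
    w -= learning_rate * grad
print(round(w, 3))  # converges toward 3.0
```

Optimizers like Adam refine this basic rule with per-parameter learning rates and momentum, but the principle is the same.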

Forward Pass

In the forward pass, input data is processed layer by layer through the network to produce an output.

Backward Pass (Backpropagation)

In the backward pass, errors are computed by comparing the network’s output to the expected output. These errors are then propagated backward through the network to update the parameters.
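In TensorFlow, the backward pass is handled by automatic differentiation: `tf.GradientTape` records the forward pass and then computes the gradient of the loss with respect to each parameter. A minimal single-weight sketch (values chosen for illustration):

```python
import tensorflow as tf

# A single weight w, one input x, and one target y_true
w = tf.Variable(2.0)
x, y_true = 3.0, 12.0

with tf.GradientTape() as tape:
    y_pred = w * x                  # forward pass: prediction is 6.0
    loss = (y_pred - y_true) ** 2   # squared error: (6 - 12)^2 = 36

# Backward pass: dloss/dw = 2 * (w*x - y_true) * x = 2 * (6 - 12) * 3
grad = tape.gradient(loss, w)
print(grad.numpy())  # -36.0

# One gradient-descent step: move w against the gradient
w.assign_sub(0.01 * grad)
print(w.numpy())  # ≈ 2.36
```

This is exactly what Keras does for every weight in every layer when you call `model.fit`.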

Building a Backpropagation Model

For this tutorial, we’ll create a simple feedforward neural network and implement backpropagation. We’ll use a well-known dataset, the Iris dataset, for simplicity.

Importing Libraries

Let’s start by importing the necessary libraries:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

Loading and Preprocessing the Data

We’ll use the Iris dataset to demonstrate backpropagation. The data consists of features (sepal length, sepal width, petal length, and petal width) and labels (species of iris flowers).

iris = load_iris()
X = iris.data
y = iris.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Building the Neural Network Model

We’ll create a simple feedforward neural network with one hidden layer.

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_dim=4),  # hidden layer: 4 input features
    tf.keras.layers.Dense(3, activation='softmax')             # output layer: one unit per iris species
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

Training the Model

Now, we’ll train the model on the Iris dataset:

history = model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))

This will train the model using the backpropagation algorithm.

Epoch 1/100
4/4 [==============================] - 0s 37ms/step - loss: 1.0987 - accuracy: 0.2917 - val_loss: 1.0986 - val_accuracy: 0.3667
Epoch 2/100
4/4 [==============================] - 0s 10ms/step - loss: 1.0986 - accuracy: 0.3250 - val_loss: 1.0986 - val_accuracy: 0.3000
Epoch 3/100
4/4 [==============================] - 0s 10ms/step - loss: 1.0986 - accuracy: 0.3417 - val_loss: 1.0986 - val_accuracy: 0.3000
...
Epoch 100/100
4/4 [==============================] - 0s 5ms/step - loss: 1.0984 - accuracy: 0.3417 - val_loss: 1.1001 - val_accuracy: 0.3000
1/1 [==============================] - 0s 16ms/step - loss: 1.1001 - accuracy: 0.3000
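Under the hood, each step of `model.fit` performs exactly what the backward-pass section described: a forward pass, a loss computation, gradients via backpropagation, and a weight update. A hand-rolled sketch of one such training step (the batch data here is made up for illustration):

```python
import numpy as np
import tensorflow as tf

# Same architecture as the tutorial's model, built with an explicit Input layer
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

# One made-up mini-batch: 2 samples, 4 features each, integer class labels
x_batch = np.random.rand(2, 4).astype('float32')
y_batch = np.array([0, 2])

with tf.GradientTape() as tape:
    y_pred = model(x_batch, training=True)  # forward pass
    loss = loss_fn(y_batch, y_pred)         # loss on this batch

# Backward pass: gradients of the loss w.r.t. every kernel and bias
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

`model.fit` simply repeats this loop over every batch in every epoch, while also tracking metrics for you.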

Evaluating the Model

After training, it’s essential to evaluate the model’s performance:

test_loss, test_acc = model.evaluate(X_test, y_test)
print('\nTest accuracy:', test_acc)
Test accuracy: 0.30000001192092896

This gives you an idea of how well your model generalizes to new, unseen data.

Visualizing the Results of Backpropagation with TensorFlow

To gain a deeper understanding of the training process, we can create plots to visualize how the loss and accuracy evolve during training.

plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Accuracy')
plt.plot(history.history['val_accuracy'], label='Val Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Loss')
plt.plot(history.history['val_loss'], label='Val Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')

plt.show()

These plots help you visualize how your model is learning from the training data and how it’s performing on the test data.

Conclusion of Backpropagation

In this extensive tutorial, we’ve covered the basics of backpropagation, a fundamental concept in training neural networks. We implemented backpropagation using Python 3 and TensorFlow, demonstrating the entire process from data preparation to model evaluation. By understanding and mastering backpropagation, you’ll be well-equipped to tackle more complex machine learning tasks and build advanced neural networks.

As you continue your journey to becoming a Python pro, remember that backpropagation is just one piece of the puzzle in the exciting field of deep learning. There’s a world of possibilities waiting for you, from computer vision to natural language processing. Keep coding, keep learning, and keep pushing the boundaries of what’s possible with Python and neural networks.

Also, check out our other playlists: Rasa Chatbot, Internet of Things, Docker, Python Programming, Machine Learning, MQTT, Tech News, ESP-IDF, etc.
Become a member of our social family on YouTube here.
Stay tuned and Happy Learning. ✌🏻😃
Happy coding, and may your journey be filled with discovery and achievement! ❤️🔥
