Building Your First Deep Learning Model: A Step-by-Step Guide
by Ravjot Singh, June 2024

Deep learning is a subset of machine learning, which itself is a subset of artificial intelligence (AI). Deep learning models are inspired by the structure and function of the human brain and are composed of layers of artificial neurons. These models are capable of learning complex patterns in data through a process called training, in which the model is iteratively adjusted to minimize the errors in its predictions. In this blog post, we will walk through the process of building a simple artificial neural network (ANN) to classify handwritten digits using the MNIST dataset.

The MNIST dataset (Modified National Institute of Standards and Technology dataset) is one of the most famous datasets in machine learning and computer vision. It consists of 70,000 grayscale images of handwritten digits from 0 to 9, each of size 28 × 28 pixels. The dataset is divided into a training set of 60,000 images and a test set of 10,000 images, and each image is labeled with the digit it represents. We'll use the MNIST dataset provided by the Keras library, which makes it easy to download and use in our model.

Before we start building our model, we need to import the necessary libraries. These include libraries for data manipulation, visualization, and building our deep learning model.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras

numpy and pandas are used for numerical computation and data manipulation. matplotlib and seaborn are used for data visualization. tensorflow and keras are used for building and training the deep learning model.

The MNIST dataset is available directly in the Keras library, making it easy to load and use.

(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

This line of code downloads the MNIST dataset and splits it into training and test sets. X_train and y_train are the training images and their corresponding labels; X_test and y_test are the test images and their corresponding labels.

Let's take a look at the shape of our training and test datasets to understand their structure.

print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)

X_train.shape outputs (60000, 28, 28), indicating there are 60,000 training images, each of size 28 × 28 pixels. X_test.shape outputs (10000, 28, 28), indicating there are 10,000 test images of the same size. y_train.shape outputs (60000,), indicating there are 60,000 training labels, and y_test.shape outputs (10000,), indicating there are 10,000 test labels.

To get a better feel for the data, let's visualize one of the training images and its corresponding label.

plt.imshow(X_train[2], cmap='gray')
plt.show()
print(y_train[2])

plt.imshow(X_train[2], cmap='gray') displays the third image in the training set in grayscale, plt.show() renders it, and print(y_train[2]) outputs the label for that image, i.e. the digit it represents.

Pixel values in the images range from 0 to 255. To improve the performance of our neural network, we rescale these values to the range [0, 1].

X_train = X_train / 255
X_test = X_test / 255

This normalization helps the neural network learn more efficiently by ensuring that the input values lie in a similar range.

Our neural network expects the input to be a flat vector rather than a 2D image, so we reshape our training and test datasets accordingly.

X_train = X_train.reshape(len(X_train), 28 * 28)
X_test = X_test.reshape(len(X_test), 28 * 28)

X_train.reshape(len(X_train), 28 * 28) reshapes the training set from (60000, 28, 28) to (60000, 784), flattening each 28 × 28 image into a 784-dimensional vector. Similarly, X_test.reshape(len(X_test), 28 * 28) reshapes the test set from (10000, 28, 28) to (10000, 784).
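If you want to confirm that the preprocessing behaved as expected, a quick sanity check like the one below can help. This is a small illustrative sketch rather than part of the original walkthrough; it only assumes the X_train, X_test, y_train, and y_test arrays defined above.

import numpy as np

# Shapes after flattening should be (60000, 784) and (10000, 784)
print(X_train.shape, X_test.shape)

# Pixel values should now lie in [0, 1] after dividing by 255
print(X_train.min(), X_train.max())

# Labels should be the ten digits 0-9
print(np.unique(y_train))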
We'll build a simple neural network with one input layer and one output layer. The input layer will have 784 neurons (one for each pixel), and the output layer will have 10 neurons (one for each digit).

ANN1 = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation='sigmoid')
])

keras.Sequential() creates a sequential model, which is a linear stack of layers. keras.layers.Dense(10, input_shape=(784,), activation='sigmoid') adds a dense (fully connected) layer with 10 neurons, an input shape of 784, and a sigmoid activation function.

Next, we compile our model by specifying the optimizer, loss function, and metrics.

ANN1.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

optimizer='adam' specifies the Adam optimizer, an adaptive learning rate optimization algorithm. loss='sparse_categorical_crossentropy' specifies the loss function, which is suitable for multi-class classification problems with integer labels. metrics=['accuracy'] specifies that we want to track accuracy during training.

We then train the model on the training data.

ANN1.fit(X_train, y_train, epochs=5)

ANN1.fit(X_train, y_train, epochs=5) trains the model for 5 epochs. An epoch is one complete pass through the training data.

After training the model, we evaluate its performance on the test data.

ANN1.evaluate(X_test, y_test)

ANN1.evaluate(X_test, y_test) evaluates the model on the test data and returns the loss value and the metrics specified during compilation.

We can use our trained model to make predictions on the test data.

y_predicted = ANN1.predict(X_test)

ANN1.predict(X_test) generates predictions for the test images. To see the predicted label for one of the test images (here, the image at index 10):

print(np.argmax(y_predicted[10]))
print(y_test[10])

np.argmax(y_predicted[10]) returns the index of the highest value in the prediction vector, which corresponds to the predicted digit, and print(y_test[10]) prints the actual label of that test image for comparison.

To improve our model, we add a hidden layer with 150 neurons and use the ReLU activation function, which frequently performs better in deep learning models.

ANN2 = keras.Sequential([
    keras.layers.Dense(150, input_shape=(784,), activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])

keras.layers.Dense(150, input_shape=(784,), activation='relu') adds a dense hidden layer with 150 neurons and a ReLU activation function.

We compile and train the improved model in the same way.

ANN2.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
ANN2.fit(X_train, y_train, epochs=5)

We evaluate the performance of our improved model on the test data.

ANN2.evaluate(X_test, y_test)
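Before summarizing the errors in a confusion matrix, it can be instructive to look at an individual mistake. The snippet below is a small illustrative sketch rather than part of the original post; it assumes the trained ANN2 model and the preprocessed arrays from above, and that at least one test image is misclassified.

import numpy as np
import matplotlib.pyplot as plt

# Predict a digit for every test image and find the ones the model gets wrong
pred_labels = np.argmax(ANN2.predict(X_test), axis=1)
wrong = np.where(pred_labels != y_test)[0]
print(f"{len(wrong)} of {len(y_test)} test images are misclassified")

# Show the first misclassified image (reshaped back to 28 x 28 for display)
idx = wrong[0]
plt.imshow(X_test[idx].reshape(28, 28), cmap='gray')
plt.title(f"Predicted {pred_labels[idx]}, actual {y_test[idx]}")
plt.show()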
To get a better understanding of how our model performs across all ten digits, we can create a confusion matrix.

y_predicted2 = ANN2.predict(X_test)
y_predicted_labels2 = [np.argmax(i) for i in y_predicted2]

y_predicted2 = ANN2.predict(X_test) generates predictions for the test images, and y_predicted_labels2 = [np.argmax(i) for i in y_predicted2] converts each prediction vector to a label index.

We then create the confusion matrix and visualize it.

cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels2)
plt.figure(figsize=(10, 7))
sns.heatmap(cm, annot=True, fmt='d')
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.show()

tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels2) generates the confusion matrix, and sns.heatmap(cm, annot=True, fmt='d') visualizes it with annotations.

[Figure: confusion matrix rendered as a Seaborn heatmap]

In this blog post, we covered the basics of deep learning and walked through the steps of building, training, and evaluating a simple ANN model using the MNIST dataset. We also improved the model by adding a hidden layer and using a different activation function. Deep learning models, though seemingly complex, can be built and understood step by step, enabling us to tackle a wide range of machine learning problems.
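As a short postscript, the confusion matrix computed above can also be used to derive per-class accuracy, which shows which digits the model finds hardest. This is an illustrative sketch rather than part of the original walkthrough; it assumes the cm tensor from the previous step.

# Convert the TensorFlow tensor to a NumPy array
cm_np = cm.numpy()

# Correct predictions sit on the diagonal; each row sums to the number of true examples of that digit
per_class_accuracy = cm_np.diagonal() / cm_np.sum(axis=1)

for digit, acc in enumerate(per_class_accuracy):
    print(f"Digit {digit}: {acc:.3f}")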
