We have gone through some of the important topics in TensorFlow, and believe me, there are tons of others! But no worries, we'll catch up. I have always believed in project-based learning, so we'll do the same this time: we will build a feed-forward deep neural network that can classify handwritten digits. Sounds interesting, right? Let's get into it.

Open up any text editor or IDE. I personally prefer coding in the **PyCharm IDE**; it's a wonderful piece of software for writing Python scripts.

So, what is the most critical part of any neural network? Data! Neural networks shine when there is lots and lots of data to train on. We will be using the MNIST dataset provided in the tensorflow.org tutorials. We can get the data by simply importing it and loading it into a Python variable.

```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```

In MNIST, every data point has two parts: 1) an image and 2) a label. Every image is 28 x 28 pixels.
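Because we loaded the data with `one_hot=True`, each label comes back as a 10-dimensional vector with a 1 at the digit's index. A small numpy sketch of the idea (this helper is illustrative, not part of the TensorFlow pipeline):

```python
import numpy as np

def one_hot(digit, num_classes=10):
    # Build a vector of zeros with a single 1 at the digit's index
    vec = np.zeros(num_classes)
    vec[digit] = 1.0
    return vec

print(one_hot(3))  # the 1.0 sits at index 3
```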

Let's start building our graph by creating placeholder variables.

```
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32)  # for labels
```

`x` isn't a specific value; it's a placeholder whose value we supply when we ask TensorFlow to run a computation. Here `x` represents an MNIST image, flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, where the `None` dimension lets us feed in any number of images at once.
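Flattening a 28 x 28 image into a 784-dimensional vector can be illustrated with numpy (the image here is random, just to show the shapes):

```python
import numpy as np

image = np.random.rand(28, 28)   # a stand-in for one MNIST image
flattened = image.reshape(-1)    # 28 * 28 = 784 values in a single row
print(flattened.shape)  # (784,)
```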

Now we need variables for the weights and biases. We represent these with TensorFlow Variables, since Variables can be modified by the computations in the graph.

```
W = tf.Variable(tf.random_normal([784, 10]))
b = tf.Variable(tf.random_normal([10]))
```

Notice that `W` and `b` are initialized with random values; the exact starting values don't matter much. `W` is a tensor of shape [784, 10] because we want to generate a score for each of the 10 classes, i.e. the digits 0–9. As we are building a deep net, we also need one or more hidden layers.
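The shapes line up so that a batch of flattened images multiplied by `W` yields one score per class per image. A quick numpy sketch of the shape arithmetic (random values, purely illustrative):

```python
import numpy as np

batch = np.random.rand(32, 784)   # 32 flattened images
W = np.random.rand(784, 10)
b = np.random.rand(10)

logits = batch @ W + b            # broadcasting adds b to every row
print(logits.shape)  # (32, 10): one score per class, per image
```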

```
n_nodes_hl1 = 500

hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                  'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
```

**n_nodes_hl1** is declared as 500. It is the number of nodes in this single hidden layer (so the layer alone holds 784 × 500 weights plus 500 biases, i.e. 392,500 parameters). We can tweak this number to check the change in accuracy. Moving on..

Now we can complete the neural network model by implementing the layers.

```
l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
l1 = tf.nn.relu(l1)  # activation function
```
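The hidden layer then feeds an output layer that produces the 10-class scores used as `prediction` in the cost function below. Here is the same computation sketched in plain numpy, with hypothetical small shapes and an illustrative output layer added:

```python
import numpy as np

n_nodes_hl1 = 500
x = np.random.rand(32, 784)                  # a batch of flattened images
W1 = np.random.randn(784, n_nodes_hl1) * 0.01
b1 = np.zeros(n_nodes_hl1)
W_out = np.random.randn(n_nodes_hl1, 10) * 0.01
b_out = np.zeros(10)

l1 = np.maximum(x @ W1 + b1, 0)              # matmul + bias, then ReLU
prediction = l1 @ W_out + b_out              # raw logits, one per class
print(prediction.shape)  # (32, 10)
```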

There are a few more steps before we start training our model: actually defining the classification loss and employing an optimizer for back-propagation. We are using the `softmax_cross_entropy_with_logits` function.

```
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
```
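`softmax_cross_entropy_with_logits` first turns the raw scores into probabilities with softmax, then measures how far they are from the one-hot label. A numpy sketch of the same formula for a single made-up example:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])     # raw scores for 3 classes
label = np.array([1.0, 0.0, 0.0])      # one-hot: true class is index 0

exp = np.exp(logits - logits.max())    # subtract max for numerical stability
probs = exp / exp.sum()                # softmax: probabilities summing to 1
loss = -np.sum(label * np.log(probs))  # cross-entropy against the label
print(round(loss, 4))
```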

```
# comparing the difference between predicted vs. original
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cost)
```

There are many kinds of optimizers available in TensorFlow, each one optimal for some particular use-case. The **0.5** mentioned here is the learning rate of the neural network.
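Each gradient-descent update moves every weight a step of size learning-rate against the gradient of the cost. A toy numpy-free sketch minimizing f(w) = w² (the rate here is 0.1 rather than the article's 0.5, just so the decay is visible over several steps):

```python
w = 4.0
learning_rate = 0.1

for _ in range(10):
    grad = 2 * w                   # derivative of w**2 at the current w
    w = w - learning_rate * grad   # the update GradientDescentOptimizer applies
print(w)  # w shrinks toward the minimum at 0
```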

Let's train our neural network. Each cycle of feed-forward and back-propagation over the training data is called an **epoch**. I have tried setting the number of epochs to 5 and also 10. We have to start a session and initialize all variables, then run both the **optimizer and cost** once per batch for each epoch; this is called the training step.

```
_, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
```

We feed the values of both x and y in batches. `c` is the variable that holds the loss value in each epoch.
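"Placing the values in batches" means slicing the training set into fixed-size chunks and running one optimizer step per chunk. A numpy sketch of the slicing (the sizes are illustrative; in the real script `mnist.train.next_batch` does this for you):

```python
import numpy as np

images = np.random.rand(600, 784)   # a small stand-in for the training images
labels = np.random.rand(600, 10)    # matching one-hot-style labels
batch_size = 100

batches = [
    (images[i:i + batch_size], labels[i:i + batch_size])
    for i in range(0, len(images), batch_size)
]
print(len(batches))  # 6 batches of 100 examples each
```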

It's time to test the neural network and check the accuracy of our model.

```
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
```
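`tf.argmax` picks the highest-scoring class for each row; comparing predicted and true indices and averaging the matches gives the accuracy. The numpy equivalent on tiny made-up data:

```python
import numpy as np

prediction = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # model scores
y = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])           # one-hot truth

correct = np.argmax(prediction, 1) == np.argmax(y, 1)  # per-example hit/miss
accuracy = correct.astype(float).mean()                # fraction correct
print(accuracy)  # 2 of 3 predictions match the labels
```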

The accuracy I could achieve was 97.999%.

This is an implementation of a simple deep neural network. The code is inspired by pythonprogramming.net and tensorflow.org.

Complete source code: https://github.com/makaravind/ImageClassifier

We shall be using this model for a couple more use-cases before we move on to another model. I'll catch you in the next post!