From the course: Deep Learning: Image Recognition

Coding a neural network with Keras - Python Tutorial


- [Instructor] In this course, we'll be using a software framework called Keras to code our neural networks. Keras is a high-level library for building neural networks in Python with only a few lines of code. Let's take a look at an example neural network. This neural network has two input nodes, then a layer with three nodes, a second layer with three nodes, and finally an output layer with one node. Also notice that each node is connected to every node in the following layer. These are called densely connected layers. Densely connected layers are the most basic kind of layer in a neural network.

Let's see how this example would look in code. First, we'll create a new neural network model with Keras. It's called a sequential model because we're going to define each layer in order, sequentially, or one layer at a time. Next, we'll add a layer with three nodes. Notice that we're also specifying that the network has two input nodes by passing in the input_dim parameter, which specifies the number of nodes in the input layer. Next, we add the second layer with three nodes. And finally, we add the final layer with one node to act as the output layer. That's all that's required to define a basic neural network in Keras.

Keras also lets us customize how each layer works. One of the most important things to configure is the activation function. Before values flow from the nodes in one layer to the next, they pass through an activation function. Activation functions decide which inputs from the previous layer are important enough to feed to the next layer. Keras lets us choose which activation function is used for each layer by passing in the name of the activation function we want to use. Here, when we call model.add, I've asked it to use a rectified linear unit, or relu, activation function. Keras supports all the standard activation functions in use today. There are also less commonly needed things we can customize for each layer beyond the activation function, but one of the guiding principles of Keras is that it will do the best thing it can if we don't specify extra parameters. In other words, the default settings are modeled after what are considered best practices. So most of the time, just choosing the number of nodes in a layer and choosing an activation function is good enough.

The final step of defining a neural network is to compile it by calling model.compile. This tells Keras that we're done defining the model and that we actually want to build it in memory. When you compile a model, you have to pass in the optimizer algorithm and the loss function you want to use. The optimizer algorithm is used to train the neural network. The loss function is how the training process measures how right or how wrong your neural network's predictions are.

This is a complete neural network that we can train to solve very simple classification problems. But to recognize objects in images, we need to create much larger neural networks with much larger input layers and more complex layer types. If you're feeling a little overwhelmed, don't worry. We'll step through each line of code as we build image recognition neural networks. But if you're totally new to neural networks and Keras, I encourage you to check out my other course in the library called Building Deep Learning Applications with Keras 2.0. It goes into more depth about coding in Keras, and it covers activation functions and optimizers.
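To make the walkthrough concrete, here is a minimal sketch of the example network described above, assuming the standalone Keras 2.x API. The layer sizes, the input_dim parameter, and the relu activation come straight from the transcript; the output activation ('sigmoid'), the optimizer ('adam'), and the loss function ('binary_crossentropy') are placeholder choices for a simple binary classification problem, not values specified in the course.

```python
# Minimal sketch of the example network, assuming the standalone Keras 2.x API.
from keras.models import Sequential
from keras.layers import Dense

# A sequential model: layers are defined in order, one at a time.
model = Sequential()

# First layer: 3 nodes. input_dim=2 tells Keras the input layer has two nodes.
# 'relu' selects the rectified linear unit activation function.
model.add(Dense(3, input_dim=2, activation='relu'))

# Second layer: 3 nodes.
model.add(Dense(3, activation='relu'))

# Output layer: one node. The sigmoid activation is an assumption here,
# chosen for a simple binary classification problem.
model.add(Dense(1, activation='sigmoid'))

# Compiling tells Keras to build the model in memory. The optimizer and loss
# function below are placeholder choices, not values specified in the course.
model.compile(optimizer='adam', loss='binary_crossentropy')

# Print the layer structure to confirm the model was built as described.
model.summary()
```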
