Train a Sequential Keras Model with Sample Data

Chris Achard

We’ll set up some training data for a fully connected neural network, and train the model on that data. Then, we’ll look at how the loss decreases as the number of epochs increases.

Instructor: [00:02] Import numpy as np. Then we can define our array of inputs, which we will call Xtrain, and outputs, which we will call Ytrain. They will both be NumPy arrays.

[00:13] The content should be the examples of our inputs and outputs that the network will use to learn its weights and biases. Our model is defined to take four numbers as inputs. We'll define several input examples, which will each be an array containing four numbers.

[00:30] We want the network to learn how to take the mean of the four inputs. That means our output Y values will be the mean of each of the rows from the X inputs. Notice that the Y values are all arrays, even though they each contain only one element.

[00:44] That's because the network expects the inputs and outputs to be arrays, no matter how many elements they contain. We have a set of inputs, and each input has a matching output value, which in this case is the mean of all the inputs.
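A minimal sketch of that training data is shown below. The transcript doesn't list the exact numbers typed in the video, so these values are illustrative; what matters is the shape: six rows of four inputs (written here as `x_train`), and one array-wrapped mean per row (`y_train`).

```python
import numpy as np

# Six example inputs, each an array of four numbers.
# These particular values are illustrative, not the ones from the video.
x_train = np.array([
    [1, 2, 3, 4],
    [4, 6, 1, 2],
    [10, 9, 10, 11],
    [10, 14, 8, 3],
    [101, 52, 36, 88],
    [99, 100, 101, 102],
])

# Each output is the mean of the matching input row, wrapped in an array
# because the network expects array-shaped outputs.
y_train = np.array([[np.mean(row)] for row in x_train])
```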

[00:58] To train the network on our sample data, we'll call the fit method of the model. The only required arguments to train the model are the input X values and the output Y values, but there are several optional parameters that we can specify.

[01:12] First, because we only have six input data points, we should pick a batch size that is smaller than that number. We'll define a batch size of 2. Normally, you would have a lot more data, so you could set your batch size to a more common value such as 32, 64, or 256.

[01:28] Next, we'll set the number of epochs to 100. An epoch is one complete pass through the entire data set, so this controls how many times the network will loop through the data. The more epochs you set here, the better the network accuracy will be, but the longer it will take to train.

[01:42] Finally, we'll set verbose to 1, which will allow us to see the loss at every epoch. Then in the command line, we can run our file by typing python neuralnet.py.
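Putting those pieces together, the fit call might look like the sketch below. The model itself was built in the previous lesson, so the architecture here (two Dense layers compiled with mean squared error) is an assumption; depending on your setup, the imports may come from `tensorflow.keras` instead of `keras`.

```python
from keras.models import Sequential
from keras.layers import Dense

# Assumed fully connected model from the previous lesson: four inputs, one output,
# compiled with mean squared error so the loss matches what the video reports.
model = Sequential([
    Dense(8, activation='relu', input_shape=(4,)),
    Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error')

# batch_size of 2 because there are only six examples, 100 epochs,
# and verbose=1 to print the loss at every epoch.
model.fit(x_train, y_train, batch_size=2, epochs=100, verbose=1)
```

Saving this as neuralnet.py and running python neuralnet.py produces the epoch-by-epoch loss output described next.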

[01:54] The neural net has trained. If we scroll to the top of the output, we can see the training start with the first epoch. The loss here is what we're looking to reduce. It starts very high at the beginning because the network is initialized with random weights.

[02:09] It's just totally guessing what the answer should be. With every training step, we want to see the loss go down further and further, until at last we see the loss start to flatten out.

[02:20] If we keep training with more epochs, we should start to see this number go down even further. Already, after only 100 epochs, we have a fairly low loss, which represents the mean squared error between the actual Y values and the predicted values from our network.
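The video reads the loss off the console output, but fit also returns a Keras History object, so one way to inspect those numbers programmatically is a sketch like this (an addition for illustration, not something done in the lesson):

```python
# fit returns a History object whose history dict records the loss per epoch.
history = model.fit(x_train, y_train, batch_size=2, epochs=100, verbose=1)

print(history.history['loss'][0])   # loss after the first epoch: high, from random weights
print(history.history['loss'][-1])  # loss after the last epoch: much lower
```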
