Testing Different Neural Network Topologies

Chris Achard
Published 6 years ago
Updated 5 years ago

There are numerous ways to set up a neural network, and it can be difficult to figure out which combination of settings and architectures will get the best results. We'll investigate a few typical network topologies, including adding more "depth" and "width", and evaluate which topology works best for our data set. For example, you may want a very deep network for increased accuracy on very complex problems, but training will take longer. Or, you may add width to your network to increase accuracy, but this risks overfitting.

Instructor: [00:00] This neural network has three hidden layers and one output layer, so it has a depth of four. There are many other ways we could configure this network.
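A depth-four network like the one described might be sketched in Keras as follows. The layer sizes, input shape, and loss function here are illustrative assumptions, not the lesson's exact values.

```python
# Hypothetical sketch of a depth-4 network: three hidden layers plus
# an output layer. Layer sizes and the input shape are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),  # hidden layer 1
    layers.Dense(16, activation="relu"),  # hidden layer 2
    layers.Dense(16, activation="relu"),  # hidden layer 3
    layers.Dense(1),                      # output layer
])
model.compile(optimizer="adam", loss="mse")
```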

[00:09] First, we'll run the network as is to check the training and validation losses, so that we can compare those losses to other networks that we can try.
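Capturing the baseline training and validation losses could look like this, sketched with random placeholder data standing in for the lesson's data set:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random placeholder data standing in for the lesson's data set.
X = np.random.rand(100, 4)
y = np.random.rand(100, 1)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# validation_split holds out 20% of the data so val_loss is reported
# alongside the training loss after each epoch.
history = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print("train loss:", history.history["loss"][-1])
print("val loss:  ", history.history["val_loss"][-1])
```

The final `loss` and `val_loss` values are the numbers to compare against the other topologies tried below.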

[00:18] To make this network deeper just means to add more hidden layers. Let's copy this layer two more times and increase the middle dense layer's number of nodes to 32. Then we can run that network to see what, if any, effect that had on the training and validation loss.
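The deeper variant, with two extra hidden layers and the middle layer widened to 32 nodes, might look like this (sizes are again assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

# The same network made deeper: two extra copied hidden layers, with
# the middle layer's node count increased to 32.
deeper = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(32, activation="relu"),  # widened middle layer
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
deeper.compile(optimizer="adam", loss="mse")
```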

[00:36] We could even make the network deeper if we wanted to. As you make the network deeper, you may also want to run more epochs because the more complex network will now take longer to train properly. When we run that, we can see that the combination of a deep network and a long training time can be very effective.
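Pairing the deeper network with a longer training run is just a matter of raising the `epochs` argument. A sketch with placeholder data, where 50 epochs is an arbitrary stand-in for "more":

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data in place of the lesson's data set.
X = np.random.rand(100, 4)
y = np.random.rand(100, 1)

deeper = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
deeper.compile(optimizer="adam", loss="mse")

# A deeper network generally needs more epochs to train properly;
# 50 here is an arbitrary illustrative choice.
history = deeper.fit(X, y, epochs=50, validation_split=0.2, verbose=0)
```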

[01:00] However, remember that we have a small data set, which may be skewing our results somewhat. It's important to test on a small data set, but also to retest as you include more and more of your full data set.

[01:12] Instead of a deep network, we could also try to make a very wide but shallow network which means removing many of the hidden layers, but then drastically increasing the size of one or more of the layers.
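A wide-but-shallow alternative collapses the hidden layers into one large layer. The 256-node width here is an illustrative assumption:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Wide but shallow: one large hidden layer in place of several
# small ones. 256 nodes is an illustrative choice.
wide = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1),
])
wide.compile(optimizer="adam", loss="mse")
```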

[01:22] When we run that, we can see this network is also effective, at least on our small data set. Again, it's important to test different strategies on your own data, because every data set is different.

[01:36] Once you've finished training and validation and have a network you're happy with, you can add back in your test data and evaluation step, in order to test the network on data that it has not yet seen and that you haven't been using for validation.
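The final evaluation step might be sketched like this, with random placeholder data split into training and held-out test sets:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data split into train and held-out test sets.
X = np.random.rand(150, 4)
y = np.random.rand(150, 1)
X_train, X_test = X[:120], X[120:]
y_train, y_test = y[:120], y[120:]

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, validation_split=0.2, verbose=0)

# Evaluate on data the model has never seen and that was never used
# for validation, for a less biased performance estimate.
test_loss = model.evaluate(X_test, y_test, verbose=0)
print("test loss:", test_loss)
```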

[01:50] This will help give you a final, less biased view on how your network is performing.
