What is batch size in a neural network?

  • I'm using the Python Keras package for neural networks. This is the link. Is batch_size equal to the number of test samples? From Wikipedia we have this information:

    However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.

    Is the above information describing test data? Is this the same as batch_size in Keras (number of samples per gradient update)?
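
    (For context, the "sum-gradient" in that passage is the gradient of the training loss over all $n$ training samples, say $Q(w) = \frac{1}{n}\sum_i Q_i(w)$; stochastic/mini-batch methods approximate it with a small random subset $B$, where $|B|$ is the batch size:)

    $$\nabla Q(w) \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla Q_i(w) \;\approx\; \frac{1}{|B|}\sum_{i \in B} \nabla Q_i(w)$$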

    It's worth watching the https://class.coursera.org/ml-005/lecture/preview course, especially weeks 4-6 and 10. Wikipedia may not be such a valuable resource for learning about neural networks.

  • itdxer (correct answer, 5 years ago)

    The batch size defines the number of samples that will be propagated through the network.

    For instance, let's say you have 1050 training samples and you want to set up a batch_size equal to 100. The algorithm takes the first 100 samples (1st to 100th) from the training dataset and trains the network. Next, it takes the second 100 samples (101st to 200th) and trains the network again. We can keep doing this procedure until we have propagated all samples through the network. A problem might happen with the last set of samples: in our example we've used 1050, which is not divisible by 100 without a remainder. The simplest solution is just to take the final 50 samples and train the network on them. The sketch below illustrates the idea.
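
    A minimal sketch of that batching loop in plain NumPy (the data shapes here are made up; the commented-out line stands in for a Keras-style `train_on_batch` update):

    ```python
    import numpy as np

    n_samples, batch_size = 1050, 100
    x = np.random.rand(n_samples, 20)             # 1050 samples, 20 made-up features
    y = np.random.randint(0, 2, size=n_samples)   # binary labels

    n_updates = 0
    for start in range(0, n_samples, batch_size):
        end = min(start + batch_size, n_samples)
        x_batch, y_batch = x[start:end], y[start:end]   # last batch holds only 50 samples
        # model.train_on_batch(x_batch, y_batch)        # one weight update per batch
        n_updates += 1

    print(n_updates)   # 11: ten full batches of 100 plus a final batch of 50
    ```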

    Advantages of using a batch size < number of all samples:

    • It requires less memory. Since you train the network using fewer samples, the overall training procedure requires less memory. That's especially important if you are not able to fit the whole dataset in your machine's memory.

    • Typically networks train faster with mini-batches. That's because we update the weights after each propagation. In our example we've propagated 11 batches (10 of them had 100 samples and 1 had 50 samples), and after each of them we've updated the network's parameters. If we used all samples during propagation, we would make only 1 update of the network's parameters per epoch.

    Disadvantages of using a batch size < number of all samples:

    • The smaller the batch, the less accurate the estimate of the gradient will be. In the figure below, you can see that the direction of the mini-batch gradient (green color) fluctuates much more in comparison to the direction of the full-batch gradient (blue color).

    [Figure: gradient directions for different batch setups]

    Stochastic gradient descent is just a mini-batch with batch_size equal to 1. In that case, the gradient changes its direction even more often than a mini-batch gradient.
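
    In Keras this is the `batch_size` argument of `fit`. A minimal sketch, assuming a made-up two-layer classifier and random data:

    ```python
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # made-up data: 1050 samples, 20 features, binary labels
    x_train = np.random.rand(1050, 20)
    y_train = np.random.randint(0, 2, size=1050)

    model = Sequential([
        Dense(32, activation='relu', input_shape=(20,)),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='sgd', loss='binary_crossentropy')

    # 11 weight updates per epoch: ten batches of 100 plus one batch of 50
    model.fit(x_train, y_train, epochs=5, batch_size=100)
    ```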

    Thank you for the answer. Do you work with `Keras`? Is there any way to set test data in this package?

    No, I haven't. This is a popular technique in neural networks, and you can see this terminology in different libraries, books, and articles. Do you want to check the test-data error in every epoch, or just verify the model after training?

    Yes, that's true. We have a similar structure in `MATLAB`, but I found only train and validation datasets here. I think in this package the validation dataset is the same as the test data, but there isn't early stopping, so we don't have any real validation data.
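
    For what it's worth, both options are available in Keras; a rough sketch, reusing the toy `model` and training data from the sketch in the answer and adding placeholder held-out splits:

    ```python
    import numpy as np

    # placeholder validation and test splits (same made-up shapes as before)
    x_val,  y_val  = np.random.rand(200, 20), np.random.randint(0, 2, size=200)
    x_test, y_test = np.random.rand(200, 20), np.random.randint(0, 2, size=200)

    # option 1: report the held-out error at the end of every epoch
    model.fit(x_train, y_train,
              epochs=5, batch_size=100,
              validation_data=(x_val, y_val))

    # option 2: evaluate a separate test set once, after training
    test_loss = model.evaluate(x_test, y_test, batch_size=100)
    print(test_loss)
    ```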

    The network also converges faster, as the number of updates is considerably higher. Setting the mini-batch size is kind of an art: too small and you risk making your learning too stochastic (faster, but it will converge to unreliable models); too big and it won't fit into memory and will still take ages.

    Does this mean that `batch_size=` is considered online learning, or rather `batch_size=1`? And does all of this remain true for RNNs as well? When using `batch_size` in RNNs, is the batch considered a sort of _virtual timestep_, in that all the instances in that batch will be computed as if they occurred at once?

    Typically, when people say online learning they mean `batch_size=1`. The idea behind online learning is that you update your model as soon as you see the example. With a larger batch size you first look through multiple samples before doing an update. In RNNs the size of the batch can have different meanings. Usually, it's common to split the training sequence into windows of a fixed size (like 10 words). In this case, including 100 of these windows during training means that you have `batch_size=100`.
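
    A small sketch of that windowing idea with made-up token IDs; the resulting batch is a `(batch_size, window_length)` array of the kind an RNN would consume:

    ```python
    import numpy as np

    tokens = np.arange(2000)    # stand-in for a long training sequence of word IDs
    window_length = 10          # fixed-size windows, e.g. 10 words each

    # slice the sequence into non-overlapping windows of 10 tokens
    windows = np.array([tokens[i:i + window_length]
                        for i in range(0, len(tokens) - window_length + 1, window_length)])

    batch = windows[:100]       # 100 windows per update -> batch_size=100
    print(batch.shape)          # (100, 10)
    ```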

    @itdxer: "A problem might happen with the last set of samples." What exactly is the problem? So, the last batch carries 50 samples but is designed to carry 100. I don't see a problem here, besides the small nuisance of a half-wasted batch in the last step. What am I missing?

    @Oleg Melnikov, if your last batch has a significantly smaller size (let's say 1 instead of 50), then the estimate of the gradient would be less accurate and it can mess up your weights a bit. In the image above, imagine that you make 10 updates with a mini-batch of 100 (green lines) and one with a mini-batch of 1 (red line). This means that in the next epoch, the first few iterations can start by correcting the last mini-batch-of-1 update from the previous epoch.
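
    A toy illustration of that effect, using a made-up linear-regression loss and measuring how far mini-batch gradients of size 1 and size 100 deviate (in degrees) from the full-batch direction:

    ```python
    import numpy as np

    rng = np.random.RandomState(0)
    n, d = 1050, 5
    X = rng.randn(n, d)
    y = X @ rng.randn(d) + 0.1 * rng.randn(n)   # noisy linear data (made up)
    w = np.zeros(d)                             # current weights

    def grad(idx):
        """Mean squared-error gradient over the samples in idx."""
        Xb, yb = X[idx], y[idx]
        return 2 * Xb.T @ (Xb @ w - yb) / len(idx)

    full = grad(np.arange(n))                   # full-batch gradient

    def mean_angle_to_full(batch_size, trials=500):
        angles = []
        for _ in range(trials):
            idx = rng.choice(n, size=batch_size, replace=False)
            g = grad(idx)
            cos = g @ full / (np.linalg.norm(g) * np.linalg.norm(full))
            angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
        return np.mean(angles)

    print(mean_angle_to_full(1))     # large angle: the direction fluctuates a lot
    print(mean_angle_to_full(100))   # much closer to the full-batch direction
    ```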

    @itdxer: Why would the gradient be less accurate? It seems you may be assuming that in the implementations of Keras and TF the last batch would be padded with some noise that would erode the gradient. Is that so? Anyhow, something to ponder upon ;)

    @OlegMelnikov The MIT deep learning book has a good explanation related to this problem (section 8.1.3): http://www.deeplearningbook.org/contents/optimization.html

    Sounds like this answer is incorrect or confusing. From what I know, the batch size is the number of items from the dataset it takes to trigger a weight adjustment. So if you use batch size 1, you update the weights after every sample. If you use batch size 10, you calculate the average error and then update the weights every 10 samples.

    Batch is commonly used as terminology for a number of training samples, but training isn't required in order to call it a batch. If you have a database with 100M entities that you want to classify, you will still have to split it into batches and do your prediction per batch (even if you want to distribute it across many machines). In fact, many libraries use batch-size terminology for these cases (you can check the Keras docs). With a batch size of 10 you propagate all 10 examples at the same time, and the gradient is calculated from the average error over them, since that's more efficient.
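
    For example, a rough sketch of prediction in batches, reusing the toy Keras `model` from the sketch in the answer (the dataset here is just a stand-in for something much larger):

    ```python
    import numpy as np

    x_big = np.random.rand(100_000, 20)   # stand-in for the huge dataset to classify

    # Keras splits the input into batches internally and predicts batch by batch
    predictions = model.predict(x_big, batch_size=1000)

    # equivalent manual loop over batches
    preds = [model.predict_on_batch(x_big[i:i + 1000])
             for i in range(0, len(x_big), 1000)]
    predictions = np.concatenate(preds)
    ```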

    Yet another advantage of mini-batch gradient descent is that it can jump out of local minima if the cost function is not convex. So the disadvantage mentioned in the answer may actually be an advantage in these scenarios.

Licensed under CC-BY-SA with attribution

