Does this look familiar?
If so, I’m sure you remember the special pain of reading the table over and over again, committing all of the combinations to memory so you could pass the test that was scheduled for the next day. And pass you did … because young children are really good at memorization. In fact, I’m sure you got 100% on your first multiplication test.
But did you really understand what multiplication meant? If the test had consisted of problems that were NOT in the table you memorized, would you still have scored 100%? Likely not, because children who memorize multiplication tables haven't really learned multiplication; all they have done is implement a lookup table in their heads. And the problem with lookup tables is that an input has to match a stored entry exactly for the table to return a result. In other words, lookup tables (and children who memorize) are very accurate on the cases they have seen, but they don't generalize at all.
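To make the analogy concrete, here is a toy Python sketch (our own illustration, not anyone's real implementation) of memorization as a lookup table: it is perfectly accurate on the exact problems it has stored and has nothing to say about any other problem.

```python
# Memorization as a lookup table: the keys are the exact "training" inputs.
times_table = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def memorized_multiply(a, b):
    # Perfect on inputs it has seen; no answer for anything else.
    return times_table.get((a, b))

print(memorized_multiply(7, 8))   # 56   -- exact match, correct answer
print(memorized_multiply(13, 4))  # None -- just outside the table, no answer
```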
So what is the relationship between multiplication tables and supervised machine learning? Well, to answer that question, we first have to ask ourselves:
What is the goal of machine learning?
Here at Brainome, we believe that the goal of machine learning is to produce predictive models that are both accurate AND general. We want models that understand that multiplying a by b means adding a to itself b times, and that can apply this rule to solve ANY multiplication problem.
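By way of illustration, here is a minimal sketch (assuming non-negative integer operands) of a "model" that has learned the rule rather than the table. Unlike the lookup table above, it answers problems far outside any 12 × 12 grid:

```python
def multiply(a, b):
    """Multiplication as the learned rule: add a to itself b times.
    Assumes b is a non-negative integer."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(7, 8))    # 56  -- inside the memorized table
print(multiply(13, 47))  # 611 -- far outside it; the rule still applies
```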
As explained above, one can always achieve 100% training accuracy simply by memorizing the training data. A child who memorizes a multiplication table will be 100% accurate on that table but have zero generalization. Similarly, models that achieve high accuracy by memorizing the training data (overfitting) are undesirable because they cannot handle novel inputs. In other words, overfit models are not useful in the real world.
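A toy experiment makes this concrete. In the sketch below (again our own illustration, not a real Brainome model), we hold out some multiplication problems, let a "model" memorize the rest, and compare its accuracy on the two splits: 100% on the memorized problems, 0% on the held-out ones.

```python
import random

random.seed(0)
pairs = [(a, b) for a in range(1, 13) for b in range(1, 13)]
random.shuffle(pairs)
train, test = pairs[:100], pairs[100:]  # hold out 44 problems

# The "model" is pure memorization of the training problems.
model = {(a, b): a * b for a, b in train}

def accuracy(split):
    correct = sum(model.get((a, b)) == a * b for a, b in split)
    return correct / len(split)

print(f"train accuracy: {accuracy(train):.0%}")  # 100% -- it memorized these
print(f"test accuracy:  {accuracy(test):.0%}")   # 0%   -- it never saw these
```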
But don’t just take our word for it.
Now, you might ask: How does one measure generalization in machine learning?
Please check out the Brainome Web Demo to get the answer 🙂