And I made it all the way to the end! With the last week of this fantastic introductory course on Machine Learning completed, I proceeded to procure the certificate of completion. The certificate can be linked to from your LinkedIn profile.
For the last week, we dove into Image Processing and Character Recognition within pictures. The professor walked us through the process of sweeping across a given picture one box of pixels at a time, looking for pixel patterns that match intelligible characters. Along with that, we looked into a handful of Machine Learning rules of thumb for working on projects, and how best to make progress from one stage of a project to the next.
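The sweep described above is the classic sliding-window approach. Here is a minimal sketch of the idea in Python with NumPy; the `bright` classifier below is a hypothetical stand-in (it just scores mean pixel intensity), whereas in the course a trained classifier would judge each window:

```python
import numpy as np

def sliding_window_detect(image, classifier, win=20, step=4, threshold=0.5):
    """Slide a win x win box across a grayscale image and collect the
    top-left corners of windows the classifier flags as a character."""
    h, w = image.shape
    detections = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            # classifier returns a score in [0, 1] for "contains a character"
            if classifier(patch) >= threshold:
                detections.append((x, y))
    return detections

# Toy demo: a bright square stands in for a character on a dark page,
# and mean intensity stands in for a real classifier's probability.
img = np.zeros((40, 60))
img[10:30, 20:40] = 1.0
bright = lambda patch: patch.mean()
hits = sliding_window_detect(img, bright, win=20, step=4, threshold=0.9)
```

In a real pipeline the same scan is repeated at several window scales, and overlapping detections are merged (e.g. by non-maximum suppression) before the character regions are passed on for recognition.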
Here is the link to my certificate, with a screen grab below.
Machine Learning is a mathematically intensive discipline, and mastering the subject demands a non-trivial amount of study spread out over a reasonably long duration. This introductory course, while providing the eager apprentice a springboard to get their feet wet and dabble a little, doesn't really delve into the mathematical depths of the concepts covered. We skim over the topics, learning just enough to get a bird's eye view of each concept and to see the formulas for what they are, without getting sufficient exposure to how those formulas came to be.
With that said, this course opens up doors for you to take things to the next level and go explore what intrigues you. Professor Andrew Ng helps you acquire a reasonably sound foundation to build upon. With the underlying knowledge of this course, I am now in a position to go learn about Google’s TensorFlow Machine Learning stack, or go read books on Neural Networks or Support Vector Machines.
I’ve come to realize that learning is a non-linear process. You do not have to choose the best book/course/instructor to get started in an area of knowledge. Perfectionism is a futile pursuit here, possibly leading you to an analysis-paralysis standstill. A more pragmatic approach is to pick a reasonably well-known source and run with it. You internalize that source and then pick another. After 3-4 iterations through various sources, you begin to see things fall into place all by themselves, and it only gets better as you advance. The Coursera Machine Learning course has its share of critics who say it is superficial and lacks mathematical rigor. They then recommend the same professor’s Stanford course on Machine Learning. I checked out that course, and it overwhelmed me very quickly with its mathematical density. However, with this course now completed, I have the foundation and confidence to plow through some real Machine Learning meat.
Thank you, Professor Ng, and thanks to Coursera!