Article: Stochastic Gradient Ascent

Gradient ascent uses the whole dataset for each update of the weights. That is fine with a hundred examples, but with billions of data points and thousands of features it is unnecessarily expensive computationally. This article, based on chapter 5 of Machine Learning in Action, discusses stochastic gradient ascent, which updates the weights using only one example at a time, and the modifications that yield better results.
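
To make the contrast concrete, here is a minimal sketch of one pass of stochastic gradient ascent for logistic regression, the model covered in chapter 5. The function name stochastic_grad_ascent, the fixed step size alpha, and the use of NumPy arrays are illustrative assumptions, not the book's exact listing:

    import numpy as np

    def sigmoid(z):
        # Logistic function; squashes z into (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def stochastic_grad_ascent(data, labels, alpha=0.01):
        # One pass over the data: update the weights using a single
        # example at a time instead of the full dataset.
        # data: (m, n) array of examples; labels: length-m array of 0/1 targets.
        m, n = data.shape
        weights = np.ones(n)
        for i in range(m):
            h = sigmoid(np.dot(data[i], weights))  # prediction for example i (a scalar)
            error = labels[i] - h                  # scalar error, not a vector
            weights = weights + alpha * error * data[i]
        return weights

The point of the stochastic version is cost per update: each step touches one example, O(n) work, where a full-batch gradient step costs O(m * n). With billions of examples, that difference is what makes the method practical.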
