In ordinary gradient descent for linear regression, logistic regression, and similar models, the betas (parameters) of a fixed model are updated by stepping along the gradient of the loss in parameter space.
In GBM, by contrast, it is the model itself that is updated through gradient descent: the descent happens in function space, with each boosting round adding a new base learner fit to the negative gradient of the loss, rather than nudging a single fixed set of parameters.
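As a minimal sketch of this idea, the toy example below implements gradient boosting for squared loss from scratch (this is an illustration I constructed, not code from the resources linked below). For squared loss, the negative gradient of the loss with respect to the current prediction F(x) is just the residual y - F(x), so each round fits a depth-1 regression stump to the residuals and adds it, scaled by a learning rate, to the running model:

```python
import numpy as np

# Toy data: noise-free targets y = x^2
X = np.linspace(-1.0, 1.0, 50)
y = X ** 2

def fit_stump(X, residual):
    """Fit a depth-1 regression tree (stump) by scanning candidate splits."""
    best = None
    for s in X:
        left, right = residual[X <= s], residual[X > s]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(X <= s, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda Xnew: np.where(Xnew <= s, lv, rv)

# Gradient descent in function space: each stump approximates the
# negative gradient of the squared loss, i.e. the residual y - F(X).
learning_rate = 0.1
F = np.full_like(y, y.mean())   # F_0: best constant model
stumps = []
for m in range(100):
    residual = y - F            # negative gradient for squared loss
    stump = fit_stump(X, residual)
    F = F + learning_rate * stump(X)
    stumps.append(stump)

print(f"final training MSE: {((y - F) ** 2).mean():.5f}")
```

Note that no beta ever moves here: each descent step is a whole new function added to the ensemble, which is exactly the sense in which the model, not the parameter vector, is what gradient descent updates.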
Useful Resources:
- A Gentle Introduction to Gradient Boosting: http://www.ccs.neu.edu/home/vip/teach/MLcourse/4_boosting/slides/gradient_boosting.pdf
- The Elements of Statistical Learning: http://web.stanford.edu/~hastie/local.ftp/Springer/OLD/ESLII_print4.pdf