Getting ready
Ridge regression is a regularization method where a penalty is imposed on the size of the coefficients. As we said in the Building a linear regressor section, in the ordinary least squares method, the coefficients are estimated by determining numerical values that minimize the sum of the squared deviations between the observed responses and the fitted responses, according to the following equation:
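$$\mathrm{RSS} = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2$$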
To estimate the β coefficients, ridge regression starts from the basic formula of the residual sum of squares (RSS) and adds a penalty term. The tuning parameter λ (≥ 0) multiplies the sum of the squared β coefficients (excluding the intercept) to form the penalty, as shown in the following equation:
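$$\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2 = \mathrm{RSS} + \lambda\sum_{j=1}^{p}\beta_j^2$$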
It is evident that λ = 0 means no penalty in the model; that is, ridge regression would produce the same estimates as least squares. On the other hand, as λ tends toward infinity, the penalty effect grows and shrinks many coefficients close to zero, but it never excludes them from the model entirely. Let's see how to build a ridge regressor in Python.
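The following is a minimal sketch of this idea, assuming scikit-learn's Ridge estimator and a small synthetic dataset (both illustrative choices, not the recipe's own data); the alpha argument plays the role of the tuning parameter λ:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative synthetic data: 100 samples, 5 features (an assumption for this sketch)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, -2.0, 0.0, 3.0, 0.5]) + rng.normal(scale=0.5, size=100)

# alpha corresponds to the tuning parameter lambda:
# values close to 0 approach ordinary least squares,
# while larger values shrink the coefficients toward zero
ridge_regressor = Ridge(alpha=1.0)
ridge_regressor.fit(X, y)

print(ridge_regressor.coef_)       # shrunken coefficients, none exactly zero
print(ridge_regressor.intercept_)  # the intercept is not penalized
```

Refitting with a much larger alpha (for example, 100) pulls all five coefficients visibly toward zero while keeping them nonzero, which mirrors the limiting behavior described above.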