Jul 11, 2024 · The logistic ridge regression estimator is defined by Schaefer et al. (1984) as a straightforward extension of Hoerl and Kennard (1970) to solve the multicollinearity problem as $\beta_{\mathrm{LRE}} = (X'WX + kI)^{-1}(X'WX)\,\beta_{\mathrm{MLE}}$ (2.9), where k > 0 is the biasing parameter and I is the identity matrix.
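Below is a minimal NumPy sketch of this construction, assuming the usual setup in which W is the diagonal matrix of fitted values π̂ᵢ(1 − π̂ᵢ) evaluated at the MLE: the MLE is obtained by iteratively reweighted least squares, and the ridge adjustment of (2.9) is then applied. The function names, the toy data, and the choice k = 0.5 are illustrative assumptions, not taken from the cited source.

```python
import numpy as np

def logistic_mle_irls(X, y, tol=1e-8, max_iter=100):
    """Logistic regression MLE via iteratively reweighted least squares (IRLS)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = pi * (1.0 - pi)                      # diagonal of the weight matrix
        z = X @ beta + (y - pi) / W              # working response
        XtWX = X.T @ (W[:, None] * X)
        beta_new = np.linalg.solve(XtWX, X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    # weight matrix evaluated at the converged MLE
    pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = pi * (1.0 - pi)
    return beta, X.T @ (W[:, None] * X)

def logistic_ridge(X, y, k):
    """Logistic ridge estimator (2.9): (X'WX + kI)^{-1} (X'WX) beta_MLE."""
    beta_mle, XtWX = logistic_mle_irls(X, y)
    return np.linalg.solve(XtWX + k * np.eye(X.shape[1]), XtWX @ beta_mle)

# Toy data with two highly correlated predictors (illustrative only).
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.99 * x1 + np.sqrt(1 - 0.99 ** 2) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + x1 - x2))))
print(logistic_ridge(X, y, k=0.5))
```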
When d = k = 0.5 and ρ = 0.99, both the ridge and KL estimators outperform the Liu estimator. No estimator uniformly dominates the others. However, our proposed KL estimator appears to perform better over a wider range of d = k values in the parameter space.
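To make this kind of comparison concrete, here is a small Monte Carlo sketch in the standard linear-model setting, using the commonly cited forms of the estimators: ridge (X'X + kI)⁻¹X'y, Liu (X'X + I)⁻¹(X'X + dI)β̂_OLS, and KL (X'X + kI)⁻¹(X'X − kI)β̂_OLS. The equicorrelated design, sample size, true coefficients, and the shared value d = k = 0.5 are assumptions made for illustration; none of them come from the snippet above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mse(rho=0.99, d_k=0.5, n=50, p=4, sigma=1.0, reps=2000):
    """Monte Carlo MSE of OLS, ridge, Liu, and KL estimators for one d = k value."""
    beta = np.ones(p) / np.sqrt(p)                 # true coefficients, unit length
    mse = dict(OLS=0.0, ridge=0.0, Liu=0.0, KL=0.0)
    for _ in range(reps):
        # predictors with pairwise correlation rho (equicorrelated design)
        Z = rng.normal(size=(n, p + 1))
        X = np.sqrt(1 - rho) * Z[:, :p] + np.sqrt(rho) * Z[:, [p]]
        y = X @ beta + sigma * rng.normal(size=n)
        XtX, Xty, I = X.T @ X, X.T @ y, np.eye(p)
        b_ols = np.linalg.solve(XtX, Xty)
        b_ridge = np.linalg.solve(XtX + d_k * I, Xty)
        b_liu = np.linalg.solve(XtX + I, (XtX + d_k * I) @ b_ols)
        b_kl = np.linalg.solve(XtX + d_k * I, (XtX - d_k * I) @ b_ols)
        for name, b in zip(mse, (b_ols, b_ridge, b_liu, b_kl)):
            mse[name] += np.sum((b - beta) ** 2) / reps
    return mse

print(simulate_mse(rho=0.99, d_k=0.5))
```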
Jun 6, 2024 · There are several methods to estimate the shrinkage parameter, such as the ridge, Liu, and Liu-type estimators, which have become a generally accepted and more effective methodology to solve the ...
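As one concrete example of estimating such a parameter, the sketch below computes a frequently cited ridge biasing-parameter estimate attributed to Hoerl and Kennard, k̂ = σ̂² / max_j α̂_j², with α̂ the OLS coefficients in the canonical coordinates of X'X. The function name is hypothetical, and this is only one of many proposed estimators of k; the method used in the source above is not specified in the snippet.

```python
import numpy as np

def hoerl_kennard_k(X, y):
    """One frequently cited choice of the ridge biasing parameter:
    k_hat = sigma_hat^2 / max_j(alpha_hat_j^2), where alpha_hat are the
    OLS coefficients in the canonical (eigenvector) coordinates of X'X."""
    n, p = X.shape
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    resid = y - X @ b_ols
    sigma2 = resid @ resid / (n - p)      # residual variance estimate
    _, Q = np.linalg.eigh(XtX)            # orthonormal eigenvectors of X'X
    alpha = Q.T @ b_ols                   # canonical coefficients
    return sigma2 / np.max(alpha ** 2)
```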
Jul 11, 2024 · Theory and simulation results show that, under some conditions, it performs better than both the Liu and ridge regression estimators in the sense of smaller mean squared error (MSE).
Mar 1, 2023 · Two real-life applications were analyzed to illustrate the performance of the COMPKL estimator. The results of the simulation study and the applications indicated that the COMPKL estimator outperforms the ML, ridge, and Liu estimators, especially when the explanatory variables are highly correlated.
Both the ridge regression and Liu estimators are widely accepted in the linear regression model as alternatives to the OLS estimator to circumvent the problem of multicollinearity. In this study, we proposed a modified Liu estimator that possesses a single parameter, placing it in the class of ridge and Liu estimators.
Nov 22, 2021 · The new estimators outperform the existing estimators in most of the considered scenarios, including cases of high and severe multicollinearity. The 95% mean prediction interval for all the estimators is also computed for the Tobacco data.