Is there an easy way of combining fractional RR with efficient leave-one-out cross-validation?
SKLearn has a documented implementation of LOOCV for standard RR:
https://github.com/scikit-learn/scikit-learn/blob/7e1e6d09bcc2eaeba98f7e737aac2ac782f0e5f1/sklearn/linear_model/_ridge.py#L1432
Edit: the linked code should be reviewed in light of this issue:
scikit-learn/scikit-learn#18079
(TL;DR: although the scikit-learn code mentions GCV, an algebraic form of LOOCV is implemented).
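For reference, the algebraic LOOCV shortcut at issue is the standard hat-matrix (PRESS) identity: for ridge with a fixed penalty `alpha`, the leave-one-out residual for sample `i` is `(y_i - yhat_i) / (1 - H_ii)`, where `H = X (XᵀX + αI)⁻¹ Xᵀ`, so all n LOO errors follow from a single fit. The sketch below illustrates that identity; it is not necessarily the exact code path the linked scikit-learn implementation takes, and the function names are mine:

```python
import numpy as np

def ridge_loo_errors(X, y, alpha):
    """LOO residuals for ridge regression via the hat-matrix identity.

    Exact for a fixed alpha and no intercept: Sherman-Morrison on
    G - x_i x_i^T gives e_i^loo = (y_i - yhat_i) / (1 - H_ii).
    """
    n, p = X.shape
    G = X.T @ X + alpha * np.eye(p)
    coef = np.linalg.solve(G, X.T @ y)
    y_hat = X @ coef
    # Diagonal of H = X G^{-1} X^T, without forming H explicitly.
    h = np.einsum("ij,ij->i", X @ np.linalg.inv(G), X)
    return (y - y_hat) / (1.0 - h)

def ridge_loo_errors_naive(X, y, alpha):
    """Brute-force check: refit n times, leaving one sample out each time."""
    n, p = X.shape
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        G = X[mask].T @ X[mask] + alpha * np.eye(p)
        coef = np.linalg.solve(G, X[mask].T @ y[mask])
        errs[i] = y[i] - X[i] @ coef
    return errs
```

The open question is whether this single-fit trick extends cleanly to the fractional-RR setting, where the regularization is parameterized by the shrinkage fraction rather than by alpha directly.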