[ENH] Add standard deviation columns to Benchmarking results#626

Open
Vaishnav88sk wants to merge 1 commit into gc-os-ai:main from Vaishnav88sk:feature/benchmarking-std-scores
Conversation

@Vaishnav88sk
Reference Issues/PRs

Fixes #620

What does this implement/fix? Explain your changes.

Extends the output of Benchmarking.run() to include standard deviation metrics computed across cross-validation folds.

  • Added train_std and test_std columns to the results DataFrame.
  • Lets researchers assess the stability and variance of different estimators, not just their mean scores.
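The aggregation this PR describes can be sketched as follows. This is a minimal illustration of computing mean and std over per-fold CV scores; the helper name `summarize_cv_scores` and the exact column names are assumptions for illustration, not the repo's actual API.

```python
from statistics import mean, pstdev


def summarize_cv_scores(train_scores, test_scores):
    """Aggregate per-fold CV scores into one result row.

    Hypothetical helper mirroring the PR's train_std/test_std columns.
    Uses population std (ddof=0), matching numpy's default.
    """
    return {
        "train": mean(train_scores),
        "test": mean(test_scores),
        "train_std": pstdev(train_scores),
        "test_std": pstdev(test_scores),
    }


# Example: three CV folds for one estimator.
row = summarize_cv_scores([0.91, 0.89, 0.93], [0.85, 0.80, 0.90])
```

Because the std values are appended as new keys rather than replacing the mean scores, downstream code reading only `train`/`test` keeps working unchanged.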

What should a reviewer concentrate their feedback on?

  • Column naming consistency.
  • Backward compatibility (no changes to existing columns).

Did you add any tests for the change?

Yes. Added tests in test_benchmarking.py that assert the std columns are present and contain valid (non-negative) values.
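The described checks could look roughly like this. A hypothetical pytest-style assertion helper; the results layout (column name mapped to a list of per-estimator values) is an assumption, not the actual test code from test_benchmarking.py.

```python
def check_std_columns(results):
    """Assert the std columns exist and hold non-negative values.

    results: hypothetical mapping of column name -> per-estimator values.
    """
    for col in ("train_std", "test_std"):
        assert col in results, f"missing column: {col}"
        assert all(v >= 0 for v in results[col]), f"{col} has negative values"
    return True


# Example with fake benchmark output for two estimators.
ok = check_std_columns({"train_std": [0.02, 0.01], "test_std": [0.04, 0.0]})
```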

Any other comments?

N/A

PR checklist

  • The PR title starts with [ENH]
  • Added/modified tests
  • Used pre-commit hooks

Add train_std and test_std columns alongside existing train/test mean
scores. This lets researchers assess model stability across CV folds.

Backward compatible: existing train/test columns are unchanged; the new columns are appended. Includes 3 new tests covering column presence, non-negativity, and multi-metric support.
