
Question about K-fold Cross-Validation Implementation in Your Code #1

@MSKBOX

Description


Hello,
I've been studying your reinforcement learning framework for ensemble optimization algorithms and I'm particularly interested in the K-fold cross-validation methodology mentioned in your paper. In the paper, you state:

"The K-fold cross-validation method is adopted to separate instances into training sets and validation sets, where we use K=4, meaning that 512 instances will be taken as validations."

I've examined the code in cec_dataset.py and Testing.py, and I noticed the parameters:
Train_set = 1024
Test_set = 1024
I'm curious about how the K-fold cross-validation (K=4) was actually implemented. Is it done by:

1. manually running the experiment 4 times with different random seeds (data_gen_seed and test_seed),
2. using a 3:1 split ratio (1536 training instances and 512 validation instances) with a different seed for each fold, and
3. averaging the results of these 4 separate runs?

Or is there a more automated K-fold implementation that I might have missed in the codebase? A rough sketch of the scheme I have in mind is below.
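For concreteness, this is roughly what I assumed the K=4 procedure looks like. The instance count, the seed value, and the split logic here are my own guesses for illustration and are not taken from your repository:

```python
# Hypothetical sketch of the K=4 fold scheme described above (not code from this repo).
# 2048 instance indices are shuffled once; each fold holds out 512 indices for
# validation and trains on the remaining 1536, and the 4 fold results would be averaged.
import numpy as np

K = 4
N_INSTANCES = 2048        # my assumption: Train_set + Test_set
data_gen_seed = 0         # placeholder value, analogous to data_gen_seed in the code

rng = np.random.default_rng(data_gen_seed)
indices = rng.permutation(N_INSTANCES)
folds = np.array_split(indices, K)    # 4 disjoint folds of 512 indices each

for k in range(K):
    val_idx = folds[k]
    train_idx = np.concatenate([folds[i] for i in range(K) if i != k])
    print(f"fold {k}: {len(train_idx)} training / {len(val_idx)} validation instances")
```

Is this close to what was actually done, or does the released code handle the folds differently?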
Thank you for clarifying this aspect of your methodology; it would help me replicate your experiments properly and better understand the validation approach.
Best regards,
