Dear Author,
Thank you very much for your work. We encountered some issues while trying to replicate the results reported in your paper, which may be due to some incorrect settings in our replication process. We are eagerly looking forward to your corrections and would be extremely grateful for your assistance. Below are the detailed steps of our replication process.
- Unable to replicate the performance of PEFT (T5-base)
We noted that PEFT is a key component of C3, so we trained with a T5-base backbone using the following command:
```shell
python train/main.py --cuda_visible_devices 0 \
    --do_train true \
    --do_cl_eval true \
    --predictor_backbone_plm t5-base-lm-adapt \
    --predictor_batch_size 4 \
    --predictor_gradient_accumulation_steps 12 \
    --dataset_type spider \
    --dataset spider_perm_1 \
    --first_task_id 0 \
    --last_task_id 10 \
    --task_num 11 \
    --teacher false
```
Apart from the options in the command above, we did not modify any parameters. Our final results are as follows; they still show a gap compared to the 65.7 and 64.5 reported in the paper:
"task_10": { "acc_a": 0.6275454545454546, "acc_w": 0.6067779069767443 }
Could you please let us know if there are any errors in the experimental setup above? We would greatly appreciate your guidance.