I have fine-tuned the base1.5 model on my own robot dataset, collected on a pick-and-place task with a suction-type gripper.
Dataset used = 200 episodes
Train/test split ratio = 80/20
mode = full
task = language-conditioned
All other config parameters are the same as in the default config file.
Now I want to understand: why is there so much fluctuation in the validation curve and the training loss? Why is the fine-tuned model not generalizing across my dataset? Is it overfitting, or do I need to change my dataset, or the fine-tuning config (hyperparameters)? Kindly suggest what I am doing wrong. I got similar results with 250 episodes of data and with head-only training as well.
The attached image below contains the different plots.
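In case it helps judge the fluctuation, here is a minimal sketch of how the raw per-step losses could be smoothed with an exponential moving average before plotting, to separate noise from the actual trend (the list of loss values here is just illustrative, not my real data):

```python
def ema_smooth(values, alpha=0.1):
    """Exponential moving average: lower alpha = heavier smoothing.

    Useful for telling apart per-step noise (which EMA flattens out)
    from a genuine upward trend in validation loss (which survives
    smoothing and would suggest overfitting).
    """
    smoothed = []
    last = values[0]  # initialize with the first observation
    for v in values:
        last = alpha * v + (1 - alpha) * last
        smoothed.append(last)
    return smoothed


# Illustrative usage with made-up loss values:
raw_val_loss = [0.9, 0.5, 0.8, 0.4, 0.7, 0.3, 0.6]
print(ema_smooth(raw_val_loss, alpha=0.2))
```

If the smoothed validation curve is flat or decreasing while the raw one jumps around, the fluctuation is likely just batch-to-batch noise from the small validation set rather than a real generalization problem.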
