Hey! Thanks for creating and open-sourcing the repo, it's fun to play with!
I want to build an automated tool for creating clips of our volleyball games, trimming our 1+ hour videos down to ~20 minutes of just actual gameplay.
I'm having trouble getting the GameStatusClassifier to reliably recognize anything (probably because the camera angles and footage quality from our recreational games are much worse).
- What made you decide to use the VideoMAE model for detecting game state? I tried reading up on it but couldn't find anything relevant to sports analytics outside of https://arxiv.org/abs/2203.12602
- Also, if I want to train / fine-tune the model, is it enough for the clips to be labeled "playing" vs. "not playing"? e.g. 50 x 10-second clips of each?
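For what it's worth, here's how I was imagining organizing the clips for fine-tuning. This is just a sketch under my own assumptions (the `clips/playing/` and `clips/not_playing/` folder names are my guess, not something from the repo), building `(path, label)` pairs for a binary classification fine-tune:

```python
from pathlib import Path

# Hypothetical class-per-folder layout (my assumption, not the repo's):
#   clips/playing/*.mp4      -> clips where a rally is in progress
#   clips/not_playing/*.mp4  -> breaks, timeouts, ball retrieval, etc.
LABELS = ["not_playing", "playing"]
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}

def build_samples(root: str) -> list[tuple[str, int]]:
    """Collect (clip_path, label_id) pairs from a class-per-folder layout."""
    samples = []
    for name, idx in label2id.items():
        for clip in sorted(Path(root, name).glob("*.mp4")):
            samples.append((str(clip), idx))
    return samples
```

My understanding is that the `label2id` / `id2label` dicts could then be passed to `VideoMAEForVideoClassification.from_pretrained(..., num_labels=2, label2id=label2id, id2label=id2label)` from Hugging Face `transformers`, but correct me if your training setup expects something different.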
Also, I noticed that we have the player skeleton data, but we don't use it for Action Recognition. What did you plan to do with the skeleton data? Do you think it would be helpful if I trained a model for Action Detection?
Anyway, thanks again for creating the repo. I'm not a machine learning engineer, but I'm having fun messing around with it!