Hi there!
I'm working on a project tracking long-term social behaviors in mice. Really excited to try out LabGym!
My trial footage includes 3 black mice with dyed patterns on their backs (stripes, spots, and a plus sign). It is important that their identities are maintained.
I annotated 130 frames in Roboflow, and was pretty happy with its success rate (see below). The mice are all in one class, and I included some objects in the arena.
I used this to train a detector, and after some adjustments it gave an mAP of 68 for the mice, and the testing images seemed satisfactory.
But I'm running into problems with the mouse detection in the video clips to sort.
The outlined area, which should contain only one mouse, often includes multiple mice or part of the background. Or it starts on one mouse and then jumps across the screen.
Does anyone have suggestions for parameters I can adjust to increase the accuracy of class identification in these clips? This is far less accurate than the testing examples generated by the detector. I could annotate more frames, but I have used up all of my free Roboflow credits and would like a better idea of whether this will work before buying a plan.
Also, any tips for maximizing success in maintaining identities? Would it be better to train the Roboflow model with a different class for each mouse? When sorting, would I need to specify which mouse is performing the behavior for each clip (e.g., spot-run, spot-eat, stripe-run, stripe-eat)? I don't quite understand what is meant by the "behavior-guided" identity correction method. There are no behaviors unique to a given mouse in my case, just different patterns.
Thank you for your help. Let me know if any more info would be helpful.
-Leina
Hi Leina,
First, sorry that I didn't see your question earlier. There must be a glitch in the notifications, as I didn't receive any notification about this (usually I receive notifications through my email).
If maintaining the IDs of the mice is critical, you have two options. First, utilize the differences in their appearance. In your case, you can label the three mice as three different classes, like stripe mouse, spot mouse, and plus mouse. If these markers are always visible and distinguishable in your videos, LabGym can accurately track the three mice and maintain their IDs over time. In this scenario, you probably only need to annotate around 200 non-redundant images to train a good Detector (you will need to use augmentations). Alternatively, if the markers are not easy to distinguish, you can label the three mice as a single class, mouse. In this scenario, you will need more annotated images (around 400-500) to train a good Detector (you also need to use augmentation). And when you select images for annotation, select as many images as possible in which the mice are in close body contact; you don't even need any images in which the three mice are well separated.
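(Side note on augmentation: LabGym applies it for you when you enable that option during Detector training, so you don't have to run anything yourself. If you just want to preview what bounding-box-aware augmentation does to one of your annotated frames, here is a minimal sketch using the albumentations library; the file name, box coordinates, and class names are hypothetical.)

```python
import cv2
import albumentations as A

# Hypothetical annotated frame and Pascal-VOC style boxes (x_min, y_min, x_max, y_max).
image = cv2.imread("frame_0001.jpg")
bboxes = [(120, 80, 200, 150), (300, 220, 380, 290), (50, 310, 130, 380)]
labels = ["stripe_mouse", "spot_mouse", "plus_mouse"]

# Flips, small rotations, and brightness/contrast jitter that keep the boxes valid.
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
cv2.imwrite("frame_0001_aug.jpg", augmented["image"])
```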
You only need to use Roboflow for the annotations; I don't think manual annotation in Roboflow uses any credits. You train your Detectors within the LabGym framework.
If you want me to check your annotation or help you with the datasets, you can invite me (yujiahu415@gmail.com) to your Roboflow workspace so that I can take a look.
In Roboflow, you only label the animals or objects, not their behaviors. LabGym has a dedicated behavior classifier, the Categorizer, for behavior identification. You don't need to use behavior-guided identity correction, as that is mostly for individual-specific behaviors, like sex-specific behaviors. For example, if there are male-specific behaviors and you choose that option, LabGym can correct the ID of a male mouse whenever a male-specific behavior occurs.
No worries, thanks for getting back to me. I've figured most of it out; just one question: how do I teach it to recognize which two animals are interacting? For example, sniff and be sniffed, or nose-nose. I want to quantify A's overall interactions with B vs. C, etc.
Thanks!
First, train a Detector, either with three classes or with just one class (mouse), whichever gives higher accuracy in maintaining the IDs. Choose the interactive advanced mode when generating behavior examples using that Detector. Then sort the behavior examples according to the behavior of the magenta-colored individual, like sniffing or being sniffed, and train a Categorizer on the sorted examples. Finally, choose the trained Detector and Categorizer to analyze your videos. During analysis, each mouse gets a unique ID, and if the IDs are precisely maintained over time, you can figure out which mouse is interacting with which from the LabGym analysis outputs. For example, if at a given frame mouse 0 is sniffing, mouse 1 is being sniffed, and mouse 2 is idling, then mouse 0 is interacting with mouse 1. Since the behavior of each mouse is documented at each frame and stored in the spreadsheet, you can simply apply a filter to that spreadsheet to figure out which two mice are interacting at any frame. Does this make sense?
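If it helps, here is a minimal pandas sketch of that filtering step. The file name and column layout are hypothetical (check your actual LabGym export for the exact names); the idea is just that each row is a frame and each animal ID has its own behavior column.

```python
import pandas as pd

# Hypothetical layout: one row per frame, one behavior column per animal ID.
# Replace the file name and column names with the ones in your LabGym export.
df = pd.read_excel("behavior_per_frame.xlsx")

# Frames where mouse 0 is sniffing and mouse 1 is being sniffed.
pair_0_1 = df[(df["mouse 0"] == "sniffing") & (df["mouse 1"] == "being sniffed")]

# Count how many frames mouse 0 spends sniffing each partner, to compare B vs. C.
for partner in ["mouse 1", "mouse 2"]:
    frames = ((df["mouse 0"] == "sniffing") & (df[partner] == "being sniffed")).sum()
    print(f"mouse 0 sniffing {partner}: {frames} frames")
```

Dividing the frame counts by your video's frame rate converts them to time spent interacting.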