-
Hi, You're very welcome! For this assay, since you only care about the behaviors of one mouse (the resident), you can use the 'interactive basic' mode, which always analyzes the two mice as one. The process is much simpler than the 'interactive advanced' mode, which analyzes each individual and distinguishes the social roles of different individuals. In 'interactive basic' mode, you don't need to worry about identity switching since the two mice are analyzed together all the time.

So the Detector training is very simple: select at least 50 frames (100 is recommended) for annotation. These 50 images should ideally come from 50 different videos so that the Detector sees as many recording scenarios as possible, and they should represent different mouse locations, poses, whether or not the mice have body contact, etc. Basically, provide the Detector with scenarios as diverse as possible so it is aware of the different scenes it will encounter during analysis.

Use EZannot to annotate these images, with a single class 'mouse' for both mice. You may not need to include the tail when annotating a mouse's body if the tail cannot be automatically detected with EZannot's AI help. But if the tail is important for identifying some of the behaviors of interest, you'll need to include it in the annotation of every image where the tail is present. When the two mice have close body contact, you can still annotate them as two 'mouse', but if one mouse is completely (or more than half) occluded by the other, just annotate one 'mouse'. Choose all the augmentation methods in EZannot and export a training dataset to train a Detector. If the frame size of your videos is something like 480 × 720, use 720 as the inferencing frame size when training the Detector, and set the iteration number to 20,000.

Then generate the behavior examples using the trained Detector and the 'interactive basic' mode. I'm not sure what the fps of your videos is; if it is higher than 30, downsample it to 30 or even 15 using the preprocessing module of LabGym. Use 0.5 second (15 frames if fps is 30; 8 frames if fps is 15) as the duration of each behavior example, and set the interval between two consecutively generated examples to the duration of one behavior example (e.g., 15 if the duration is set to 15 frames). Don't include background or body parts in behavior examples, and set the background to white.

When you sort behavior examples, select the most representative ones for each behavior category, and make sure the selected ones cover the majority of diverse scenarios of each behavior: mice in different locations, orientations, postures, etc. For each behavior, I recommend having at least 200 pairs of examples (400 individual files), but for rare behaviors that are easy to distinguish, it's fine to have 80 pairs. The selected examples should not be redundant, meaning every pair should look different from the others. We can talk more about the training settings after you have the behavior examples generated. Before sorting examples, show me some of them so that I can take a look and see if they look good; I don't want your sorting effort to be wasted.
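By the way, if you want to sanity-check the fps downsampling and the per-example frame counts outside the GUI, here is a minimal OpenCV sketch of the same idea. This is not LabGym's actual preprocessing code, and the file names are placeholders:

```python
# Minimal sketch (not LabGym's preprocessing module): downsample fps by
# keeping every n-th frame, and compute the per-example frame count.
import cv2

def downsample_fps(src_path, dst_path, factor=2):
    """Keep every `factor`-th frame, e.g. 30 fps -> 15 fps with factor=2."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*'mp4v'),
                          fps / factor, (w, h))
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % factor == 0:  # keep every factor-th frame
            out.write(frame)
        i += 1
    cap.release()
    out.release()

# 0.5 s per behavior example: 15 frames at 30 fps, 8 frames at 15 fps
for fps in (30, 15):
    print(fps, 'fps ->', round(0.5 * fps), 'frames per example')

downsample_fps('video_30fps.mp4', 'video_15fps.mp4', factor=2)  # placeholder names
```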
-
On second thought, when you annotate the mice and they have close body contact that makes it hard for EZannot's AI to determine the boundary between the two bodies, just annotate the whole blob as one 'mouse'. Only annotate two 'mouse' when EZannot's AI can easily determine the boundary.
-
Hi, I just finished annotating 100 images with different body poses, videos, etc., in EZannot and applying all the image augmentations. I had a quick question about your point on the frame size. My videos have different frame sizes; for example, one is 128 × 352 pixels while another is 142 × 370 pixels. Should I still annotate all of these together? What should I use for the inferencing frame size?

Likewise, I had a question about the annotations (and my video dataset in particular). In some videos, the camera occasionally shifts and a mouse in a neighboring box comes into view. The original recordings have four boxes in one video, and I cropped these into four separate videos, one per box. But sometimes the boxes are moved slightly in the middle of a recording, so in one box you might see the two mice of interest while another mouse pops into view briefly (though completely physically separate from the two mice of interest). How should I handle these scenarios? Let me know if I can provide more clarification, and thank you for your help!
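For anyone doing a similar fixed-box crop, a minimal OpenCV sketch is below; the coordinates are placeholders, not the actual box positions:

```python
# One simple way to crop a fixed box out of a multi-box recording with
# OpenCV. The coordinates are placeholders; real box positions will differ.
import cv2

def crop_box(src_path, dst_path, x, y, w, h):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*'mp4v'),
                          fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame[y:y + h, x:x + w])  # crop the fixed region
    cap.release()
    out.release()

crop_box('four_boxes.mp4', 'box_top_left.mp4', x=0, y=0, w=352, h=128)
```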
-
Hi, Yes, you can annotate all of the images together and use 370 (the largest dimension among your videos) as the inferencing frame size. You can include some non-redundant images of the scenario where a mouse from another box comes into view, and annotate three 'mouse' in those images.
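If you want to double-check the largest dimension across all of your videos before setting the inferencing frame size, here is a quick sketch with OpenCV; the folder path and extension are placeholders:

```python
# Quick check of frame sizes across a folder of videos; the largest
# dimension found is what to use as the inferencing frame size.
import glob
import cv2

largest = 0
for path in glob.glob('cropped_videos/*.mp4'):  # placeholder path
    cap = cv2.VideoCapture(path)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    print(path, w, 'x', h)
    largest = max(largest, w, h)

print('largest dimension:', largest)  # e.g., 370 for the videos above
```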
-
Hi,
First off, I appreciate your help with the head-twitch response question I asked earlier; I'm still working on refining it now. I'm currently developing a Categorizer for a resident-intruder assay and wanted to ask for some advice before starting the process.
One problem is that the resident and intruder mice look very similar to each other! The intruder has a black tail mark, which isn't always apparent (it's hard for me to tell sometimes). Here's a sample image, where I boosted the brightness 3× using the preprocessing module.
I want to train a Detector; should I annotate both mice as a single "mouse" class or split them into "intruder" and "resident"? How many images would I need to annotate?
Furthermore, what do you recommend for the next steps? For the behavior examples, I mainly want to track "aggressive attacks," which are not super common in my dataset (maybe 80 total examples), though they are very easy to see and annotate. I also want to track other behaviors like "dominance," "pursuit," and "social behavior." Also, I only care about interactions where the "resident" does something to the "intruder," not the other way around. How long should the generated behavior examples be to best capture all these various types of behavior?
Lastly, what should I do when I train the Categorizer? Are there any considerations I should keep in mind before training and before analyzing the videos?