Conversation
Issue 2 SOLVED: Missing files in resources/robots - worked like a charm afterwards. Really solid detection for Robots + Balls in viewport. Detected multiple goal_posts at times, but nothing a little k-NN/IoU post-processing can't fix downstream.
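The duplicate goal_post detections mentioned above could be filtered with a simple IoU-based de-duplication pass downstream. A minimal sketch, assuming axis-aligned boxes as `(x1, y1, x2, y2)` tuples and an arbitrary overlap threshold (both are assumptions, not anything in this PR):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def dedupe(boxes, threshold=0.5):
    # Keep a box only if it does not overlap an already-kept box above the threshold.
    # For real NMS you would sort by detection confidence first.
    kept = []
    for box in boxes:
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept
```
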
ysims
left a comment
I'd suggest removing the image curator tool; it was a bit of a hack to get a good dataset to train on right before RoboCup, and the bounding boxes weren't working properly some of the time.
A few things
- Are we still getting huge robot bounding boxes occasionally when they are very slightly in frame?
- The goal posts should be fixed here, if there is an issue with them it needs to be part of this PR. The output of this tool should be a dataset that can be directly used to train an artificial neural network.
- The NUbook page needs updating with this PR, particularly letting people know how to use the bounding box set up.
- The intersection method is not a great solution; it was the quickest thing we could do in the time we had before RoboCup. The main thing I don't like about it is that, at the moment, you can't get both correct bounding box and segmentation outputs from the same raw NUpbr image, and each HDR needs the coloured blobs on the intersections. It would be better to use the field dimensions to get them.
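Deriving the intersections from field dimensions, as suggested above, could look something like the sketch below: enumerate the L/T/X points in field-centred world coordinates, then project them through the camera to get image-space annotations. The dimension names and values here are illustrative placeholders, not the actual NUpbr or RoboCup field configuration:

```python
# Hypothetical field dimensions in metres (illustrative values only,
# not the real NUpbr config).
FIELD_LENGTH = 9.0
FIELD_WIDTH = 6.0
GOAL_AREA_LENGTH = 1.0
GOAL_AREA_WIDTH = 3.0
CENTRE_CIRCLE_RADIUS = 0.75

def field_intersections():
    """Field-line intersection points (metres, field-centred) per marker class."""
    hl, hw = FIELD_LENGTH / 2, FIELD_WIDTH / 2
    gl, gw = GOAL_AREA_LENGTH, GOAL_AREA_WIDTH / 2
    points = {"L": [], "T": [], "X": []}
    for sx in (-1, 1):
        for sy in (-1, 1):
            points["L"].append((sx * hl, sy * hw))         # field corners
            points["L"].append((sx * (hl - gl), sy * gw))  # goal-area corners
            points["T"].append((sx * hl, sy * gw))         # goal area meets goal line
        points["T"].append((0.0, sx * hw))                 # halfway line meets side line
        points["X"].append((0.0, sx * CENTRE_CIRCLE_RADIUS))  # halfway line crosses centre circle
    return points
```

Projecting these through the render camera would replace the coloured HDR blobs entirely, so the same raw image could produce both segmentation and bounding box outputs.
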
```python
# Goal annotations
# goal_annotations = [util.write_annotations(goal.obj, 1) for goal in goals]
# annotations += [ann for ann in goal_annotations if ann is not None]
```
Don't leave in commented-out code.
```python
width = max_x - min_x
height = max_y - min_y

# Normalize coordinates
```

Suggested change: `# Normalize coordinates` → `# Normalise coordinates`
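The `width`/`height` values in the hunk above feed into the normalisation YOLO expects: centre coordinates and box sizes as fractions of the image size. A minimal sketch (the function name and argument order are assumptions, not the tool's actual code):

```python
def normalise_bbox(min_x, min_y, max_x, max_y, img_w, img_h):
    # Convert pixel-space corner coordinates to YOLO's normalised
    # (x_centre, y_centre, width, height), each in [0, 1].
    width = max_x - min_x
    height = max_y - min_y
    x_centre = min_x + width / 2
    y_centre = min_y + height / 2
    return (x_centre / img_w, y_centre / img_h, width / img_w, height / img_h)
```
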
```python
Image Curator Tool for NUpbr Dataset

This tool allows you to review generated images with their annotations,
visualize bounding boxes, and accept/reject images for your final dataset.
```

Suggested change: `visualize bounding boxes, ...` → `visualise bounding boxes, ...`
## BOUNDING BOX STUFF

The current bounding box implementation in this branch does not integrate well with the segmentation side. The HDR must have blobs on the intersections in the following colours:

- L: magenta [255, 0, 255]
- T: cyan [0, 255, 255]
- X: darker orange [255, 100, 0]

The goal posts must be solid yellow [255, 255, 0] with just the posts and not the top bar or any other part of the goals.
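Given those marker colours, locating the blobs in a rendered mask image reduces to per-class colour matching. A hedged NumPy sketch, assuming an RGB `uint8` image and a hypothetical tolerance parameter (the real tool's matching logic may differ):

```python
import numpy as np

# Marker colours taken from the list above.
MARKER_COLOURS = {
    "L": (255, 0, 255),   # magenta
    "T": (0, 255, 255),   # cyan
    "X": (255, 100, 0),   # darker orange
}

def marker_pixels(image, tolerance=10):
    # image: H x W x 3 uint8 RGB array. Returns, per marker class, the
    # (row, col) coordinates of pixels within `tolerance` of the marker colour.
    found = {}
    for name, colour in MARKER_COLOURS.items():
        diff = np.abs(image.astype(np.int16) - np.array(colour, dtype=np.int16))
        mask = (diff <= tolerance).all(axis=-1)
        found[name] = np.argwhere(mask)
    return found
```

Clustering each class's pixel set into connected blobs then gives one intersection point per blob.
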
It would be better to compute the positions from the known field dimensions.


Adds bounding box annotations to the output in YOLO format.
To do: fix some issues with field line annotations.
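For reference, YOLO-format output as mentioned above is one text file per image, one line per object: `class x_centre y_centre width height`, all normalised to [0, 1]. A minimal writer sketch; the class-index mapping here is an assumption, not the project's actual one:

```python
# Hypothetical class-index mapping (illustrative only).
CLASSES = {"ball": 0, "goal_post": 1, "robot": 2}

def write_yolo_labels(path, annotations):
    # annotations: list of (class_name, x_centre, y_centre, width, height),
    # all coordinates already normalised to [0, 1].
    with open(path, "w") as f:
        for name, xc, yc, w, h in annotations:
            f.write(f"{CLASSES[name]} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}\n")
```
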