I trained a model only on the Objectron dataset, and I want to use ground-truth bboxes instead of G-Dino bboxes for evaluation by modifying the function `merge_oracle2d_to_detection_dicts` in `cubercnn/data/build.py`:
```python
import json

import torch


def merge_oracle2d_to_detection_dicts(dataset_dicts, oracle2d):
    """Attach 2D oracle boxes (GT or precomputed detections) to each dataset dict."""
    for dataset, oracle in zip(dataset_dicts, oracle2d):
        if oracle == "use_gt":
            # Build the oracle directly from the ground-truth annotations.
            for data_dict in dataset:
                filtered_annotations = [
                    ann for ann in data_dict["annotations"] if ann["category_id"] != -1
                ]
                data_dict["oracle2D"] = {
                    "gt_bbox2D": torch.tensor([inst["bbox2D_proj"] for inst in filtered_annotations]),
                    "gt_classes": torch.tensor([inst["category_id"] for inst in filtered_annotations]),
                    # GT score is 1.0 for true labels.
                    "gt_scores": torch.tensor([1.0 for _ in filtered_annotations]),
                }
        else:
            # Load precomputed detections (e.g. G-Dino) from a JSON file.
            with open(oracle, "r") as file:
                oracle_data = json.load(file)
            for data_dict, oracle_dict in zip(dataset, oracle_data):
                assert data_dict["image_id"] == oracle_dict["image_id"]
                data_dict["oracle2D"] = {
                    "gt_bbox2D": torch.tensor([xywh_to_xyxy(inst["bbox"]) for inst in oracle_dict["instances"]]),
                    "gt_classes": torch.tensor([inst["category_id"] for inst in oracle_dict["instances"]]),
                    "gt_scores": torch.tensor([inst["score"] for inst in oracle_dict["instances"]]),
                }
```
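For reference, `xywh_to_xyxy` is not shown in the snippet above; this is only a guess at its implementation, assuming the JSON stores COCO-style `[x, y, w, h]` boxes (top-left corner plus size). If the stored boxes were already in corner format, applying this conversion would stretch them and depress AP:

```python
def xywh_to_xyxy(box):
    # Convert a COCO-style [x, y, w, h] box (top-left + size)
    # to [x1, y1, x2, y2] corner format.
    x, y, w, h = box
    return [x, y, x + w, y + h]
```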
However, when I use this function to evaluate on Objectron_test, AP2D cannot reach 100%, and AP3D is even worse than with G-Dino bboxes:
Using GT bboxes with the above modification: (results screenshot not captured)
Using G-Dino bboxes: (results screenshot not captured)
Could you please offer some advice about this phenomenon? How do you use GT bboxes for evaluation?
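One way to narrow this down (a sketch, not part of the repository): if AP2D is below 100% even with GT boxes, the merged oracle boxes may not match the boxes the evaluator actually scores against, e.g. `bbox2D_proj` vs. a different GT field, or a format mismatch. Comparing them pairwise with a plain IoU helper (`iou_xyxy` below is my own, hypothetical) on a few images would confirm whether they agree exactly:

```python
def iou_xyxy(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

If the IoU between each oracle box and its corresponding evaluation GT box is not 1.0, the discrepancy is in the box source or format rather than in the detector head.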