Based on the test code below, the confidence value is calculated as the mean of the top 25% of the raw model output, before relative prediction is applied. However, if the relative prediction option is enabled, the prediction values are recalculated via Min-Max normalization, so the range of pixel values in the heatmap may change significantly from the values the confidence was computed on. Could you please clarify whether this understanding is correct?
test.py:

```python
# Confidence: mean of the top 25% of the raw (un-normalized) model scores.
conf_av = np.sort(scores_av.flatten())[-n//4:].mean()
conf_obj = np.sort(scores_obj.flatten())[-n//4:].mean()
conf_av_obj = np.sort(scores_av_obj.flatten())[-n//4:].mean()

if args.relative_prediction:
    # Min-Max normalize each heatmap, then set the threshold at the
    # args.pred_size quantile of the normalized values.
    pred_av = utils.normalize_img(scores_av)
    pred_obj = utils.normalize_img(scores_obj)
    pred_av_obj = utils.normalize_img(scores_av_obj)
    thr_av = np.sort(pred_av.flatten())[int(n * args.pred_size)]
    thr_obj = np.sort(pred_obj.flatten())[int(n * args.pred_size)]
    thr_av_obj = np.sort(pred_av_obj.flatten())[int(n * args.pred_size)]
else:
    # Use the raw scores with a fixed absolute threshold.
    pred_av = scores_av
    pred_obj = scores_obj
    pred_av_obj = scores_av_obj
    thr_av = thr_obj = thr_av_obj = args.pred_thr

evaluator_av.update(bb, gt_map, conf_av, pred_av, thr_av, name[i])
evaluator_obj.update(bb, gt_map, conf_obj, pred_obj, thr_obj, name[i])
evaluator_av_obj.update(bb, gt_map, conf_av_obj, pred_av_obj, thr_av_obj, name[i])
```
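To make the concern concrete, here is a minimal self-contained sketch, assuming `utils.normalize_img` is a standard per-image Min-Max normalization (the `normalize_img` below is a hypothetical stand-in, not copied from the repo). It shows that the top-25% mean computed on the raw scores lives on a different scale than the same statistic computed on the normalized heatmap, which is why the confidence and the normalized predictions may no longer be comparable:

```python
import numpy as np

def normalize_img(x):
    # Assumed Min-Max normalization (hypothetical stand-in for
    # utils.normalize_img): rescales the heatmap to [0, 1].
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

rng = np.random.default_rng(0)
scores = rng.normal(loc=5.0, scale=2.0, size=(224, 224))  # fake raw model output
n = scores.size

# Confidence as in test.py: mean of the top 25% of the raw scores.
conf_raw = np.sort(scores.flatten())[-n//4:].mean()

# The same statistic after Min-Max normalization lands in [0, 1],
# so its scale no longer matches conf_raw.
pred = normalize_img(scores)
conf_norm = np.sort(pred.flatten())[-n//4:].mean()

print(conf_raw, conf_norm)  # different scales: raw ~7.x vs normalized <1.0
```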