pyanomaly.datatools.evaluate package

Submodules

pyanomaly.datatools.evaluate.eval_function module

The evaluation API of the anomaly detection framework. The results are stored in the following structure, and the metrics are computed offline:

{

    'dataset': the name of the dataset,
    'psnr': the PSNR of each testing video,

}

class pyanomaly.datatools.evaluate.eval_function.ScoreAUCMetrics(cfg, is_training)

Bases: pyanomaly.datatools.abstract.abstract_evaluate_method.AbstractEvalMethod

compute(result_file_dict)

result_file_dict = {

    'train': {description1: sigma0_result_file, description2: sigma1_result_file, ...},
    'val': {description1: sigma0_result_file, description2: sigma1_result_file}

}

eval_method(result, gt, verbose)

The actual method that computes the evaluation metrics.

load_ground_truth()

Loads the ground truth (gt) of the dataset.

load_results(result_file)

results' format:

{

    'dataset': the name of the dataset,
    'psnr': the PSNR of each testing video,  # will be deprecated in the future; only the 'score' key will be kept
    'flow': [],
    'names': [],
    'diff_mask': [],
    'score': the score of each testing video,
    'num_videos': the number of videos

}
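As a sketch of what load_results might do with a result file (the helper name and the sanity check below are assumptions, not pyanomaly code), a pickled dict with the documented keys can be loaded and validated like this:

```python
import os
import pickle
import tempfile

def load_results_sketch(result_file):
    """Load a pickled result dict and sanity-check the documented keys.

    Hypothetical helper: the key set mirrors the format documented above.
    """
    with open(result_file, 'rb') as f:
        results = pickle.load(f)
    expected = {'dataset', 'psnr', 'flow', 'names', 'diff_mask', 'score', 'num_videos'}
    missing = expected - results.keys()
    if missing:
        raise KeyError(f'result file is missing keys: {missing}')
    return results

# Round-trip demo with a dummy result file.
dummy = {'dataset': 'avenue', 'psnr': [[30.1, 28.5]], 'flow': [], 'names': ['01'],
         'diff_mask': [], 'score': [[0.9, 0.4]], 'num_videos': 1}
path = os.path.join(tempfile.mkdtemp(), 'result.pkl')
with open(path, 'wb') as f:
    pickle.dump(dummy, f)
loaded = load_results_sketch(path)
```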

pyanomaly.datatools.evaluate.utils module

pyanomaly.datatools.evaluate.utils.amc_normal_score(wf, sf, wi, si, lambada_s=0.2)
pyanomaly.datatools.evaluate.utils.amc_score(frame, frame_hat, flow, flow_hat, wf, wi, kernel_size=16, stride=4, lambada_s=0.2)

wf and wi differ from video to video.
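The exact weighting used by amc_normal_score is not spelled out here; as a rough, hypothetical sketch, a per-video weighted combination of appearance and motion scores might look like the following (the function name and the formula are assumptions, not the library's implementation):

```python
def weighted_anomaly_score(wf, sf, wi, si):
    """Hypothetical weighted sum of flow (motion) and image (appearance) scores.

    wf, wi -- per-video weights for the flow and image terms (assumed semantics)
    sf, si -- the corresponding per-frame scores
    """
    return wf * sf + wi * si

# e.g. a frame scored 0.8 on appearance and 0.6 on motion, weights 0.5 each
score = weighted_anomaly_score(0.5, 0.6, 0.5, 0.8)
```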

pyanomaly.datatools.evaluate.utils.average_psnr(loss_file, cfg)
pyanomaly.datatools.evaluate.utils.cal_eer(fpr, tpr)
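cal_eer presumably derives the equal error rate from an ROC curve. A minimal numpy sketch (not pyanomaly's implementation; it assumes fpr and tpr are the arrays returned by an ROC routine) picks the point where the false positive rate and the false negative rate (1 - tpr) are closest:

```python
import numpy as np

def eer_from_roc(fpr, tpr):
    """Equal error rate: the ROC point where FPR == 1 - TPR.

    A sketch: take the point minimising |fpr - (1 - tpr)| and
    average the two rates there.
    """
    fpr = np.asarray(fpr, dtype=float)
    tpr = np.asarray(tpr, dtype=float)
    fnr = 1.0 - tpr  # false negative rate
    idx = np.argmin(np.abs(fpr - fnr))
    return float((fpr[idx] + fnr[idx]) / 2.0)

# A toy ROC curve whose FPR/FNR crossing is at 0.2.
eer = eer_from_roc([0.0, 0.2, 1.0], [0.0, 0.8, 1.0])
```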
pyanomaly.datatools.evaluate.utils.calc_w(w_dict)
pyanomaly.datatools.evaluate.utils.calculate_psnr(loss_file, logger, cfg)
pyanomaly.datatools.evaluate.utils.compute_auc_psnr(loss_file, logger, cfg, score_type='normal')

For PSNR, score_type is always 'normal', meaning that a higher PSNR implies higher normality.

pyanomaly.datatools.evaluate.utils.compute_auc_score(loss_file, logger, cfg, score_type='normal')
score_type:

    normal --> pos_label=0
    abnormal --> pos_label=1

In the dataset, 0 means normal and 1 means abnormal.
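As a sketch of the AUC computation behind compute_auc_score (the helper below is an assumption, not pyanomaly code), the AUC can be computed from scores and binary labels once pos_label selects the positive class; a rank-based formula avoids any dependency on scikit-learn:

```python
import numpy as np

def auc_sketch(scores, labels, pos_label=1):
    """Rank-based AUC: probability that a positive outranks a negative.

    Hypothetical helper. Labels follow the dataset convention
    (0 = normal, 1 = abnormal); pos_label picks the positive class.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == pos_label]
    neg = scores[labels != pos_label]
    # Count positive/negative pairs where the positive scores higher;
    # ties contribute half a win.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Abnormal frames (label 1) all get higher anomaly scores -> AUC of 1.0.
auc = auc_sketch([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], pos_label=1)
```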

pyanomaly.datatools.evaluate.utils.compute_eer(loss_file, cfg)
pyanomaly.datatools.evaluate.utils.find_max_patch(diff_map_appe, diff_map_flow, kernel_size=16, stride=4, aggregation=True)

kernel_size is the sliding-window size.
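find_max_patch presumably slides a kernel_size window with the given stride over the difference maps and keeps the strongest response. A minimal numpy sketch for a single map (the function name and the mean-response aggregation are assumptions):

```python
import numpy as np

def max_patch_response(diff_map, kernel_size=16, stride=4):
    """Largest mean response of any kernel_size x kernel_size window
    slid over diff_map with the given stride (a sketch, not the
    library's implementation)."""
    h, w = diff_map.shape
    best = -np.inf
    for y in range(0, h - kernel_size + 1, stride):
        for x in range(0, w - kernel_size + 1, stride):
            patch = diff_map[y:y + kernel_size, x:x + kernel_size]
            best = max(best, float(patch.mean()))
    return best

# A 32x32 map that is zero except for a bright 16x16 corner.
m = np.zeros((32, 32))
m[:16, :16] = 1.0
peak = max_patch_response(m, kernel_size=16, stride=4)
```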

pyanomaly.datatools.evaluate.utils.get_scores_labels(loss_file, cfg)

Uses the PSNR to compute the score of each video.
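get_scores_labels converts per-frame PSNR values into normality scores. A common convention in this literature (assumed here, not confirmed by the source) is per-video min-max normalisation, so a higher PSNR maps to a score closer to 1:

```python
import numpy as np

def psnr_to_scores(psnr_values):
    """Min-max normalise a video's PSNR values into [0, 1] scores.

    Assumed convention: higher PSNR -> higher normality score.
    """
    p = np.asarray(psnr_values, dtype=float)
    return (p - p.min()) / (p.max() - p.min())

scores = psnr_to_scores([20.0, 30.0, 40.0])
```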

pyanomaly.datatools.evaluate.utils.load_pickle_results(loss_file, cfg)
pyanomaly.datatools.evaluate.utils.oc_score(raw_data)
pyanomaly.datatools.evaluate.utils.precision_recall_auc(loss_file, cfg)
pyanomaly.datatools.evaluate.utils.psnr_error(gen_frames, gt_frames, hat=False)

Computes the Peak Signal to Noise Ratio error between the generated images and the ground-truth images.

@param gen_frames: A tensor of shape [batch_size, height, width, 3]. The frames generated by the generator model.

@param gt_frames: A tensor of shape [batch_size, height, width, 3]. The ground-truth frames for each frame in gen_frames.

@return: A scalar tensor. The mean Peak Signal to Noise Ratio error over each frame in the batch.
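A numpy sketch of the PSNR computation described above (assuming pixel values scaled to [0, 1]; pyanomaly's version operates on tensors):

```python
import numpy as np

def psnr(gen_frames, gt_frames, max_val=1.0):
    """Mean PSNR over a batch, assuming pixel values in [0, max_val].

    gen_frames, gt_frames -- arrays of shape [batch, height, width, channels]
    """
    # Per-frame mean squared error over spatial and channel axes.
    mse = np.mean((gen_frames - gt_frames) ** 2, axis=(1, 2, 3))
    return float(np.mean(10.0 * np.log10(max_val ** 2 / mse)))

# A uniform error of 0.1 on every pixel gives MSE = 0.01,
# so PSNR = 10 * log10(1 / 0.01) = 20 dB.
gt = np.zeros((1, 4, 4, 3))
gen = gt + 0.1
value = psnr(gen, gt)
```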

pyanomaly.datatools.evaluate.utils.reconstruction_loss(x_hat, x)

The input is a video clip, and the RL is used as the score. RL := Reconstruction Loss.
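A minimal sketch of a reconstruction-loss score (MSE between a clip and its reconstruction; the exact norm pyanomaly uses is not specified here, so MSE is an assumption):

```python
import numpy as np

def reconstruction_loss(x_hat, x):
    """Mean squared reconstruction error between a clip x and its
    reconstruction x_hat (MSE is an assumed choice of norm)."""
    return float(np.mean((np.asarray(x_hat) - np.asarray(x)) ** 2))

# A perfect reconstruction has zero loss.
loss = reconstruction_loss([[1.0, 2.0]], [[1.0, 2.0]])
```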

pyanomaly.datatools.evaluate.utils.simple_diff(frame_true, frame_hat, flow_true, flow_hat, aggregation=False)

Module contents