pyanomaly.core.engine.abstract package¶
Submodules¶
pyanomaly.core.engine.abstract.abstract_engine module¶
@author: Yuhao Cheng @contact: yuhao.cheng[at]outlook.com
-
class pyanomaly.core.engine.abstract.abstract_engine.AbstractInference(*defaults, **kwargs)¶
Bases: object
-
after_inference()¶
-
before_inference()¶
-
inference()¶
-
run()¶ The basic loop that implements the algorithm.
-
set_requires_grad(nets, requires_grad=False)¶ Args:
nets (list): a list of networks
requires_grad (bool): whether the networks require gradients or not
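The signature above suggests the usual gradient-freezing helper seen in GAN-style training; a minimal sketch under that assumption (the generator/discriminator models below are illustrative, not part of pyanomaly):

```python
import torch.nn as nn

def set_requires_grad(nets, requires_grad=False):
    """Enable or disable gradients for a list of networks (sketch of the documented API)."""
    if not isinstance(nets, list):
        nets = [nets]
    for net in nets:
        for param in net.parameters():
            param.requires_grad = requires_grad

# Illustrative models, not part of pyanomaly.
generator = nn.Linear(8, 8)
discriminator = nn.Linear(8, 1)

# Freeze the discriminator while the generator is being updated.
set_requires_grad([discriminator], requires_grad=False)
```

Freezing via `requires_grad` stops autograd from building graph history for those parameters, which is cheaper than filtering them out of the optimizer each step.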
-
-
class pyanomaly.core.engine.abstract.abstract_engine.AbstractTrainer(*args, **kwargs)¶
Bases: object
Abstract trainer class. The abstract definition of the methods and framework used during the training process. All trainers must be subclasses of this class.
-
after_step(current_step)¶ The function executed after each step.
-
after_train()¶ The function executed after the train function.
-
before_step(current_step)¶ The function executed before each step.
-
before_train()¶ The function executed before the train function.
-
run(start_iter, max_iter)¶ Start the training process; the basic loop of training.
-
set_requires_grad(nets, requires_grad=False)¶ Change the state of gradients. Args:
nets (list): a list of networks
requires_grad (bool): whether the networks require gradients or not
-
train(current_step)¶ The single step of training the model.
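The hook methods above (before_train, before_step, train, after_step, after_train) suggest a template-method training loop. A pure-Python sketch of that contract, not the actual pyanomaly implementation:

```python
class SketchTrainer:
    """Illustrative trainer following the before/after hook contract described above."""

    def __init__(self):
        self.calls = []  # record the order in which hooks fire

    def before_train(self):
        self.calls.append("before_train")

    def before_step(self, current_step):
        self.calls.append(f"before_step:{current_step}")

    def train(self, current_step):
        # A real subclass would run forward/backward/optimizer.step() here.
        self.calls.append(f"train:{current_step}")

    def after_step(self, current_step):
        self.calls.append(f"after_step:{current_step}")

    def after_train(self):
        self.calls.append("after_train")

    def run(self, start_iter, max_iter):
        # The basic loop: hooks wrap every training step.
        self.before_train()
        for step in range(start_iter, max_iter):
            self.before_step(step)
            self.train(step)
            self.after_step(step)
        self.after_train()

trainer = SketchTrainer()
trainer.run(0, 2)
```

Subclasses override only the hooks they need (logging, LR scheduling, checkpointing) while run() keeps the loop shape fixed.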
-
pyanomaly.core.engine.abstract.base_engine module¶
@author: Yuhao Cheng @contact: yuhao.cheng[at]outlook.com
-
class pyanomaly.core.engine.abstract.base_engine.BaseInference(*defaults, **kwargs)¶
Bases: pyanomaly.core.engine.abstract.abstract_engine.AbstractInference
-
abstract
custom_setup()¶
-
data_parallel(model)¶ Data-parallelize the model.
-
abstract
inference(current_step)¶
-
load_pretrain()¶
-
set_all(is_train)¶ Set all of the models in this engine to eval or train mode. Args:
is_train (bool): True = train mode; False = eval mode
Returns:
None
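set_all presumably iterates over every held model and toggles its mode; a sketch under that assumption (the model dict below is illustrative, not part of pyanomaly):

```python
import torch.nn as nn

def set_all(models, is_train):
    """Switch a dict of models to train (True) or eval (False) mode — a sketch of the documented behaviour."""
    for model in models.values():
        if is_train:
            model.train()
        else:
            model.eval()

# Illustrative models, not part of pyanomaly.
models = {"generator": nn.Linear(4, 4), "discriminator": nn.Dropout(p=0.5)}
set_all(models, is_train=False)  # eval mode for inference
```

Switching matters for layers like Dropout and BatchNorm, whose forward behaviour differs between train and eval mode.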
-
class pyanomaly.core.engine.abstract.base_engine.BaseTrainer(*defaults, **kwargs)¶
Bases: pyanomaly.core.engine.abstract.abstract_engine.AbstractTrainer
-
after_step(current_step)¶ The function executed after each step.
-
after_train()¶ The function executed after the train function.
-
before_step(current_step)¶ The function executed before each step.
-
abstract
custom_setup()¶
-
data_parallel(model)¶ Data-parallelize the model using torch.nn.DataParallel. Args:
model (torch.nn.Module): the model to wrap
Returns:
model_parallel
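Since the docstring names torch.nn.DataParallel, the method is presumably a thin wrapper around it; a sketch (DataParallel falls back to plain single-device execution when no GPUs are visible):

```python
import torch
import torch.nn as nn

def data_parallel(model):
    """Wrap the model in torch.nn.DataParallel, as the docstring describes (sketch)."""
    model_parallel = nn.DataParallel(model)
    return model_parallel

model = nn.Linear(16, 2)  # illustrative model, not part of pyanomaly
model_parallel = data_parallel(model)
out = model_parallel(torch.randn(4, 16))
```

With GPUs present, DataParallel splits the batch across devices and gathers outputs on the default device; the wrapped module stays accessible as `model_parallel.module`.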
-
fine_tune()¶
-
load_pretrain()¶
-
resume()¶
-
save(current_epoch, best=False)¶ self.saved_model: the model or a dict of models; self.saved_optimizer: the optimizer or a dict of optimizers; self.saved_loss: the loss or a dict of losses.
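The docstring says each saved attribute may be a single object or a dict of objects, so the checkpoint logic presumably handles both shapes. A hedged sketch of such a checkpoint (file path, function name, and checkpoint keys are assumptions, not pyanomaly's actual format):

```python
import os
import tempfile
import torch
import torch.nn as nn

def save_checkpoint(path, current_epoch, saved_model, saved_optimizer, best=False):
    """Serialize models/optimizers that may be single objects or dicts of objects (sketch)."""
    def state(obj):
        # Handle both the single-object and dict-of-objects cases described above.
        if isinstance(obj, dict):
            return {name: o.state_dict() for name, o in obj.items()}
        return obj.state_dict()

    checkpoint = {
        "epoch": current_epoch,
        "model": state(saved_model),
        "optimizer": state(saved_optimizer),
        "best": best,
    }
    torch.save(checkpoint, path)

# Illustrative objects, not part of pyanomaly.
model = {"generator": nn.Linear(4, 4)}
optimizer = torch.optim.Adam(model["generator"].parameters())
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth")
save_checkpoint(path, current_epoch=3, saved_model=model, saved_optimizer=optimizer)
checkpoint = torch.load(path)
```

Saving state_dicts rather than pickled modules keeps checkpoints portable across code changes; resume() and load_pretrain() would read the same structure back.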
-
set_all(is_train)¶ Set all of the models in this trainer to eval or train mode. Args:
is_train (bool): True = train mode; False = eval mode
Returns:
None
-
abstract
train(current_step)¶ The single step of training the model.
-