class tf.train.CheckpointSaverHook

See the guide: Training > Training Hooks
Saves checkpoints every N steps or seconds.
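A minimal usage sketch (TF 1.x graph-mode API; the checkpoint directory, step counts, explicit Saver, and the StopAtStepHook are illustrative assumptions, not part of this page). The hook is passed to a tf.train.MonitoredTrainingSession, which invokes it as training runs:

import tensorflow as tf

# Build a trivial graph: incrementing the global step doubles as the "training op" here.
global_step = tf.train.get_or_create_global_step()
train_op = tf.assign_add(global_step, 1)

# Save a checkpoint to /tmp/my_model every 1000 global steps.
saver_hook = tf.train.CheckpointSaverHook(
    checkpoint_dir='/tmp/my_model',
    save_steps=1000,
    checkpoint_basename='model.ckpt',
    saver=tf.train.Saver())  # an explicit Saver; a Scaffold can be used instead

# StopAtStepHook just keeps this illustration finite.
hooks = [saver_hook, tf.train.StopAtStepHook(last_step=5000)]

with tf.train.MonitoredTrainingSession(hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)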
__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None, listeners=None)

Initialize CheckpointSaverHook monitor.
Args:
  checkpoint_dir: str, base directory for the checkpoint files.
  save_secs: int, save every N secs.
  save_steps: int, save every N steps.
  saver: Saver object, used for saving.
  checkpoint_basename: str, base name for the checkpoint files.
  scaffold: Scaffold, used to get the saver object.
  listeners: List of CheckpointSaverListener subclass instances. Used for callbacks that run immediately after the corresponding CheckpointSaverHook callbacks, only in steps where the CheckpointSaverHook was triggered (see the listener sketch after the method list).

Raises:
  ValueError: One of save_steps or save_secs should be set.
  ValueError: Exactly one of saver or scaffold should be set.

after_create_session(session, coord)

Called when a new TensorFlow session is created.
This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called:

  * When this is called, the graph is finalized and ops can no longer be added to it.
  * This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args:
  session: A TensorFlow Session that has been created.
  coord: A Coordinator object which keeps track of all threads.

after_run(run_context, run_values)

before_run(run_context)

begin()

end(session)

Defined in tensorflow/python/training/basic_session_run_hooks.py.
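As a sketch of the listeners argument, the class below assumes the CheckpointSaverListener interface from basic_session_run_hooks.py (before_save/after_save callbacks that receive the global step value); the exact method names and the tf.train export may vary by TensorFlow version:

import tensorflow as tf

class LoggingCheckpointListener(tf.train.CheckpointSaverListener):
    """Logs a message around every checkpoint written by the hook."""

    def before_save(self, session, global_step_value):
        print('About to write a checkpoint at step %d' % global_step_value)

    def after_save(self, session, global_step_value):
        print('Checkpoint written at step %d' % global_step_value)

# The listener callbacks fire only in steps where the hook itself triggers a save.
saver_hook = tf.train.CheckpointSaverHook(
    checkpoint_dir='/tmp/my_model',
    save_steps=1000,
    listeners=[LoggingCheckpointListener()])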
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/train/CheckpointSaverHook