tf.fake_quant_with_min_max_vars_per_channel_gradient(gradients, inputs, min, max, name=None)

See the guide: Tensor Transformations > Fake quantization
Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
Args:

- gradients: A Tensor of type float32. Backpropagated gradients above the FakeQuantWithMinMaxVarsPerChannel operation, with shape one of: [d], [b, d], or [b, h, w, d].
- inputs: A Tensor of type float32. Values passed as inputs to the FakeQuantWithMinMaxVarsPerChannel operation, same shape as gradients.
- min: A Tensor of type float32. Lower bound of the quantization interval, shape [d].
- max: A Tensor of type float32. Upper bound of the quantization interval, shape [d].
- name: A name for the operation (optional).

Returns:

A tuple of Tensor objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).

- backprops_wrt_input: A Tensor of type float32. Backpropagated gradients w.r.t. inputs, same shape as inputs: gradients * (inputs >= min && inputs <= max).
- backprop_wrt_min: A Tensor of type float32. Backpropagated gradients w.r.t. the min parameter, shape [d]: sum_per_d(gradients * (inputs < min)).
- backprop_wrt_max: A Tensor of type float32. Backpropagated gradients w.r.t. the max parameter, shape [d]: sum_per_d(gradients * (inputs > max)).
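The three return values above follow directly from the straight-through-estimator formulas. As a minimal sketch of that per-channel math (pure Python with nested lists standing in for float32 Tensors, no TensorFlow required; the function name and [b, d] layout are illustrative assumptions, not part of the API):

```python
def fake_quant_per_channel_grad(gradients, inputs, min_vals, max_vals):
    """Sketch of the gradient formulas documented above.

    gradients, inputs: [b][d] lists of floats (same shape).
    min_vals, max_vals: [d] lists of floats (one interval per channel d).
    Returns (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
    """
    d = len(min_vals)
    # gradients * (inputs >= min && inputs <= max), same shape as inputs:
    # the gradient passes through only where the input lies inside the interval.
    backprop_input = [
        [g if min_vals[j] <= x <= max_vals[j] else 0.0
         for j, (g, x) in enumerate(zip(grad_row, in_row))]
        for grad_row, in_row in zip(gradients, inputs)
    ]
    # sum_per_d(gradients * (inputs < min)): per-channel sum over the batch
    # of gradients at positions clipped from below.
    backprop_min = [
        sum(grad_row[j] for grad_row, in_row in zip(gradients, inputs)
            if in_row[j] < min_vals[j])
        for j in range(d)
    ]
    # sum_per_d(gradients * (inputs > max)): same, for positions clipped from above.
    backprop_max = [
        sum(grad_row[j] for grad_row, in_row in zip(gradients, inputs)
            if in_row[j] > max_vals[j])
        for j in range(d)
    ]
    return backprop_input, backprop_min, backprop_max


# Example: batch of 2, 3 channels, interval [-1, 1] for every channel.
gi, gmin, gmax = fake_quant_per_channel_grad(
    gradients=[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
    inputs=[[-2.0, 0.5, 3.0], [0.0, 0.0, 0.0]],
    min_vals=[-1.0, -1.0, -1.0],
    max_vals=[1.0, 1.0, 1.0],
)
```

Out-of-range inputs (the -2.0 and 3.0 entries) contribute nothing to backprops_wrt_input; instead their gradients accumulate into the min and max gradients for their channel.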
Defined in tensorflow/python/ops/gen_array_ops.py.
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/fake_quant_with_min_max_vars_per_channel_gradient