#include <nn_ops.h>
Computes softmax cross entropy cost and gradients to backpropagate.

Unlike SoftmaxCrossEntropyWithLogits, this operation does not accept a matrix of label probabilities, but rather a single label per row of features. This label is considered to have probability 1.0 for the given row.

Inputs are the logits, not probabilities.
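For reference, a sketch of the standard softmax cross-entropy formulas the two outputs correspond to; the symbols x_i (row i of features) and y_i (its label) are introduced here for illustration and are not part of the API:

```latex
\mathrm{loss}_i = -\log\big(\mathrm{softmax}(x_i)_{y_i}\big),
\qquad
\mathrm{backprop}_i = \mathrm{softmax}(x_i) - \mathrm{onehot}(y_i)
```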
Arguments:
- scope: A Scope object
- features: batch_size x num_classes matrix
- labels: batch_size vector with values in [0, num_classes). This is the label for the given minibatch entry.
Returns:
- Output loss: Per example loss (batch_size vector).
- Output backprop: backpropagated gradients (batch_size x num_classes matrix).

Constructors and Destructors

SparseSoftmaxCrossEntropyWithLogits(const ::tensorflow::Scope & scope, ::tensorflow::Input features, ::tensorflow::Input labels)
Public attributes

| Attribute | Type |
|---|---|
| backprop | ::tensorflow::Output |
| loss | ::tensorflow::Output |
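A minimal usage sketch, assuming the standard TensorFlow C++ client headers and a working build setup; the tensor values are illustrative:

```cpp
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  tensorflow::Scope scope = tensorflow::Scope::NewRootScope();

  // Logits: a batch_size x num_classes (2 x 3) matrix of unnormalized scores.
  auto features = tensorflow::ops::Const(scope, {{1.0f, 2.0f, 3.0f},
                                                 {3.0f, 1.0f, 0.5f}});
  // Labels: one class index per row, each in [0, num_classes).
  auto labels = tensorflow::ops::Const(scope, {2, 0});

  auto xent = tensorflow::ops::SparseSoftmaxCrossEntropyWithLogits(
      scope, features, labels);

  tensorflow::ClientSession session(scope);
  std::vector<tensorflow::Tensor> outputs;
  // Fetch both outputs: per-example loss and the gradient w.r.t. features.
  TF_CHECK_OK(session.Run({xent.loss, xent.backprop}, &outputs));

  std::cout << "loss: " << outputs[0].DebugString() << "\n";      // shape [2]
  std::cout << "backprop: " << outputs[1].DebugString() << "\n";  // shape [2, 3]
  return 0;
}
```

Note that both outputs are exposed as public attributes of the op object, so `xent.loss` and `xent.backprop` can be fetched in a single `ClientSession::Run` call.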