
Fix Potential NAN bug #1

Open
Justobe wants to merge 1 commit into Joooey:master from Justobe:master

Conversation


Justobe commented Nov 19, 2020

[Potential NaN bug] Loss may become NaN during training

Hello~

Thank you very much for sharing the code!

I tried to use my own data set (with the same shape as MNIST) in the code. After some iterations, I found that the training loss becomes NaN. After carefully checking the code, I found that the following line may trigger NaN in the loss:

In Tensorflow_gesture/Demo/Mnist.py

cross_entropy = -tf.reduce_sum(y_label * tf.log(y))

If y contains 0 (it is the output of softmax, which can underflow to exactly 0), then tf.log(y) is -inf because log(0) is undefined. Multiplying that -inf by the label and summing can then produce inf or NaN (e.g. 0 * -inf = NaN), so the loss becomes NaN.

It could be fixed by either of the following changes:

cross_entropy = -tf.reduce_sum(y_label * tf.log(y + 1e-7))

or

cross_entropy = -tf.reduce_sum(y_label * tf.log(tf.clip_by_value(y,1e-7,1.0)))

Hope to hear from you ~

Thanks in advance! : )
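To illustrate the numerical issue independently of TensorFlow, here is a small NumPy sketch (my own toy values, not from the repository) showing how an exact zero in the softmax output breaks the unprotected cross-entropy, while clipping, mirroring tf.clip_by_value(y, 1e-7, 1.0), keeps the loss finite:

```python
import numpy as np

# Hypothetical one-hot label and a softmax output that has
# underflowed to exactly 0 for the true class.
y_label = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 0.5, 0.5])

# Unprotected cross-entropy: log(0) = -inf, so the loss blows up.
with np.errstate(divide="ignore"):
    bad_loss = -np.sum(y_label * np.log(y))

# Clipped version, analogous to tf.clip_by_value(y, 1e-7, 1.0):
safe_loss = -np.sum(y_label * np.log(np.clip(y, 1e-7, 1.0)))

print(np.isfinite(bad_loss))   # False
print(np.isfinite(safe_loss))  # True
```

The small-epsilon variant (tf.log(y + 1e-7)) works the same way; clipping additionally caps y at 1.0 so the loss can never go negative due to values slightly above 1.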

Loss may become NAN during training

Justobe commented Jan 13, 2021

@Joooey :)
