Hi, thank you for the code!
I have a small comment:
For the one-hot encoding of the labels, why are the encodings shifted by subtracting 0.5 (Ygood, Ybad)? In the accuracy test run every 10 epochs, however, the variable 'unq_oh' is the one-hot encoding without the 0.5 subtracted. Might this be a mistake?
I tested the code without subtracting 0.5 from the encodings and it also works; the shift does not seem to affect the training accuracy.
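For reference, here is a minimal sketch of the two encodings in question. NumPy and the example labels are my own assumptions; Ygood, Ybad, and unq_oh are just the names from your code, and I don't know how the accuracy test actually compares targets. It illustrates why, if the comparison is argmax-based, the constant shift would not change the result:

```python
import numpy as np

# Hypothetical binary labels, just for illustration.
labels = np.array([1, 0, 1, 1, 0])

# Plain one-hot encoding, as 'unq_oh' appears to be in the accuracy test.
onehot = np.eye(2)[labels]      # rows like [0., 1.] or [1., 0.]

# Shifted encoding as used for training (Ygood, Ybad): entries become +/-0.5.
shifted = onehot - 0.5          # rows like [-0.5, 0.5]

# Subtracting a constant from every entry does not change which entry is
# largest, so an argmax-based accuracy check gives the same result either way.
assert np.array_equal(onehot.argmax(axis=1), shifted.argmax(axis=1))
```

If the test instead compared raw output values against unq_oh, the shift would matter, which is why I wanted to check.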
Please let me know if I have misunderstood anything. Thank you!
Best,
Xing