Non-exhaustive list of possible optimizations and refactors #78

@MatthieuHernandez

Description
  • Factor out the shared logic of back(float error) and train() between SimpleNeuron and RecurrentNeuron (see the first sketch after this list)
  • Maybe rename computeBackOutput and computeTrain
  • Decide how this->errors should be handled in operator== (see the operator== sketch below)
  • Default operator== where possible
  • Add a ResetLearningVars function
  • Remove the learning variables from the Archive
  • Add a real batchSize()
  • Make the literal operators consteval (see the consteval sketch below)
  • previousOutput is not used in GRU
  • The Tensor class could be given a private default constructor with boost::serialization declared as a friend, or something along those lines (see the last sketch below)
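
A possible shape for the first item, assuming the duplicated code in back() and train() differs only in the inner computation. Everything below (the NeuronBase name, the lastError member, the signatures) is an illustrative guess, not the library's actual class layout:

```cpp
#include <vector>

// Hypothetical base class: the shared parts of back() and train() live here,
// while each neuron type only overrides the virtual compute* hooks.
class NeuronBase
{
public:
    virtual ~NeuronBase() = default;

    float back(float error)
    {
        this->lastError = error;                 // illustrative shared bookkeeping
        return this->computeBackOutput(error);   // type-specific part
    }

    void train(float error)
    {
        this->computeTrain(error);               // type-specific part
    }

protected:
    virtual float computeBackOutput(float error) = 0;
    virtual void computeTrain(float error) = 0;

    float lastError = 0.0f;                      // assumed member, for illustration
};

class SimpleNeuron : public NeuronBase
{
protected:
    float computeBackOutput(float error) override { /* ... */ return error; }
    void computeTrain(float /*error*/) override { /* ... */ }
};

class RecurrentNeuron : public NeuronBase
{
protected:
    float computeBackOutput(float error) override { /* ... */ return error; }
    void computeTrain(float /*error*/) override { /* ... */ }
};
```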
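
For the two operator== items: C++20 lets the comparison be defaulted, but a defaulted operator== compares every non-static member, so the question about this->errors has to be answered first. If errors (a learning variable) should not participate, the operator cannot simply be defaulted. Minimal sketch with made-up members:

```cpp
#include <vector>

struct SimpleNeuron  // illustrative members only
{
    std::vector<float> weights;
    float bias = 0.0f;
    std::vector<float> errors;  // learning variable: included by a defaulted comparison

    // Compares weights, bias AND errors member-wise (C++20).
    bool operator==(const SimpleNeuron&) const = default;
};
```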
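
For the consteval item: marking a user-defined literal operator consteval (C++20) forces it to be evaluated at compile time. The _px suffix below is hypothetical, since the issue does not name the library's actual literals:

```cpp
#include <cstddef>

// Hypothetical user-defined literal; the real suffix names in the library may differ.
// consteval guarantees the conversion happens at compile time.
consteval std::size_t operator""_px(unsigned long long value)
{
    return static_cast<std::size_t>(value);
}

constexpr std::size_t width = 640_px;  // evaluated at compile time
```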
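
And for the last item, the usual Boost.Serialization pattern is to keep the default constructor private and grant friend access to boost::serialization::access, so only the archive machinery can default-construct the object. Whether this fits the actual Tensor class is a guess based on the note; the members below are placeholders:

```cpp
#include <utility>
#include <vector>
#include <boost/serialization/access.hpp>
#include <boost/serialization/vector.hpp>

class Tensor
{
public:
    explicit Tensor(std::vector<int> shape)
        : shape(std::move(shape)) {}

private:
    friend class boost::serialization::access;

    Tensor() = default;  // only reachable by the serialization machinery

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & shape;
        ar & data;
    }

    std::vector<int> shape;   // illustrative members
    std::vector<float> data;
};
```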
