WIP Don't merge yet #614

Open

brandon-edwards wants to merge 257 commits into mlcommons:main from hasan7n:be_enable_partial_epochs

Conversation

@brandon-edwards
Contributor

...

brandon-edwards and others added 30 commits October 27, 2024 13:47
instead of division by the amount of training completed when modifying data
size for update weighting, also allowing col1 to be admin for testing
admin control of the dampener as well as train and val cutoff times for
train (training and local model validation)
to 50 and 250 respectively. Timeouts will still apply to stop early if
training is taking too long. The amount of training completed will be computed
from these new maxes, i.e. a training_completed of 1.0 means all of the
250 batches were trained. With this change we want model updates to
be weighted according to local data size, so it is important that
the training config sets train_completion_dampener to 0.0.
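A minimal sketch of how such weighting could behave, assuming the update weight is the local data size scaled by training_completed raised to the dampener exponent; the helper names and the exact formula below are assumptions inferred from these commit messages, not the repository's actual implementation:

```python
# Hypothetical sketch: collaborator update weighting with a completion dampener.
# Assumed formula: weight = data_size * training_completed ** train_completion_dampener,
# so a dampener of 0.0 makes the weight depend only on local data size.

def training_completed(batches_trained: int, max_train_batches: int = 250) -> float:
    """Fraction of the per-round training budget that was actually run."""
    return min(batches_trained / max_train_batches, 1.0)

def update_weight(data_size: int,
                  completed: float,
                  train_completion_dampener: float = 0.0) -> float:
    """Weight applied to a collaborator's model update before aggregation."""
    return data_size * (completed ** train_completion_dampener)

# Example: a collaborator that finished only 100 of the 250 training batches.
completed = training_completed(batches_trained=100)      # 0.4
w = update_weight(data_size=1200, completed=completed)   # 1200.0 when dampener is 0.0
```

Under this reading, setting the dampener to 0.0 neutralizes the completion factor, which matches the stated intent of counting updates purely by local data size.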
…s related to his new initial model:

order = t1 t2 flair t1c
