Ensemble #19

Open

davek44 wants to merge 72 commits into master from ensemble

Conversation

@davek44 (Contributor) commented Aug 4, 2018

No description provided.

Comment thread basenji/augmentation.py
Args:
data_ops: dict with keys 'sequence,' 'label,' and 'na.'
augment_rc: Boolean
augment_shifts: Int

should be 'augment_shift'

Comment thread basenji/augmentation.py
augment_rc: Boolean
augment_shifts: Int
Returns
data_ops: augmented data

can you document what fields get added to the dict?
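For what it's worth, the snippet later in this PR shows the rc branch adding a `reverse_preds` entry, so the requested documentation might look like this — a sketch only, assuming `reverse_preds` is the field in question and using an illustrative function name/signature, not necessarily the PR's:

```python
def augment_stochastic(data_ops, augment_rc=False, augment_shifts=[0]):
    """Apply stochastic training-time augmentations to a batch.

    Args:
      data_ops: dict with keys 'sequence', 'label', and 'na'.
      augment_rc: Boolean, stochastically reverse-complement the sequence.
      augment_shifts: list of int shifts to sample from.

    Returns:
      data_ops: augmented dict. The rc path additionally adds
        'reverse_preds': a scalar bool tensor, True when this example
        was reverse-complemented (so its predictions must be flipped
        back before use).
    """
```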

Comment thread basenji/augmentation.py
return data_ops_list


def augment_deterministic(data_ops, augment_rc=False, augment_shift=0):

when would you use this function? It seems odd to call with augment_shift != 0, but only a single value.
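For context, the usual reason a deterministic variant exists is test-time ensembling: enumerate every (shift, orientation) pair exactly once, rather than sampling one at random per step. A pure-Python stand-in for the TensorFlow ops (the names `shift_seq`, `ensemble_variants`, etc. are illustrative, not the PR's API):

```python
# Sketch: enumerate (shift, reverse-complement) variants of a sequence
# for a deterministic test-time ensemble.

COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def reverse_complement(seq):
    # Complement each base, then reverse the sequence.
    return seq.translate(COMPLEMENT)[::-1]

def shift_seq(seq, shift, pad='N'):
    # Positive shift moves the sequence right, padding the left edge;
    # negative shift moves it left, padding the right edge.
    if shift > 0:
        return pad * shift + seq[:-shift]
    if shift < 0:
        return seq[-shift:] + pad * (-shift)
    return seq

def ensemble_variants(seq, shifts=(0,), rc=False):
    # One deterministic variant per (shift, orientation) pair.
    variants = []
    for s in shifts:
        variants.append(shift_seq(seq, s))
        if rc:
            variants.append(reverse_complement(shift_seq(seq, s)))
    return variants
```

Under that reading, calling the deterministic function with a single nonzero `augment_shift` only makes sense as one member of a larger ensemble loop, which may be the confusion the comment is pointing at.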

Comment thread basenji/augmentation.py
if augment_rc:
data_ops_aug = augment_deterministic_rc(data_ops_aug)
else:
data_ops_aug['reverse_preds'] = tf.zeros((), dtype=tf.bool)

what are the semantics of these as targets?
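If the intent matches the common pattern, `reverse_preds` is a per-example flag recording that the input was reverse-complemented, so downstream code knows to flip the predictions back along the length axis before comparing to labels or averaging ensemble members. A hedged numpy sketch (function names are mine, not the PR's):

```python
import numpy as np

def orient_predictions(preds, reverse_preds):
    # preds: (length, targets) predictions for one example.
    # If the input was reverse-complemented, the predictions come out
    # back-to-front along the length axis; flip them to forward-strand
    # orientation before use.
    return preds[::-1] if reverse_preds else preds

def ensemble_mean(member_preds, member_flags):
    # Flip any rc members back, then average over ensemble members.
    oriented = [orient_predictions(p, f)
                for p, f in zip(member_preds, member_flags)]
    return np.mean(oriented, axis=0)
```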

Comment thread basenji/seqnn.py
target_subset=None):
augment_rc=False, augment_shifts=[0],
ensemble_rc=False, ensemble_shifts=[0],
penultimate=False, target_subset=None):

what does penultimate mean here?

Comment thread basenji/seqnn.py
self.loss_train, self.loss_train_targets, self.targets_train = loss_returns

# optimizer
self.update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

shouldn't this be done after you make the optimizer?
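For reference, the conventional TF1 ordering only requires that `UPDATE_OPS` be collected after the whole model graph (including any batch-norm layers) is built, and that the train op be gated on them; the optimizer object itself can be constructed before or after the collection call. A sketch of the usual pattern, TF1.x graph-mode API, with `loss` assumed to exist from the model graph above:

```python
import tensorflow as tf  # TF1.x graph-mode API

# ... model graph built above, including batch-norm layers that
# register their moving-average updates in the UPDATE_OPS collection ...

# Collect the updates only after the full graph exists, so no
# batch-norm update ops are missed.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)

# Gate the train step on the update ops so moving averages are
# refreshed on every training step.
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)
```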
