
Memory requirement for training on the CoNLL-2012 corpus #7

@thomwolf

Description


Hi, I am trying to train your model on an AWS p2.xlarge instance (with a 12 GB K80 GPU) on the CoNLL-2012 corpus (2,802 documents in the training set). Training consumes all 64 GB of RAM in less than 30% of the first epoch and the process gets killed before finishing it.

I was wondering what type of machine you trained it on.
Is 64 GB of RAM too small for training on the CoNLL corpus?
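For context (not part of the original question): one common way to keep peak RAM bounded when a corpus does not fit in memory is to stream documents lazily instead of loading the whole training set up front. A minimal sketch in Python — the directory layout, file format, and `iter_documents` name are all hypothetical, not this repository's actual API:

```python
import os

def iter_documents(data_dir):
    """Yield one preprocessed document at a time.

    Hypothetical helper: assumes one document per file in `data_dir`.
    Streaming like this bounds peak RAM by the largest single document
    rather than by the full corpus.
    """
    for name in sorted(os.listdir(data_dir)):
        path = os.path.join(data_dir, name)
        with open(path, encoding="utf-8") as f:
            yield f.read()
```

A training loop would then consume `iter_documents(...)` one item per step instead of materializing a list of all 2,802 documents.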
