Is the accuracy reported in the paper the result with finetuning applied? #203

@jiwonchoi99

Description

I have the following questions.
I know you must be busy, but I would truly appreciate it if you could find the time to respond.

  1. Are the perplexity and accuracy results reported in Tables 2, 3, 4, 5, and 6 of the paper obtained after finetuning?
  2. If so, I would like to know whether it is end-to-end or layer-wise finetuning.
  3. Additionally, I am not sure how to implement the finetuning step after vector quantization in code, so I would greatly appreciate any hints or guidance on this.
  4. Finally, if you have any insight into the difference in effectiveness between LoRA finetuning and the end-to-end or layer-wise finetuning described in the paper, I would be grateful if you could share that as well.
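Regarding question 3, here is a minimal toy sketch of what I imagine layer-wise finetuning after vector quantization might look like. This is purely my own guess, not code from the paper or this repo: the hard codeword assignments stay fixed, and only the codebook entries are trained so that the quantized layer reproduces the full-precision layer's outputs on calibration data. All names, sizes, and the optimization setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 4, 8, 16              # vector dim, codebook size, weight vectors per layer
W = rng.normal(size=(n, d))     # full-precision weights, viewed as n vectors of dim d
C = rng.normal(size=(k, d))     # codebook (the trainable parameter after VQ)

# Vector quantization: assign each weight vector to its nearest codeword.
dists = ((W[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
assign = dists.argmin(axis=1)   # hard assignments, kept fixed during finetuning

# Layer-wise finetuning: minimize 0.5 * ||X W_q^T - X W^T||_F^2 over the
# codebook C, where X holds calibration inputs, so the quantized layer is
# trained to match the full-precision layer's outputs.
X = rng.normal(size=(32, d))
init_err = np.linalg.norm(X @ (C[assign] - W).T)

lr = 1e-3
for _ in range(500):
    Wq = C[assign]                      # reconstructed (quantized) weights
    err = X @ (Wq - W).T                # output mismatch, shape (batch, n)
    grad_Wq = err.T @ X                 # d(loss)/d(Wq), shape (n, d)
    grad_C = np.zeros_like(C)
    np.add.at(grad_C, assign, grad_Wq)  # scatter-add gradients into codewords
    C -= lr * grad_C

final_err = np.linalg.norm(X @ (C[assign] - W).T)
```

End-to-end finetuning would presumably replace the per-layer output-matching loss with the model's task loss backpropagated through all quantized layers at once; is that roughly the right picture?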
