The results without pre-training #2

@Gyann-z

Description

Thanks for your implementation. Have you tried training on TextVQA without the layout-aware pre-training? Can you reproduce the paper's results? E.g., LaTr-base achieves 44.06 with Rosetta-en OCR and 52.29 with Amazon-OCR.
