
Detailed evaluation setup for VerilogEval #22

@maekawatoshiki

Description


Hi.

According to your preprint, Nexus achieved 100% accuracy on the non-self-verifying VerilogEval V1 Human benchmark (roughly equivalent to VerilogEval V2).
I'd like to reproduce this result, but the only script I found in the repository is eda.py, which I believe is not the one used for the evaluation described in the paper.

Could you share the full evaluation setup for VerilogEval?

Metadata

Labels: question (Further information is requested)