lil-lab/post-train-for-efficient-communication
Post-training for Efficient Communication via Convention Formation

This is the official repo for our paper: Post-training for Efficient Communication via Convention Formation

Yilun Hua, Evan Wang, and Yoav Artzi

Prerequisites

This repo requires python>=3.11.

To run model training, install trl and peft via pip install -r post_train_efficiency/requirements.txt.

To run evaluation, in the same environment as training, run pip install -r post_train_efficiency/requirements.eval.txt.

For open-weight models, you will need a Hugging Face account and must accept each model's terms and conditions; see the respective Gemma and Llama pages on Hugging Face. For proprietary models, you will need API access; refer to the providers' websites and documentation.
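The setup steps above can be consolidated into one sketch. The two pip commands come from this README; the version check and the huggingface-cli login step are assumptions (login is only needed if you download gated checkpoints such as Gemma or Llama):

```shell
# Require python>=3.11 before installing anything (assumed check, per the
# prerequisite stated above).
python3 -c 'import sys; assert sys.version_info >= (3, 11), "python>=3.11 required"'

# Training dependencies (trl, peft), then evaluation dependencies, in the
# same environment.
pip install -r post_train_efficiency/requirements.txt
pip install -r post_train_efficiency/requirements.eval.txt

# Assumed step: authenticate so gated model weights (Gemma, Llama) can be
# downloaded after accepting their terms on Hugging Face.
huggingface-cli login
```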

Data Processing and Training

The post_train_efficiency/post-train folder provides the Python scripts and example commands for data processing and for running the two training stages.

Evaluation

See post_train_efficiency/refgame_eval for running the text-only reference game evaluation.

See post_train_efficiency/doc_grounded_eval for running the document-grounded evaluation.
