We provide the chain-of-thought texts generated by ChatGPT/GPT-4 via in-context learning (ICL) on the MATH training set (8 per problem), saved in the data folder.
We also provide the code for fine-tuning LLaMA on these datasets. The steps are as follows:
1. Prepare the llama-7b checkpoint and store it in the main directory.
2. Prepare the conda environment following requirements.txt, then activate it: `conda activate llm`
3. Fine-tune: `bash finetune.sh`
4. Infer: `bash infer.sh`
5. Evaluate: `bash eval.sh`
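Run end to end, the steps above amount to something like the following sketch. The script names (`finetune.sh`, `infer.sh`, `eval.sh`) and environment name (`llm`) come from this README; the checkpoint directory name and Python version are illustrative assumptions, not specified here.

```shell
# End-to-end pipeline sketch (assumes the repo's shell scripts and a
# local llama-7b checkpoint; paths and Python version are assumptions).

# 1. The llama-7b checkpoint is expected in the repository root.
test -d ./llama-7b || echo "place the llama-7b checkpoint in the main directory"

# 2. Create and activate the environment described in requirements.txt.
conda create -n llm python=3.10 -y   # env name "llm" taken from the README
conda activate llm
pip install -r requirements.txt

# 3. Fine-tune, run inference, then evaluate.
bash finetune.sh
bash infer.sh
bash eval.sh
```

Each stage is a separate script, so a failed inference or evaluation run can be retried without repeating the fine-tuning step.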
The following are related papers and projects we have collected:
