Outputs for Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
Raw responses from the models, clean answers, post-processed predictions, and evaluation results for each model and dataset used in the publication Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models.
LLaMA-2 70B Model checkpoints for Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
Phi 1.5 Model checkpoints for Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
LLaMA-2 13B-Chat Model checkpoints for Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
LLaMA-2 13B Model checkpoints for the paper Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
LLaMA-2 7B Model checkpoints for the paper Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
A dataset of 1500-word stories generated by gpt-4o-mini for 236 nationalities
We created a dataset of stories generated by OpenAI’s gpt-4o-mini by using a Python script to construct prompts that were sent to the OpenAI API. We used Statistics Norway’s list...
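A minimal sketch of the kind of generation script described above, using the OpenAI Python SDK. The input file name ("nationalities.txt") and the exact prompt wording are assumptions for illustration, not taken from the dataset itself.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_story(nationality: str) -> str:
    # Construct a prompt for one nationality and request a ~1500-word story.
    # The prompt text here is illustrative, not the one used for the dataset.
    prompt = f"Write a 1500-word story about a person of {nationality} nationality."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical input: one nationality per line.
with open("nationalities.txt", encoding="utf-8") as f:
    for nationality in (line.strip() for line in f if line.strip()):
        story = generate_story(nationality)
        # ... persist each story, e.g. one text file per nationality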