
Distractors

This repository contains the main files for the research entitled "LLM-Based Automatic Generation of Multiple-Choice Questions With Meaningful Distractors". Most of the code uses Langfuse as a tool to manage prompt versions and the entire pipeline of our experiment, which is why some code calls this framework; however, the code presented here is enough to replicate our results without using Langfuse.
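For context, a minimal sketch of how a versioned prompt can be fetched with the Langfuse Python SDK (the prompt name and template variables below are hypothetical examples, not names taken from this repository):

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST
# from the environment.
langfuse = Langfuse()

# Fetch the latest version of a managed prompt by name
# ("distractor-generation" is a hypothetical example name).
prompt = langfuse.get_prompt("distractor-generation")

# Fill in the template variables to obtain the final prompt text
# (variable names here are also hypothetical).
compiled = prompt.compile(
    question="What is the capital of Brazil?",
    correct_answer="Brasília",
)
print(compiled)
```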

To summarize, the important files are:

  • The Result folder contains the computed results for our evaluation test dataset (see the loading sketch after this list).
    • maritaca-sabia3-BF.csv
    • maritaca-sabia3-DT.csv
    • openai_gpt4o-mini-BF.csv
    • openai_gpt4o-mini-DT.csv
  • Notebooks
      1. Pipeline: creates the distractors for the evaluation dataset.
      2. Evaluators: computes the evaluation metrics (diversity and LLM-as-judge) for the generated distractors.
      3. Result_analysis: a brief code-based analysis of how we extract the metrics.
  • Prompts: a collection of the prompt files used in the research.
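As a starting point for inspecting the result files, here is a minimal sketch in Python. The directory and file names follow the list above but may differ slightly in the repository, no column layout is assumed, and the lexical-diversity helper is only an illustrative proxy, not necessarily the metric computed in the Evaluators notebook:

```python
from itertools import combinations
from pathlib import Path

import pandas as pd

# Paths follow the file names listed above; the directory name is assumed.
RESULT_FILES = [
    "Result/maritaca-sabia3-BF.csv",
    "Result/maritaca-sabia3-DT.csv",
    "Result/openai_gpt4o-mini-BF.csv",
    "Result/openai_gpt4o-mini-DT.csv",
]


def jaccard_distance(a: str, b: str) -> float:
    """1 - Jaccard similarity over lowercased token sets."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a and not set_b:
        return 0.0
    return 1.0 - len(set_a & set_b) / len(set_a | set_b)


def lexical_diversity(distractors: list[str]) -> float:
    """Mean pairwise Jaccard distance among a set of distractors
    (an illustrative proxy for distractor diversity)."""
    pairs = list(combinations(distractors, 2))
    if not pairs:
        return 0.0
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)


# Inspect the shape and columns of each result file that is present.
for path in RESULT_FILES:
    if Path(path).exists():
        df = pd.read_csv(path)
        print(path, df.shape, list(df.columns))

# Example: diversity of one hypothetical set of generated distractors.
print(lexical_diversity(["Rio de Janeiro", "São Paulo", "Salvador"]))
```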
