Software Artifact - Lexicalization Is All You Need: Examining the Impact of Lexical Knowledge in a Compositional QALD System
- DOI: http://doi.org/10.1007/978-3-031-77792-9_7
- Preprint: http://doi.org/10.48550/arXiv.2411.03906
- Zenodo: https://doi.org/10.5281/zenodo.12610054
- Docker: https://hub.docker.com/r/dvs23/neodudes
The results and other files generated during the experiments for the paper can be found in `results/`. For further details, please see the `README.md` there.
The easiest way to run the NeoDUDES question answering system is using Docker. Either pull the prebuilt image on your machine:

```bash
docker pull dvs23/neodudes:latest
```

or build the image yourself:

```bash
docker build . -t neodudes
```

After that, you can run the container as follows:

```bash
docker run -e DBPEDIA_SPOTLIGHT_ENDPOINT='http://172.17.0.1:2222/rest' -e DBPEDIA_ENDPOINT='http://172.17.0.1:8890/sparql' -it neodudes
```

The container expects two environment variables to be set:
- `DBPEDIA_SPOTLIGHT_ENDPOINT`: URL of a running DBpedia Spotlight (https://www.dbpedia-spotlight.org/) instance accessible from the Docker container
- `DBPEDIA_ENDPOINT`: URL of a SPARQL endpoint serving the necessary triples for the benchmark
You can start a DBpedia Spotlight instance for English locally in the background using the following command:

```bash
docker run -tid --restart unless-stopped --name dbpedia-spotlight.en --mount source=spotlight-model,target=/opt/spotlight -p 2222:80 dbpedia/dbpedia-spotlight spotlight.sh en
```

For the SPARQL endpoint, in the case of QALD-9, the triples used by our approach can be found here: https://ag-sc.techfak.uni-bielefeld.de/download/dudes/2016.ttl.zst. To serve them, you can use, e.g., Virtuoso, for which you can find a prepared data directory here: https://ag-sc.techfak.uni-bielefeld.de/download/dudes/virtuoso2016.tar.zst
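For orientation, a minimal sketch of serving the prepared data directory with Virtuoso and sanity-checking both endpoints might look as follows. The `openlink/virtuoso-opensource-7` image, the extracted directory name, and the test inputs are assumptions, not part of the artifact:

```bash
# Assumption: the tarball unpacks to a Virtuoso database directory that can
# be mounted into the stock openlink/virtuoso-opensource-7 image.
zstd -d virtuoso2016.tar.zst
tar -xf virtuoso2016.tar
docker run -d --name virtuoso-qald9 -p 8890:8890 \
    -e DBA_PASSWORD=dba \
    -v "$(pwd)/virtuoso2016:/database" \
    openlink/virtuoso-opensource-7

# Sanity checks: Spotlight should return JSON annotations, and the SPARQL
# endpoint should answer a trivial query.
curl -s 'http://localhost:2222/rest/annotate' \
    -H 'Accept: application/json' \
    --data-urlencode 'text=Who is the mayor of Berlin?'
curl -s 'http://localhost:8890/sparql' \
    --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 1'
```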
Please note that, in the case of local instances, you need to make sure they are reachable from the container, e.g. by finding out the corresponding IP addresses or by using some kind of docker-compose setup. URLs like 'http://localhost:8890/sparql' will likely not work (see https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach).
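On Linux, the gateway IP of Docker's default bridge network (the 172.17.0.1 used in the example above) can be looked up as follows; this assumes the default bridge setup rather than a custom network:

```bash
# Print the gateway IP of the default bridge network; services bound on the
# host are typically reachable from containers via this address.
docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'
```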
In case you want to set up the project locally, the `Dockerfile` and `requirements.txt` might give you a few hints about which packages you need to install.
Inside the container, there are a few prepared scripts which you can run:
- `make_docs.sh` to generate the Sphinx documentation of the project
- `qald-eval-test.sh` and `qald-eval-train.sh` for running the QALD-9 test or train benchmark, respectively
- `qald-rpc.sh` for starting the DUDES RPC server, which handles tagging input questions and scoring SPARQL queries with the provided models
- `src/llm/prompting.py` for experiments with GPT; for this, make sure to provide a valid OpenAI organization and API key in the `.env` file. An example environment file can be found in `sample.env` (see the sketch after this list).
- `src/llm/query_score_training.py` for training query scoring models; a Slurm job array is given with `query_score_train.sarray` and `query_score_train_best.sarray`, also illustrating the relevant command line options
- `qald-eval-newpipeline.py` for running the QALD-9 benchmark, using the files `src/lemon/resources/qald/QALD9_train-dataset-raw.csv` and `src/lemon/resources/qald/QALD9_test-dataset-raw.csv`
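The GPT experiments read OpenAI credentials from the `.env` file mentioned above. The variable names below are a hypothetical illustration only; consult `sample.env` for the actual names expected by the code:

```bash
# Hypothetical .env contents; see sample.env for the authoritative variable names.
OPENAI_ORGANIZATION=org-...
OPENAI_API_KEY=sk-...
```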
Running the benchmark can be done using `qald-eval-newpipeline.py`, which by default launches four processes running in parallel. To change the number of spawned processes, you need to change the `cpu_count` variable.
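For orientation, the parallelization roughly follows the pattern sketched below; everything except the `cpu_count` variable name is a placeholder, and the actual pipeline logic lives in `qald-eval-newpipeline.py`:

```python
# Hypothetical sketch of the parallel evaluation pattern used by
# qald-eval-newpipeline.py; function bodies and data are placeholders.
from multiprocessing import Pool

cpu_count = 4  # default number of parallel benchmark processes

def evaluate_question(question: str) -> str:
    # Placeholder for the actual pipeline: tag the question, compose DUDES,
    # then generate, score, and execute SPARQL queries.
    return f"processed: {question}"

if __name__ == "__main__":
    questions = ["Who is the mayor of Berlin?"]  # stand-in for QALD-9 data
    with Pool(processes=cpu_count) as pool:
        print(pool.map(evaluate_question, questions))
```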
Please consider citing our work if you find the provided resources useful:
```bibtex
@InProceedings{schmidt-etal-2025-neodudes,
author="Schmidt, David Maria
and Elahi, Mohammad Fazleh
and Cimiano, Philipp",
editor="Alam, Mehwish
and Rospocher, Marco
and van Erp, Marieke
and Hollink, Laura
and Gesese, Genet Asefa",
title="Lexicalization Is All You Need: Examining the Impact of Lexical Knowledge in a Compositional QALD System",
booktitle="Knowledge Engineering and Knowledge Management",
year="2025",
publisher="Springer Nature Switzerland",
address="Cham",
pages="102--122",
isbn="978-3-031-77792-9"
}
```