Golden Tests
Chris Day edited this page Feb 8, 2026
This document explains the current test coverage for uml2semantics-python
related to golden outputs, and outlines a realistic path to expand it.
The current repository contains a lightweight golden-style test in
`tests/test_golden.py` that:
- runs the CLI against the `examples/` TSV bundle
- parses the generated Turtle
- asserts the ontology IRI exists
- asserts the graph is non-empty
- asserts at least one `owl:Class` and one `owl:AnnotationProperty` exist
This confirms the end-to-end pipeline works and produces a valid RDF graph,
but it does not compare full outputs against a canonical `expected.ttl`.
The following golden-test features are not present in the codebase today:
- canonical Turtle normalisation
- deterministic blank-node ordering
- full file-to-file diffs between `expected.ttl` and actual output
- a suite of per-feature golden cases
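For the first two items, one stdlib-only approach is to serialise to N-Triples, rename blank nodes in order of first appearance, and sort the lines. The helper name below is invented, and this is only a sketch: the renaming depends on the serialiser's triple order, so for graphs where blank-node structure matters a real canonicalisation algorithm (e.g. `rdflib.compare.to_canonical_graph`) is the safer choice:

```python
import re

def normalise_ntriples(nt_text: str) -> str:
    """Rename blank nodes deterministically and sort triples.

    Sketch only: stable when the serialiser emits blank nodes in a
    stable order; not a true graph canonicalisation.
    """
    mapping = {}

    def rename(match):
        label = match.group(0)
        if label not in mapping:
            mapping[label] = f"_:b{len(mapping)}"
        return mapping[label]

    lines = []
    for line in nt_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            lines.append(re.sub(r"_:[A-Za-z0-9]+", rename, line))
    return "\n".join(sorted(lines)) + "\n"
```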
If you want true golden regression testing, a practical path is:
- Create a `tests/golden/` directory with per-case TSV bundles.
- Generate `expected.ttl` once per case.
- Add a normalisation step (or accept stable `rdflib` output with sorted triples).
- Diff actual output against `expected.ttl` in the test.
This would allow detection of changes in:
- choice semantics
- datatype facets
- enumeration individuals
- annotations
- prefix handling