chore: mark integration tests #248
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

The changes introduce infrastructure for marking and running integration tests separately. A new Makefile target executes integration-only tests, the pyproject.toml defines and enforces the integration marker, and the error integration tests are decorated with the integration marker.

Changes
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Codecov Report

✅ All modified and coverable lines are covered by tests.
❌ Your project status has failed because the head coverage (71.17%) is below the target coverage (80.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main     #248   +/-   ##
=======================================
  Coverage   71.17%   71.17%
=======================================
  Files         137      137
  Lines       11100    11100
=======================================
  Hits         7900     7900
  Misses       3200     3200
```

☔ View full report in Codecov by Sentry.
Pull request overview
This PR introduces pytest markers to distinguish integration tests from unit tests, enabling selective test execution. Integration tests now use the @pytest.mark.integration decorator, which is registered in pytest configuration with the --strict-markers flag to ensure only valid markers are used. A new test-integration Make target is added to run integration tests separately.
- Adds the `@pytest.mark.integration` decorator to integration test classes (see the sketch after this list)
- Configures the pytest marker in pyproject.toml with `--strict-markers` enforcement
- Adds a `test-integration` Make target for running integration tests selectively
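To make the mechanism concrete, a minimal sketch of the pyproject.toml side might look like the following; the marker description text is illustrative, and the PR's actual values may differ:

```toml
[tool.pytest.ini_options]
# Fail collection on any marker that is not registered below.
addopts = "--strict-markers"
markers = [
    # Description string is illustrative, not the repository's literal text.
    "integration: tests that exercise a live OpenFGA server",
]
```

With `--strict-markers` in place, a typo such as `@pytest.mark.integraton` fails at collection time instead of silently creating a new, unselectable marker.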
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| test/error_integration_test.py | Adds @pytest.mark.integration decorator to both async and sync integration test classes |
| pyproject.toml | Registers the integration marker and enables --strict-markers to enforce valid markers |
| Makefile | Adds new test-integration target to run only integration tests with coverage |
Actionable comments posted: 0
🧹 Nitpick comments (2)
pyproject.toml (1)
134-141: Pytest marker config looks good; consider centralizing coverage options in one place

The `--strict-markers` flag and `integration` marker definition are aligned with how you're tagging tests and will keep marker usage tidy across the suite.

You now also have coverage configuration in both `addopts` and the Makefile targets (they each pass `--cov`/`--cov-report`), which is redundant though not harmful. To avoid duplicated options and keep coverage behavior defined in a single place, consider relying on `addopts` for coverage and simplifying the Makefile invocations to just `pytest` (plus `-m`/path filters as needed).
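As a sketch of that suggestion (assuming pytest-cov provides the `--cov` options, as the Makefile flags imply), coverage could be defined once in pyproject.toml:

```toml
[tool.pytest.ini_options]
# Coverage flags defined once here, so Makefile targets can call plain `pytest`.
# Values mirror the flags currently passed by the Makefile targets.
addopts = "--strict-markers --cov=openfga_sdk --cov-report=term-missing"
```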
Makefile (1)

13-15: Ensure `test-integration` actually covers all integration tests

The `-m integration` selector is correct, but the default path is `test/`, while pyproject.toml's `testpaths` includes both `"test"` and `"integration"`. If you add integration tests under the `integration/` directory, they won't run via this target unless `TEST` is explicitly set.

You might want to either:

- Let pytest's `testpaths` drive collection:

```makefile
test-integration:
	uv run pytest -m integration
```

or

- Explicitly include both directories:

```makefile
test-integration:
	uv run pytest -m integration --cov-report term-missing --cov=openfga_sdk $(if $(TEST),$(TEST),test integration)
```

Optionally, you can also add `test-integration` to the `.PHONY` list and consider relying on `addopts` for coverage options to avoid duplication, similar to the main `test` target.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- Makefile (1 hunks)
- pyproject.toml (1 hunks)
- test/error_integration_test.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (2)
test/error_integration_test.py (2)
56-58: Async integration tests correctly tagged

Tagging `TestErrorIntegration` with the `integration` marker, on top of the existing async marker and module-level skip, cleanly integrates this suite with the new `-m integration` flow while keeping the existing environment gate.
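For context, the overall pattern being described might look roughly like this; the environment variable, skip reason, and test body are assumptions for illustration, not the repository's actual code:

```python
import os

import pytest

# Module-level gate: skip the whole file unless an integration environment
# is configured (the variable name here is hypothetical).
pytestmark = pytest.mark.skipif(
    os.getenv("OPENFGA_API_URL") is None,
    reason="integration environment not configured",
)


@pytest.mark.integration
@pytest.mark.asyncio
class TestErrorIntegration:
    async def test_example(self) -> None:
        # Placeholder body; the real tests exercise error responses end to end.
        ...
```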
427-428: Sync integration tests consistently tagged as well

Adding the `integration` marker to `TestErrorIntegrationSync` keeps the async and sync suites aligned for selection with `-m integration`, matching the marker definition in pyproject.toml.
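With both classes tagged, selecting either slice from the command line uses standard pytest marker expressions, for example:

```sh
# Run only the integration-marked tests.
uv run pytest -m integration

# Run everything except the integration tests.
uv run pytest -m "not integration"
```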
Description
What problem is being solved?
How is it being solved?
What changes are made to solve it?
References
Review Checklist
Summary by CodeRabbit
Release Notes