test: remove redundant tests, add additional #117
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This PR addresses test audit concerns by refactoring unit tests from heavy class-based mocks and internal subclasses to lightweight runCommand-based integration testing with global mocks. Additionally, a duration flag is introduced to the bench subscriber command to enable timeout-based subscription limiting.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring close attention:
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Force-pushed from 613aac7 to 806e5b3
Actionable comments posted: 5
🧹 Nitpick comments (24)
test/unit/commands/support.test.ts (1)
44-61: Consider using a test helper for environment variable management.

The manual save/restore pattern for `process.env.ABLY_CLI_NON_INTERACTIVE` is correct but could be simplified using Vitest's `vi.stubEnv()` or a dedicated test helper to reduce boilerplate and improve maintainability across test files. Example using Vitest's built-in environment mocking:
```diff
-    const originalEnv = process.env.ABLY_CLI_NON_INTERACTIVE;
-    process.env.ABLY_CLI_NON_INTERACTIVE = "true";
-
-    try {
-      const { stdout, stderr } = await runCommand(
-        ["support", "unknowncommand"],
-        import.meta.url,
-      );
-
-      // Should show error or warning about command not found
-      const output = stdout + (stderr || "");
-      expect(output).toMatch(/not found|not.*command|available/i);
-    } finally {
-      if (originalEnv === undefined) {
-        delete process.env.ABLY_CLI_NON_INTERACTIVE;
-      } else {
-        process.env.ABLY_CLI_NON_INTERACTIVE = originalEnv;
-      }
-    }
+    vi.stubEnv("ABLY_CLI_NON_INTERACTIVE", "true");
+
+    const { stdout, stderr } = await runCommand(
+      ["support", "unknowncommand"],
+      import.meta.url,
+    );
+
+    // Should show error or warning about command not found
+    const output = stdout + (stderr || "");
+    expect(output).toMatch(/not found|not.*command|available/i);
+
+    vi.unstubAllEnvs();
```

test/unit/commands/accounts/logout.test.ts (4)
11-19: Consider more robust temp directory naming to avoid collisions.

Using `Date.now()` alone for directory naming could lead to collisions if tests run in parallel at the same millisecond. Consider using `crypto.randomBytes()` or a combination of timestamp and random string for guaranteed uniqueness.

```diff
+import { randomBytes } from "node:crypto";
+
 describe("accounts:logout command", () => {
   let testConfigDir: string;
   let originalConfigDir: string;

   beforeEach(() => {
     // Create a temporary config directory for testing
-    testConfigDir = resolve(tmpdir(), `ably-cli-test-logout-${Date.now()}`);
+    testConfigDir = resolve(tmpdir(), `ably-cli-test-logout-${Date.now()}-${randomBytes(4).toString("hex")}`);
     mkdirSync(testConfigDir, { recursive: true, mode: 0o700 });
```
65-75: Verify exitCode for error scenarios.

Error cases should also verify that the command exits with a non-zero exit code, not just JSON error output. The `runCommand` helper returns `exitCode`, which should be checked.

```diff
 it("should output error in JSON format when no account is selected", async () => {
-  const { stdout } = await runCommand(
+  const { stdout, exitCode } = await runCommand(
     ["accounts:logout", "--json"],
     import.meta.url,
   );

   const result = JSON.parse(stdout);
   expect(result).toHaveProperty("success", false);
   expect(result).toHaveProperty("error");
   expect(result.error).toContain("No account");
+  expect(exitCode).not.toBe(0);
 });
```
93-115: Consider verifying actual config file changes and exit codes.

While JSON output validation is useful, these tests would be more thorough if they also:

- Verified the command exits with code 0 for success
- Checked that the account was actually removed from the config file (side-effect verification)

This would ensure the command is working end-to-end, not just producing correct JSON output. Example enhancement for one test:

```diff
+import { readFileSync } from "node:fs";
+
 it("should successfully logout with --force and --json flags", async () => {
-  const { stdout } = await runCommand(
+  const { stdout, exitCode } = await runCommand(
     ["accounts:logout", "--force", "--json"],
     import.meta.url,
   );

   const result = JSON.parse(stdout);
   expect(result).toHaveProperty("success", true);
   expect(result).toHaveProperty("account");
   expect(result.account).toHaveProperty("alias", "testaccount");
   expect(result).toHaveProperty("remainingAccounts");
+  expect(exitCode).toBe(0);
+
+  // Verify account was actually removed from config
+  const configContent = readFileSync(resolve(testConfigDir, "config"), "utf-8");
+  expect(configContent).not.toContain("[accounts.testaccount]");
 });
```
179-189: Verify exitCode for error scenario.

Similar to the earlier error test, this should also verify that the command exits with a non-zero exit code.

```diff
 it("should output error in JSON format when account alias does not exist", async () => {
-  const { stdout } = await runCommand(
+  const { stdout, exitCode } = await runCommand(
     ["accounts:logout", "nonexistent", "--json"],
     import.meta.url,
   );

   const result = JSON.parse(stdout);
   expect(result).toHaveProperty("success", false);
   expect(result).toHaveProperty("error");
   expect(result.error).toContain("not found");
+  expect(exitCode).not.toBe(0);
 });
```

test/unit/commands/status.test.ts (1)
74-88: Consider adding an assertion for browser-opening behavior.

The test comment on line 85 mentions browser opening is simulated, but the only assertion is that the status URL appears in stdout. Consider verifying that the `--open` flag triggers the expected browser-opening code path, or document why this level of coverage is sufficient.

test/unit/commands/channels/presence/subscribe.test.ts (1)
112-143: Timing-based test coordination may be fragile.

The 50ms `setTimeout` on line 125 for waiting on subscription setup is a timing-based approach that could cause flaky tests under load. Consider using a more deterministic approach such as polling for `presenceCallback` to be defined, or restructuring the mock to immediately invoke the callback.

```diff
-    // Wait for subscription setup
-    await new Promise((resolve) => setTimeout(resolve, 50));
+    // Wait for subscription setup with polling
+    await vi.waitFor(() => expect(presenceCallback).not.toBeNull(), { timeout: 500 });
```

test/unit/commands/channels/presence/enter.test.ts (1)
192-227: Same timing concern as subscribe tests.

Line 199 uses a 50ms `setTimeout` for async coordination, which has the same potential for flakiness as noted in the subscribe tests.

test/unit/commands/channels/list.test.ts (2)
58-60: Consider adding `vi.restoreAllMocks()` in afterEach for consistency.

Other test files in this PR (e.g., `benchmarking.test.ts`) call `vi.restoreAllMocks()` in `afterEach`. Adding it here ensures mock state doesn't leak between tests.

```diff
 afterEach(() => {
   delete globalThis.__TEST_MOCKS__;
+  vi.restoreAllMocks();
 });
```
223-241: Flags section duplicates help tests.

These tests only verify that flags appear in help output, which overlaps with the help tests. The actual flag behavior is already tested in the "channel listing" section (lines 140-161). Consider removing this section to reduce redundancy, or consolidating it with the help tests.
test/unit/commands/auth/revoke-token.test.ts (2)
26-60: Consider cleaning up the temporary config directory in afterEach.

The test creates a temporary directory at `testConfigDir` but doesn't clean it up. While the OS will eventually clean `tmpdir()`, explicitly removing it prevents accumulation during repeated test runs.

```diff
+import { rmSync } from "node:fs";

 // ...

 afterEach(() => {
   nock.cleanAll();
   delete globalThis.__TEST_MOCKS__;
   process.env.ABLY_CLI_CONFIG_DIR = originalConfigDir;
+  try {
+    rmSync(testConfigDir, { recursive: true, force: true });
+  } catch {
+    // Ignore cleanup errors
+  }
 });
```
300-329: Flags section duplicates help tests.

Similar to the channels:list tests, these only verify that flags appear in help output, which is already covered in the help section. Consider consolidating.
test/unit/commands/bench/benchmarking.test.ts (1)
265-275: Duration flag tested only in help output.

While the help test verifies the `--duration` flag is documented, there's no functional test that actually exercises the duration-limited subscription behavior. Consider adding a test that passes `--duration` and verifies the command terminates after the specified time. Example test:

```typescript
it("should terminate after specified duration", async () => {
  const startTime = Date.now();

  const { error } = await runCommand(
    [
      "bench:subscriber",
      "test-channel",
      "--api-key",
      "app.key:secret",
      "--duration",
      "1",
      "--json",
    ],
    import.meta.url,
  );

  const elapsed = Date.now() - startTime;
  expect(error).toBeUndefined();
  // Should complete around 1 second (with some tolerance)
  expect(elapsed).toBeGreaterThanOrEqual(900);
  expect(elapsed).toBeLessThan(3000);
});
```

test/unit/commands/channels/history.test.ts (1)
87-216: Well-structured history retrieval tests.

The test coverage is comprehensive, including successful retrieval, message details, empty history, JSON output, various flags (--limit, --direction, --start, --end), and error handling. The approach of verifying mock calls with expected parameters ensures flags are properly passed through. Consider adding a test for JSON output with empty history to ensure the JSON structure is correct even when no messages are found:

```typescript
it("should output empty JSON array when no messages found", async () => {
  mockHistory.mockResolvedValue({ items: [] });

  const { stdout } = await runCommand(
    [
      "channels:history",
      "test-channel",
      "--api-key",
      "app.key:secret",
      "--json",
    ],
    import.meta.url,
  );

  const result = JSON.parse(stdout);
  expect(result).toHaveProperty("messages");
  expect(result.messages).toHaveLength(0);
});
```

test/unit/commands/spaces/spaces.test.ts (6)
173-180: Remove redundant help flag verification.

This test only verifies that `--profile` appears in the help text, which doesn't validate any actual command behavior. The functional test at lines 153-171 already confirms that the `--profile` flag works correctly by testing the actual command execution with profile data. Given the PR's objective to remove tests that don't test useful behavior, this test should be removed.

```diff
-  it("should accept --profile flag", async () => {
-    const { stdout } = await runCommand(
-      ["spaces:members:enter", "--help"],
-      import.meta.url,
-    );
-
-    expect(stdout).toContain("--profile");
-  });
```
183-194: Consider adding a functional test for spaces members subscribe.

The `mockMembersSubscribe` mock is initialized in `beforeEach` but never verified in any test. This describe block only contains a help text test and lacks coverage of the actual subscribe functionality. Do you want me to generate a functional test that verifies the subscribe command actually calls the mock?
208-215: Remove redundant help flag verification and consider a functional test.

This test only verifies that `--location` appears in the help text without testing any actual command behavior. Similar to the `--profile` test, this should be removed per the PR's objective. Additionally, `mockLocationsSet` is initialized but never verified in any test, indicating missing functional coverage.

```diff
-  it("should accept --location flag", async () => {
-    const { stdout } = await runCommand(
-      ["spaces:locations:set", "--help"],
-      import.meta.url,
-    );
-
-    expect(stdout).toContain("--location");
-  });
```
231-238: Remove redundant help flag verification and consider a functional test.

This test only checks help text for the `--data` flag without validating command behavior. Consistent with the PR's objective to remove non-functional tests, this should be removed. The `mockLocksAcquire` mock is also unused, suggesting missing functional test coverage.

```diff
-  it("should accept --data flag", async () => {
-    const { stdout } = await runCommand(
-      ["spaces:locks:acquire", "--help"],
-      import.meta.url,
-    );
-
-    expect(stdout).toContain("--data");
-  });
```
253-261: Remove redundant help flag verification and consider a functional test.

This test only verifies that position flags appear in help text using regex matching, without testing actual cursor set behavior. This aligns with tests that should be removed per the PR objective. The `mockCursorsSet` mock is initialized but never used in assertions, indicating missing functional coverage.

```diff
-  it("should accept position flags", async () => {
-    const { stdout } = await runCommand(
-      ["spaces:cursors:set", "--help"],
-      import.meta.url,
-    );
-
-    expect(stdout).toMatch(/--x|--data/);
-    expect(stdout).toMatch(/--y|--data/);
-  });
```
13-34: Consider removing or utilizing unused mocks.

The following mocks are initialized in `beforeEach` but never verified in any test assertions:

- `mockMembersUnsubscribe`
- `mockSpaceLeave`
- `mockLocationsGetAll`
- `mockLocksGetAll`
- `mockCursorsGetAll`

This creates unnecessary test setup overhead. Either add functional tests that verify these mocks are called, or remove them from the setup to simplify the test harness.

test/unit/commands/channels/batch-publish.test.ts (4)
test/unit/commands/channels/batch-publish.test.ts (4)
1-33: Global Ably mock wiring looks solid; double-check `runCommand` load options and centralize `__TEST_MOCKS__` typing.

- The `beforeEach`/`afterEach` pattern with `globalThis.__TEST_MOCKS__` cleanly injects a mocked Ably REST client and removes it after each test, which aligns well with the PR goal of exercising real command behavior via a shared mock instead of per-test subclasses. 👍
- Two small follow-ups to consider:
  - The `runCommand` function from `@oclif/test` has the signature `runCommand(args: string | string[], loadOpts?: Interfaces.LoadOptions, captureOpts?: CaptureOptions)`, where `loadOpts` is an options object, not a plain string. (jsdocs.io) Here you pass `import.meta.url` directly as the second argument in all call sites. It would be safer (and more type-correct) either to:
    - Drop the second argument entirely if you don't need custom load options, e.g. `await runCommand(["channels:batch-publish", "--help"])`, or
    - Wrap it in the appropriate options object if you do, e.g. `await runCommand([...], { dir: import.meta.url })` (or whatever `Interfaces.LoadOptions` expects in this codebase).
  - The `declare global` for `__TEST_MOCKS__` is inline in this file. If other tests also augment `globalThis.__TEST_MOCKS__`, consider moving the declaration into a single shared `.d.ts` (or helper) so the shape is defined once and can evolve without divergence.
35-55: Help/flags coverage is good; minor duplication you could consolidate later.

The help and flags suites do a nice job asserting real CLI help text and that all key flags are exposed via `--help`, which is exactly the kind of behavior-level testing FTF-156 is aiming for. If you ever want to trim a bit of duplication, you could capture `stdout` from a single `--help` invocation in each `describe` and assert multiple expectations against it (rather than re-running `runCommand` for each flag name), but that's purely an optimization; what you have is already clear and correct.

Also applies to: 341-386
94-339: End-to-end batch-publish behavior, JSON mode, and error paths are very well covered.

This `describe("batch publish functionality")` block does a thorough job of exercising the command as a black box:

- Verifies the Ably REST call shape via `mockRequest` (method, path, version argument, and body fields for channels, data, name, and encoding).
- Confirms both `--channels` and `--channels-json` inputs, plus the `--spec` path, all drive the same underlying request structure.
- Tests both human-readable output and `--json` mode, including success and error shapes, by parsing the JSON that follows the initial "Sending batch publish request" log line.
- Covers both full failures (rejected request) and partial success responses, ensuring the user-facing messaging ("partially successful", JSON `success: false`, and error text) is correct.

The JSON extraction via `findIndex((line) => line.trim() === "{")` is a bit opinionated about pretty-printing, but that's acceptable given the current output format; if the formatter ever changes, you can always relax this to look for the first line starting with `{` instead.
31-33: Optional: consider centralizing `vi` cleanup in a shared test setup.

In this file the only persistent thing torn down in `afterEach` is `globalThis.__TEST_MOCKS__`, and the `vi` mocks are local and recreated per test, so nothing is strictly wrong. If, however, your broader suite uses `vi` to stub globals or shared modules, you might want to add a `vi.restoreAllMocks()`/`vi.clearAllMocks()` in a shared Vitest setup file (rather than here) to guarantee cross-test isolation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Jira integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (15)
- src/commands/bench/subscriber.ts (3 hunks)
- test/unit/commands/accounts/login.test.ts (1 hunks)
- test/unit/commands/accounts/logout.test.ts (1 hunks)
- test/unit/commands/auth/revoke-token.test.ts (1 hunks)
- test/unit/commands/bench/benchmarking.test.ts (1 hunks)
- test/unit/commands/channels/batch-publish.test.ts (1 hunks)
- test/unit/commands/channels/history.test.ts (1 hunks)
- test/unit/commands/channels/list.test.ts (2 hunks)
- test/unit/commands/channels/presence/enter.test.ts (1 hunks)
- test/unit/commands/channels/presence/subscribe.test.ts (1 hunks)
- test/unit/commands/channels/subscribe.test.ts (1 hunks)
- test/unit/commands/help.test.ts (1 hunks)
- test/unit/commands/spaces/spaces.test.ts (1 hunks)
- test/unit/commands/status.test.ts (1 hunks)
- test/unit/commands/support.test.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.test.ts
📄 CodeRabbit inference engine (.cursor/rules/AI-Assistance.mdc)
**/*.test.ts: When testing, mock the Ably SDK properly by stubbing the constructor (e.g., sinon.stub(Ably, 'Rest').returns(mockClient)).
Always ensure resources are properly closed/cleaned up in tests (e.g., call await client.close() and sinon.restore() in afterEach).
Files:
- test/unit/commands/help.test.ts
- test/unit/commands/status.test.ts
- test/unit/commands/channels/list.test.ts
- test/unit/commands/channels/presence/subscribe.test.ts
- test/unit/commands/channels/history.test.ts
- test/unit/commands/support.test.ts
- test/unit/commands/spaces/spaces.test.ts
- test/unit/commands/channels/batch-publish.test.ts
- test/unit/commands/channels/presence/enter.test.ts
- test/unit/commands/accounts/logout.test.ts
- test/unit/commands/auth/revoke-token.test.ts
- test/unit/commands/accounts/login.test.ts
- test/unit/commands/bench/benchmarking.test.ts
- test/unit/commands/channels/subscribe.test.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/Development.mdc)
**/*.{ts,tsx}: Use TypeScript and follow best practice naming conventions
ESLint is used to ensure all code adheres best practices; ensure you check that all code passes the linter before concluding your work.
Files:
- test/unit/commands/help.test.ts
- src/commands/bench/subscriber.ts
- test/unit/commands/status.test.ts
- test/unit/commands/channels/list.test.ts
- test/unit/commands/channels/presence/subscribe.test.ts
- test/unit/commands/channels/history.test.ts
- test/unit/commands/support.test.ts
- test/unit/commands/spaces/spaces.test.ts
- test/unit/commands/channels/batch-publish.test.ts
- test/unit/commands/channels/presence/enter.test.ts
- test/unit/commands/accounts/logout.test.ts
- test/unit/commands/auth/revoke-token.test.ts
- test/unit/commands/accounts/login.test.ts
- test/unit/commands/bench/benchmarking.test.ts
- test/unit/commands/channels/subscribe.test.ts
{test/**/*.{ts,js},**/*.{test,spec}.{ts,js}}
📄 CodeRabbit inference engine (.cursor/rules/Development.mdc)
When new features are added or changes made, tests must be updated or added, and it is your responsibility to ensure the tests pass before deeming your work complete.
Files:
- test/unit/commands/help.test.ts
- test/unit/commands/status.test.ts
- test/unit/commands/channels/list.test.ts
- test/unit/commands/channels/presence/subscribe.test.ts
- test/unit/commands/channels/history.test.ts
- test/unit/commands/support.test.ts
- test/unit/commands/spaces/spaces.test.ts
- test/unit/commands/channels/batch-publish.test.ts
- test/unit/commands/channels/presence/enter.test.ts
- test/unit/commands/accounts/logout.test.ts
- test/unit/commands/auth/revoke-token.test.ts
- test/unit/commands/accounts/login.test.ts
- test/unit/commands/bench/benchmarking.test.ts
- test/unit/commands/channels/subscribe.test.ts
src/commands/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/AI-Assistance.mdc)
src/commands/**/*.ts: Commands should follow the oclif structure: export default class MyCommand extends Command { ... }
Command should support auth via the standard auth helper (import { getAuth } from '../../helpers/auth') and use it in the run method.
Use try/catch for error handling in commands, and provide user-friendly error messages (e.g., handle error.code === 40100 for authentication failures).
Do not use direct HTTP requests (e.g., fetch) for data plane APIs; use the Ably SDK instead.
Do not use hardcoded endpoint URLs; use configuration values (e.g., flags.controlHost, config.controlHost) when constructing URLs.
Do not use non-standard command structures (e.g., export default class { async execute(args) { ... } }); always use the oclif Command class.
Files:
src/commands/bench/subscriber.ts
src/commands/**/*.{ts,js}
📄 CodeRabbit inference engine (.cursor/rules/Development.mdc)
Follow oclif framework best practices as described in the oclif documentation.
Files:
src/commands/bench/subscriber.ts
🧬 Code graph analysis (13)
test/unit/commands/help.test.ts (3)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
- src/interactive-base-command.ts (1): `error` (23-52)
src/commands/bench/subscriber.ts (1)
- src/utils/long-running.ts (1): `waitUntilInterruptedOrTimeout` (13-79)
test/unit/commands/status.test.ts (2)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/channels/list.test.ts (3)
- test/root-hooks.ts (2): `beforeEach` (8-11), `afterEach` (13-61)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/channels/history.test.ts (3)
- test/root-hooks.ts (2): `beforeEach` (8-11), `afterEach` (13-61)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/support.test.ts (2)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (2): `stdout` (125-127), `stderr` (129-131)
test/unit/commands/spaces/spaces.test.ts (3)
- test/root-hooks.ts (2): `beforeEach` (8-11), `afterEach` (13-61)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/channels/batch-publish.test.ts (3)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
- test/unit/commands/channels/publish.test.ts (1): `error` (54-61)
test/unit/commands/accounts/logout.test.ts (3)
- test/root-hooks.ts (2): `beforeEach` (8-11), `afterEach` (13-61)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/auth/revoke-token.test.ts (3)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (2): `stdout` (125-127), `stderr` (129-131)
- src/interactive-base-command.ts (1): `error` (23-52)
test/unit/commands/accounts/login.test.ts (3)
- test/root-hooks.ts (2): `beforeEach` (8-11), `afterEach` (13-61)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/bench/benchmarking.test.ts (2)
- test/helpers/command-helpers.ts (1): `runCommand` (67-83)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
test/unit/commands/channels/subscribe.test.ts (3)
- test/helpers/cli-runner.ts (1): `stdout` (125-127)
- test/unit/commands/channels/publish.test.ts (1): `error` (54-61)
- test/unit/commands/channels/occupancy/get.test.ts (1): `error` (55-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: ui-tests
- GitHub Check: session-tests
- GitHub Check: auth-tests
- GitHub Check: Cursor Bugbot
- GitHub Check: test
- GitHub Check: e2e-cli
🔇 Additional comments (38)
test/unit/commands/support.test.ts (3)
4-20: LGTM! Topic listing tests exercise real CLI behavior.

These tests appropriately invoke the actual CLI command and validate output, which aligns with the PR's objective to replace mock-heavy tests with real behavioral tests.
22-88: LGTM! Help and routing tests provide good coverage.

The help flag tests (both `--help` and `-h`) and subcommand routing tests effectively validate CLI behavior by invoking actual commands and checking output. The regex patterns used for matching help content are appropriately flexible for integration testing.
2-2: No action needed on this import.

The `runCommand` import from `@oclif/test` is consistent across all unit and integration tests in the codebase. The custom helper in `test/helpers/command-helpers.ts` is used exclusively by e2e tests, which have different requirements. The import in this file is correct for its test category.

test/unit/commands/accounts/logout.test.ts (2)
35-55: LGTM!

Help tests appropriately verify that documentation is present and includes expected flags and examples.
139-161: LGTM!

The tests properly verify multi-account logout scenarios, ensuring the correct account is removed and remaining accounts are preserved in the output.
test/unit/commands/status.test.ts (3)
1-12: Well-structured test setup with proper HTTP mock cleanup.

The test file correctly uses vitest, @oclif/test's `runCommand`, and nock for HTTP mocking. The `beforeEach`/`afterEach` hooks properly clean up nock interceptors to prevent test pollution.
14-26: Good test coverage for the operational status scenario.

The test verifies real command behavior by mocking the HTTP endpoint and asserting on actual stdout output. This aligns well with the PR objective of testing actual command logic rather than just stubs.
42-72: Comprehensive error handling tests.

Good coverage of error scenarios including invalid response payloads, network errors, and HTTP 500 errors. The assertions verify that errors are properly propagated from the command.
test/unit/commands/help.test.ts (2)
1-19: Clean test structure for the help command.

The test file appropriately uses `runCommand` from @oclif/test without unnecessary mocking, since help commands don't require network calls. The regex-based version check on line 17 is flexible enough to handle different output formats.
60-79: Good error handling tests for unknown commands.

The tests properly verify that the CLI returns meaningful errors when users request help for non-existent commands. This covers an important edge case.
test/unit/commands/channels/presence/subscribe.test.ts (3)
4-9: Global type declaration for test mocks.

The `declare global` block provides type safety for the `__TEST_MOCKS__` pattern used across presence tests. This is a reasonable approach for sharing mock state between the test and the code under test.
16-55: Comprehensive mock setup for the Ably Realtime client.

The mock structure covers the essential Ably Realtime API surface: channels, presence, connection, and auth. The `presenceCallback` capture pattern allows tests to simulate incoming presence events.
57-59: Proper cleanup of global test mocks.

The `afterEach` hook correctly cleans up the global `__TEST_MOCKS__` to prevent test pollution, which aligns with the coding guidelines for resource cleanup.

test/unit/commands/channels/presence/enter.test.ts (3)
17-62: Well-structured mock setup for the presence enter command.

The mock configuration properly covers the presence API surface including `enter`, `get`, `leave`, and `subscribe` methods. The mock responses (e.g., `mockPresenceGet` returning presence members) enable meaningful assertions about command behavior.
119-138: Good test for data payload parsing.

This test properly verifies that the `--data` flag's JSON string is parsed and passed to `presence.enter` as an object. The assertion on lines 134-137 confirms the actual behavior rather than just that a stub was called.
174-190: Good error handling test for invalid JSON.

The test verifies that the command properly reports an error when invalid JSON is provided via the `--data` flag. The regex assertion on line 189 is appropriately flexible.

test/unit/commands/accounts/login.test.ts (4)
14-30: Excellent test isolation with a temporary config directory.

The setup creates an isolated config directory per test with secure permissions (0o700), preventing tests from affecting each other or the developer's actual config. The minimal config file on lines 26-29 provides just enough state for the command to run.
32-46: Thorough cleanup in afterEach.

The cleanup properly:
- Cleans nock interceptors
- Restores the original `ABLY_CLI_CONFIG_DIR` (handling both set and unset cases)
- Removes the temporary directory

This prevents test pollution and filesystem leaks.
126-157: Good test for auto-selected app behavior.

The test verifies an important UX feature where a single app is auto-selected. Note the `.twice()` on line 133, which correctly accounts for the `/me` endpoint being called twice during the login flow.
188-236: Comprehensive error handling tests.

Good coverage of error scenarios including authentication failure (401), network errors, and server errors (500). The tests verify that errors are properly captured and formatted in JSON output.
test/unit/commands/channels/list.test.ts (3)
62-82: LGTM!

Help tests properly verify that CLI interface documentation is accessible and contains expected content.
84-161: LGTM!

Channel listing tests provide good coverage of success paths, empty responses, API errors, and flag handling. The mock verification approach properly validates the command's interaction with the Ably REST client.
164-221: LGTM!

JSON output tests thoroughly validate the response structure and error handling in JSON mode. Good coverage of the `--json` flag behavior.

test/unit/commands/auth/revoke-token.test.ts (4)
68-99: LGTM!

Help tests properly verify the command interface documentation, including required arguments and available flags.
101-112: LGTM!

The argument validation test correctly verifies the required token argument.
114-251: LGTM!

Token revocation tests provide excellent coverage using nock for HTTP mocking. The tests cover success cases, fallback behavior, JSON output, and various error scenarios including the special `token_not_found` handling.
276-297: LGTM!

Good security test verifying that the API key secret is masked in debug output. This prevents accidental credential exposure.
src/commands/bench/subscriber.ts (2)
34-38: LGTM!

The new `--duration` flag is well-implemented with a clear description and an appropriate short flag `-d`. This enables timeout-based subscription limiting for benchmark tests.
802-812: LGTM!

The `waitForTermination` method correctly delegates to `waitUntilInterruptedOrTimeout` and logs the exit reason. The type assertion is safe since oclif parses `--duration` as an integer flag.
4-10: LGTM!

The global test mocks declaration properly defines both `ablyRealtimeMock` and `ablyRestMock` to support testing both realtime and REST-based benchmark commands.
27-64: LGTM!Comprehensive mock setup properly simulates the Ably SDK interface including channels, connection state, presence, and auth. The
oncecallback for 'connected' event enables proper testing of connection-dependent flows.
158-238: LGTM!Publishing functionality tests properly verify command behavior by checking mock interactions. The use of
vi.waitForfor async assertions is appropriate for timing-sensitive tests.
289-342: LGTM!Subscription functionality tests properly verify that the subscriber enters presence with the correct role and subscribes to channel messages and presence events. This ensures proper coordination with publisher commands.
test/unit/commands/channels/history.test.ts (2)
14-49: Verify that the global mock approach aligns with Ably SDK testing guidelines. The coding guidelines specify: "When testing, mock the Ably SDK properly by stubbing the constructor (e.g., sinon.stub(Ably, 'Rest').returns(mockClient))."

This test uses a global `__TEST_MOCKS__` container instead of constructor stubbing. While this may be intentional as part of the migration to lightweight integration tests, please confirm this approach properly exercises the actual Ably SDK initialization code paths.

Based on coding guidelines for `**/*.test.ts`.
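For context, the `__TEST_MOCKS__` pattern under discussion can be sketched as below. The container shape, the `ablyRestMock` key, and the `createRestClient` helper are assumptions for illustration; the real CLI's client-creation code may differ:

```typescript
// Assumed shape: the command checks a well-known global before constructing
// a real Ably client, so unit tests can inject a lightweight stand-in.
interface RestLike {
  channels: { get(name: string): { publish(data: unknown): Promise<void> } };
}

function createRestClient(): RestLike {
  const mocks = (globalThis as { __TEST_MOCKS__?: { ablyRestMock?: RestLike } })
    .__TEST_MOCKS__;
  if (mocks?.ablyRestMock) {
    return mocks.ablyRestMock; // test path: no network, no real SDK
  }
  // The production path would construct a real Ably REST client here.
  throw new Error("real client construction elided in this sketch");
}

// A test registers the mock, runs the command logic, then asserts on calls.
const published: unknown[] = [];
(globalThis as Record<string, unknown>).__TEST_MOCKS__ = {
  ablyRestMock: {
    channels: {
      get: () => ({
        publish: async (data: unknown) => {
          published.push(data);
        },
      }),
    },
  },
};

createRestClient().channels.get("demo").publish({ text: "hi" });
```

The trade-off the review raises: this injects the mock after client construction is bypassed entirely, whereas constructor stubbing (e.g. with sinon) still exercises the code path that builds the client options.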
1-2: Likely an incorrect or invalid review comment.

test/unit/commands/channels/subscribe.test.ts (2)
11-266: Well-structured test refactoring. This refactoring successfully addresses the PR objectives by:

- Replacing class-based mocks with streamlined `runCommand`-based integration testing
- Verifying real command behavior through stdout assertions and mock interaction checks
- Testing actual channel configuration (lines 220-265) rather than just stubs
- Properly organizing tests into logical groups (help, validation, functionality, flags)

The flag configuration tests are particularly strong, as they verify the command correctly passes options to the Ably SDK.
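The `runCommand`-style assertion flow can be illustrated with a dependency-free stand-in. This is not `@oclif/test`'s real API; the helper and the toy command below are hypothetical, showing only the shape of the pattern (run the command, capture output and any thrown error, assert on what the user would see):

```typescript
type CommandFn = (log: (line: string) => void) => Promise<void>;

// Minimal runCommand stand-in: collect logged lines and surface errors
// instead of letting them escape, mirroring how the real helper returns
// { stdout, error } for assertions.
async function runCommand(
  cmd: CommandFn,
): Promise<{ stdout: string; error?: Error }> {
  const lines: string[] = [];
  try {
    await cmd((line) => lines.push(line));
    return { stdout: lines.join("\n") };
  } catch (error) {
    return { stdout: lines.join("\n"), error: error as Error };
  }
}

// A toy "channels list" command standing in for a real oclif command class.
const listChannels: CommandFn = async (log) => {
  for (const name of ["news", "sport"]) log(name);
};

runCommand(listChannels).then(({ stdout, error }) => {
  // A real test would assert on stdout, e.g. expect(stdout).toContain("news")
  console.log("captured:", stdout.split("\n").join(", "), "error:", error ?? "none");
});
```

The advantage over class-based `Testable*` wrappers is that assertions target observable CLI behavior rather than internal method overrides.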
134-160: Test has an implicit timeout via an environment variable, so there is no hanging concern. The subscribe command terminates properly in this test because the test environment (`vitest.config.ts`) sets `ABLY_CLI_DEFAULT_DURATION="0.25"` (250 ms), which `waitUntilInterruptedOrTimeout()` uses as a fallback timeout when no explicit duration flag is provided. The test is designed correctly and does not require additional timeout handling.

Likely an incorrect or invalid review comment.
test/unit/commands/channels/batch-publish.test.ts (1)
57-92: Argument-validation tests now exercise real CLI failure modes. The two argument-validation specs correctly drive the `channels:batch-publish` command via `runCommand` and assert on the resulting `error.message`, so they cover actual error paths (missing channels vs missing message) instead of just test scaffolding. That's a solid improvement in terms of FTF-156's goal of ensuring tests validate real behavior.
75301c3 to
ba7b4ed
Compare
denissellu
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this was a massive PR, but I think I got the gist of it. Just a quick question (maybe I missed it): should we add docs for the duration flag?
@denissellu done in ce13983
ce13983 to
1561578
Compare
8112f10 to
b00c926
Compare
f3ef6a4 to
5be359a
Compare
This is the final PR Bugbot will review for you during this billing cycle
Your free Bugbot reviews will reset on January 4
Details
Your team is on the Bugbot Free tier. On this plan, Bugbot will review limited PRs each billing cycle for each member of your team.
To receive Bugbot reviews on all of your PRs, visit the Cursor dashboard to activate Pro and start your 14-day free trial.
Bug: Signal handler prevents client cleanup causing resource leak
The cleanup() function was modified to remove client.close() with a comment that cleanup is handled by the base class finally() method. However, the signal handlers still call process.exit(0) immediately after cleanup(), which terminates the process synchronously and prevents the finally() method from ever executing. This results in the Ably Realtime connection not being properly closed when the user presses Ctrl+C, causing a resource leak. The command should either use waitUntilInterruptedOrTimeout() like other commands, or restore the client.close() call in the cleanup function.
src/commands/logs/push/subscribe.ts#L185-L200
ably-cli/src/commands/logs/push/subscribe.ts
Lines 185 to 200 in 5be359a
```ts
// Handle process termination
process.on("SIGINT", () => {
  if (!this.shouldOutputJson(flags)) {
    this.log("\nSubscription ended");
  }
  cleanup();
  process.exit(0); // Reinstated: Explicit exit on signal
});
process.on("SIGTERM", () => {
  cleanup();
  process.exit(0); // Reinstated: Explicit exit on signal
});
```
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
5be359a to
d54e00b
Compare
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand, nock HTTP mocking, and temporary config directory for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Includes proper cleanup with vi.restoreAllMocks(). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…tern Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Includes functional test for presence event display. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… pattern Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Uses vi.waitFor for deterministic async testing. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Includes functional tests for --location, --data, --x and --y flags. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Includes functional cipher test and proper cleanup with vi.clearAllMocks(). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and nock HTTP mocking for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and nock HTTP mocking for more realistic testing of actual command behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace Testable* wrapper pattern with runCommand and global mocks for more realistic testing of actual command behavior. Uses vi.waitFor for deterministic message count verification. Also adds test mode duration override to bench/subscriber.ts. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
We have a redundant job to "trigger" e2e tests - it only console prints, so skip that. We also only run them on certain changesets right now, which means we can't make them a required check. Remove this constraint to always run them.

The job just takes the uploaded results and prints them, offering no additional value. Let's remove it.
d54e00b to
a31bfc0
Compare
jamiehenson
left a comment
Similar to the previous DRY-up PR, this is not a comment on the quality of the offering, but there's no way I can develop a sufficient mental model of this with regard to test effectiveness, given the scope and my lack of familiarity with the CLI code itself.

Therefore, abstracting my analysis to AI summaries/reviews, green CI, and general vibes, this works.
There are numerous tests that don't test anything useful (e.g. asserting that a stub works instead of testing the actual command). There are also plenty of files that don't properly test anything. This PR improves the tests.
FTF-156
Note
Overhauls unit tests to exercise real CLI behavior, adds a duration flag to bench subscriber, and streamlines CI E2E workflows with minor docs updates.
- Add a `--duration` (`-d`) flag to `ably bench subscriber` and implement timed shutdown via `waitUntilInterruptedOrTimeout`.
- Refactor unit tests to use `@oclif/test`, `nock`, and global mocks (`__TEST_MOCKS__`) across commands: accounts (login/logout), auth (revoke-token), channels (batch-publish/history/list/presence/subscribe), rooms (messages), spaces, help, status, and benchmarks.
- Streamline CI E2E workflows in `e2e-web-cli-parallel.yml`.
- Document the `--duration` flag for `bench subscriber`.

Written by Cursor Bugbot for commit a31bfc0. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
- Added a `--duration` (`-d`) flag to the benchmark subscribe command, allowing you to specify how long to subscribe for in seconds instead of running indefinitely until manual interruption.

✏️ Tip: You can customize this high-level summary in your review settings.