
Conversation

@karya0
Contributor

@karya0 karya0 commented Jan 27, 2026

Summary

Several updates to bring the CloudAI AI-Dynamo workload integration up to speed with the latest AI-Dynamo code base and use cases.

Test Plan

  • Manual Verification
  • Some unit tests

@coderabbitai
Contributor

coderabbitai bot commented Jan 27, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Refactors AI Dynamo workload models and public API (GenAIPerf, LMCache, LMBench, WorkerConfig), adds percentile and OpenAI TTFT tooling and wrapper scripts, updates SLURM/Kubernetes execution (container paths, pod wrapper), rewrites ai_dynamo.sh, and adds/updates tests and toml configs.

Changes

  • Core AI Dynamo models & public API (src/cloudai/workloads/ai_dynamo/ai_dynamo.py, src/cloudai/workloads/ai_dynamo/__init__.py):
    Introduces Args/Workload/WorkerConfig, LMCacheArgs/LMCache, GenAIPerf/LMBench/CustomWorkload, Constraints; expands AIDynamoCmdArgs/AIDynamoTestDefinition, adds validation/installables/per-workload result checks; renames export GenAIPerfArgs → GenAIPerf and exposes new symbols.
  • Shell orchestration scripts, new/updated (src/cloudai/workloads/ai_dynamo/ai_dynamo.sh, src/cloudai/workloads/ai_dynamo/genai_perf.sh, src/cloudai/workloads/ai_dynamo/lmbench.sh, src/cloudai/workloads/ai_dynamo/custom_workload.sh):
    Large ai_dynamo.sh rewrite (install dirs, per-workload configs, node-listing, connector mapping, launch orchestration, new CLI flags); adds genai_perf wrapper, lmbench runner, and custom workload orchestration with KV-cache management.
  • CSV percentile tooling & reporting (src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py, src/cloudai/workloads/ai_dynamo/report_generation_strategy.py, tests/test_calc_percentile_csv.py):
    Adds calc_percentile_csv (compute_percentile, summarize, CLI) and tests; report strategy simplified to pandas-based extraction and generate_report stubbed.
  • OpenAI TTFT client (src/cloudai/workloads/ai_dynamo/openai_chat_client.py):
    New TTFT benchmarking module: TtftStats, Chat streaming, KVCacheFlusher, compute_kv_cache_token_size, and CLI orchestration for filler/baseline/post-flush runs with CSV/JSON output.
  • Kubernetes execution updates (src/cloudai/systems/kubernetes/kubernetes_system.py):
    Adds frontend-pod wrapper workflow: copies wrapper and percentile script into pod, chmods, builds serialized wrapper command (handles extra_args), execs wrapper, streams logs, copies results back, and handles ApiException cases.
  • Slurm strategy & container paths (src/cloudai/systems/slurm/slurm_command_gen_strategy.py, src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py):
    Adds container_install_path/container_results_path, centralizes container mounts, introduces nested TOML arg helpers and nested args handling, simplifies srun generation, updates node-count/overlap validation and logging.
  • Kubernetes JSON strategy adjustments (src/cloudai/workloads/ai_dynamo/kubernetes_json_gen_strategy.py):
    Switches internal handling to accept WorkerConfig (iterates over worker.args) for argument extraction and multinode flagging.
  • LMCache / LMBenchmark integration & artifacts (src/cloudai/workloads/ai_dynamo/*):
    Adds LMCache/LMBench artifacts and wiring across scripts and args; introduces calc_percentile_csv usage and per-workload artifact paths.
  • Tests & reference configs updated (tests/*, tests/ref_data/ai-dynamo.sbatch, conf/experimental/.../*.toml):
    Updates tests and fixtures to new types (GenAIPerf, WorkerConfig, WorkerBaseArgs, LMCache/LMBench), adds percentile unit tests, reworks sbatch and TOML to nested args and new CLI flags; updates container mounts and SLURM invocation paths.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I hopped through scripts and copied files in pods,
I wrapped perf runs, counted tokens and odds,
Percentiles sorted, chat clients in line,
I rewired launches, mounts, and config design,
A carrot-cheer for this bundle — tidy and fine.

🚥 Pre-merge checks | ✅ 1 | ❌ 1

❌ Failed checks (1 inconclusive)
  • Title check (❓ Inconclusive): The title 'AI-Dynamo updates' is vague; the generic term 'updates' does not convey the main changes. Resolution: provide a more specific title that highlights the primary change, such as 'Refactor AI-Dynamo workload models and add wrapper-based execution' or 'Add GenAIPerf/LMCache/LMBench public API and workspace abstraction'.

✅ Passed checks (1 passed)
  • Description check (✅ Passed): The description is related to the changeset; it indicates updates to bring the codebase in line with the AI-Dynamo base, which aligns with the substantial refactoring of workload models and public API shown in the raw summary.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@karya0
Contributor Author

karya0 commented Jan 27, 2026

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Jan 27, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 30

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
tests/test_acceptance.py (1)

469-479: Missing required name field causes pipeline failure.

GenAIPerf inherits from Benchmark which requires a name field. The pipeline error confirms:

GenAIPerf: field 'name' is required. ValidationError: 1 validation error for GenAIPerf; name field is missing.

Either provide name here, or fix the GenAIPerf class to have a default (preferred fix at the class level).

🐛 Workaround if class-level fix isn't applied
                     genai_perf=GenAIPerf(
+                        name="genai_perf",
                         **{
                             "streaming": True,
tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py (2)

89-121: Test expects report generation behavior that was removed.

This test calls strategy.generate_report() and then asserts that report.csv exists with specific content. However, generate_report() is now implemented as pass (no-op), so:

  1. report_file.is_file() assertion on line 94 will fail
  2. All subsequent content assertions are unreachable

Either the test should be removed/updated to match the new no-op behavior, or generate_report() needs a proper implementation.


124-147: Metric tests expect different file naming and metric key formats.

The test fixture creates profile_genai_perf.csv, but get_metric() now looks for genai_perf-report.csv. Additionally, the tests use kebab-case metric names like "output-token-throughput", but the implementation searches for exact matches in the CSV's "Metric" column (e.g., "Output Token Throughput (tokens/sec)").

These tests will return METRIC_ERROR instead of the expected values.

tests/slurm_command_gen_strategy/test_ai_dynamo_slurm_command_gen_strategy.py (1)

60-74: Missing required name field for GenAIPerf.

The pipeline failure indicates:

GenAIPerf: missing required field 'name' in definition

This instantiation needs the required name field.

🐛 Workaround
         genai_perf=GenAIPerf(
+            name="genai_perf",
             **{
                 "endpoint-type": "chat",
src/cloudai/systems/kubernetes/kubernetes_system.py (1)

379-389: Use list-based subprocess call for cp_logs_cmd.

Similar to the earlier commands, avoid shell=True for the log copy operation.

♻️ Suggested fix
-        cp_logs_cmd = " ".join(
-            [
-                "kubectl",
-                "cp",
-                f"{self.default_namespace}/{frontend_pod}:{genai_perf_results_path}",
-                str(job.test_run.output_path / "genai-perf"),
-            ]
-        )
+        cp_logs_cmd = [
+            "kubectl",
+            "cp",
+            f"{self.default_namespace}/{frontend_pod}:{genai_perf_results_path}",
+            str(job.test_run.output_path / "genai-perf"),
+        ]
         logging.debug(f"Copying genai-perf results with command: {cp_logs_cmd}")
-        p = subprocess.run(cp_logs_cmd, shell=True, capture_output=True, text=True)
+        p = subprocess.run(cp_logs_cmd, capture_output=True, text=True)
🤖 Fix all issues with AI agents
In `@src/cloudai/systems/kubernetes/kubernetes_system.py`:
- Around line 376-378: The current except block for lazy.k8s.client.ApiException
only logs the error and then allows execution to continue (risking copying
missing results); update the except to stop further processing by either
re-raising the caught exception (raise) or returning early from the enclosing
function/method so downstream code that copies results does not run; modify the
except block that catches lazy.k8s.client.ApiException (the one referencing
frontend_pod) to either raise the same exception (e) or return an appropriate
failure value immediately.
- Around line 311-316: The kubectl copy calls use subprocess.run with shell=True
and f-strings (wrapper_script_path, calc_csv_script_path, frontend_pod,
pod_wrapper_path, pod_calc_csv_path) which risks shell injection; change both
subprocess.run invocations in the method to pass the command as a list of
arguments (e.g. ["kubectl", "cp", wrapper_script_path,
f"{self.default_namespace}/{frontend_pod}:{pod_wrapper_path}"]) and remove
shell=True so Python executes the command directly; do the same for the calc CSV
call (use ["kubectl","cp", calc_csv_script_path,
f"{self.default_namespace}/{frontend_pod}:{pod_calc_csv_path}"]) while keeping
capture_output=True, text=True, and check=True.
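A minimal sketch of the list-based form described above, assuming the surrounding method looks roughly like the prompt describes (the helper name and commented call sites are illustrative, not the actual code):

import subprocess

def kubectl_cp(local_path: str, namespace: str, pod: str, pod_path: str) -> None:
    # argv is passed as a list, so the shell never interprets the paths (no shell=True)
    cmd = ["kubectl", "cp", local_path, f"{namespace}/{pod}:{pod_path}"]
    subprocess.run(cmd, capture_output=True, text=True, check=True)

# Hypothetical call sites mirroring the two copies named in the prompt:
# kubectl_cp(wrapper_script_path, self.default_namespace, frontend_pod, pod_wrapper_path)
# kubectl_cp(calc_csv_script_path, self.default_namespace, frontend_pod, pod_calc_csv_path)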

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py`:
- Around line 41-59: The Benchmark base model declares a required field name:
str but subclasses GenAIPerf and LMBench don't provide defaults, causing
validation errors when instantiated without args; update the GenAIPerf and
LMBench classes to declare a default name (e.g., name: str = "GenAIPerf" and
name: str = "LMBench" or other canonical defaults) so Pydantic validation
succeeds, ensuring any logic using Benchmark.report_name continues to reference
the subclass name.
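For illustration, a minimal Pydantic sketch of the class-level default being suggested; the real Benchmark/GenAIPerf/LMBench models carry many more fields, and the default strings here are assumptions:

from pydantic import BaseModel

class Benchmark(BaseModel):
    name: str  # required on the base model

class GenAIPerf(Benchmark):
    name: str = "genai_perf"  # assumed default; lets GenAIPerf(**{...}) validate without an explicit name

class LMBench(Benchmark):
    name: str = "lmbench"  # assumed default

assert GenAIPerf().name == "genai_perf"
assert LMBench(name="custom").name == "custom"  # explicit values still override the default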

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 864-895: The issue is that the variable CUFILE_ENV_PATH_JSON on
line 867 is not quoted, which can cause word splitting if the path contains
spaces. To fix this, quote the variable in the redirection operator when writing
to the file in the setup_cufile function, changing the line to use
"$CUFILE_ENV_PATH_JSON" instead of $CUFILE_ENV_PATH_JSON to ensure correct
handling of paths with spaces.
- Around line 852-862: The command line in the genai-perf wrapper should quote
all variable expansions to avoid word-splitting for paths/args: wrap
${genai_perf_wrapper_script}, $RESULTS_DIR, ${genai_perf_report_file},
${calc_percentile_csv_script}, $(_gpus_per_node), the
dynamo_args/genai_perf_argument/genai_perf_args expansions, the redirection
target ${RESULTS_DIR}/genai_perf.log, and the touch "$DONE_MARKER" in quotes;
update the invocation in the block that builds the genai-perf command
(references: genai_perf_wrapper_script, RESULTS_DIR, genai_perf_report_file,
calc_percentile_csv_script, _gpus_per_node, dynamo_args["genai-perf-cmd"],
genai_perf_arguments, genai_perf_args["--extra-args"], DONE_MARKER) so every
variable is used as "${VAR}" (or the appropriate quoted form for
associative/array elements) and the log file path is quoted too.
- Around line 993-1018: In launch_lmbench, guard the pushd and quote expansions:
check pushd "$RESULTS_DIR" and if it fails log an error and exit or touch
"$DONE_MARKER" and return; after a successful pushd ensure you popd on exit;
quote all variable expansions like "${lmbench_dir}", "${RESULTS_DIR}",
"${log_file}", "${calc_percentile_csv_script}", and use proper quoting when
invoking the command and args (e.g. "$cmd" "${lmbench_arguments}"
"${lmbench_args["--extra-args"]}" or convert cmd/args into an array) to avoid
word-splitting and globbing and ensure safe handling of spaces in
launch_lmbench.
- Around line 898-914: The setup_kvbm function currently does an unsafe rm -rf
on DYN_KVBM_DISK_CACHE_DIR; before removing, validate and normalize the path
(use realpath) and ensure it is non-root, absolute, does not contain globs, and
is within an allowed base directory (e.g., a configured CACHE_BASE like
/var/cache or /tmp) or matches a safe whitelist; if validation fails, log an
error and exit instead of deleting. Also ensure all uses of
${DYN_KVBM_DISK_CACHE_DIR} are quoted to avoid word-splitting, and only perform
rm -rf after these checks succeed; keep the call to setup_cufile unchanged.
- Around line 561-563: The install step for "uv" currently pipes curl to sh and
blindly sources $HOME/.local/bin/env; change it to download the installer to a
temp file (using curl -fL), verify integrity (checksum or signature) before
executing, run the installer explicitly and check its exit status, and only
source $HOME/.local/bin/env after confirming the file exists and the install
succeeded; on any failure log an error via the existing log function and exit
with non-zero status. If possible prefer a package manager installation path
(apt/yum) or document verification requirements instead of curl | sh.
- Around line 1031-1047: The nvidia-smi invocation in function
log_gpu_utilization builds the output filename without quoting environment
variables, so paths containing spaces break; update the nvidia-smi -f argument
to quote the constructed path using the RESULTS_DIR and SLURM_NODEID variables
(e.g., use "${RESULTS_DIR}/gpu_utilization-${SLURM_NODEID}.csv") so the shell
treats it as a single filename when calling nvidia-smi in log_gpu_utilization.
- Around line 789-797: The launch_lmcache_controller function currently invokes
${dynamo_args["lmcache-controller-cmd"]} unquoted which can cause word-splitting
and broken argument parsing; update the call in launch_lmcache_controller to run
the command safely by quoting the variable or invoking it via bash -c (e.g.,
pass "${dynamo_args["lmcache-controller-cmd"]}" to bash -c or use eval-safe
invocation) so that dynamo_args["lmcache-controller-cmd"] with spaces/args is
executed correctly and output still redirected to
${RESULTS_DIR}/lmcache_controller.log.
- Around line 174-183: The min function calls use unquoted parameter expansions
which can cause word-splitting; update the two invocations that set
genai_perf_args["--warmup-request-count"] and genai_perf_args["--request-count"]
to quote their arguments (reference the genai_perf_args array keys
"--warmup-request-count" and "--request-count" and the dynamo_args keys
"min-warmup-request-count" and "min-request-count") so the calls become min
"${genai_perf_args[... ]}" "${dynamo_args[... ]}" ensuring safe expansion.
- Around line 959-991: In function launch_single_shot: remove or use the
computed but unused variable max_ctx_tokens_following (derived from isl and
cache-hit-pct) — either pass it into the python call as
--max_ctx_tokens_following or delete the computation; add robust error handling
around pushd/popd by checking their return status (or replace with a safe cd and
trap to ensure popd happens) so the script exits on directory change failure;
and ensure all variable expansions are quoted (e.g., $lmcache_path,
$RESULTS_DIR, $DONE_MARKER, $url, ${dynamo_args[…]} and any command
substitutions) to avoid word-splitting and globbing issues.
- Around line 1061-1063: The script unconditionally does cd $RESULTS_DIR before
launching log_gpu_utilization, which can fail silently; update the ai_dynamo.sh
around the cd to verify the directory exists and the cd succeeds (e.g., test for
existence or create it, then attempt cd and on failure log an error and exit
non‑zero) before starting log_gpu_utilization in the background so the logging
function isn't started in the wrong working directory; reference the RESULTS_DIR
variable, the cd invocation, and the log_gpu_utilization function when making
the change.
- Around line 799-811: The clear_lmcache function is defined but never used;
either remove it or call it during LMCache setup/benchmark startup to ensure a
clean cache state — for example invoke clear_lmcache from the LMCache
initialization path (e.g., inside start_lmcache, setup_lmcache, or before
run_benchmarks) making sure lmcache_config is available and handle curl
failures/log non-zero exits, or delete the clear_lmcache function to avoid dead
code if you don't want an automatic cache clear.
- Around line 916-957: In setup_lmcache, add a safety check before any
destructive rm -rf to validate storage_cachedir (e.g., ensure it is non-empty,
not "/" or ".", resolve with realpath and verify it is under an expected base
like $RESULTS_DIR or contains the frontend node) and abort the removal if the
check fails; also quote all variable expansions to prevent word-splitting and
globbing — specifically quote $lmcache_path in the uv pip install call, quote
"$storage_cachedir" in rm -rf and mkdir -p, quote "$LMCACHE_CONFIG_FILE" for
export/rm/-redirects, and quote "$key", "$val" and "$nkey" in the echo redirects
when writing the config; update function setup_lmcache and references to
lmcache_path, storage_cachedir, LMCACHE_CONFIG_FILE, and lmcache_config
accordingly.
- Around line 815-820: The prefill paths are intentionally disabled here:
replace the hardcoded want_prefill=0 and have_prefill=0 (which currently comment
out calls to _expected_ready_prefill and _count_initialized_prefill) with an
explanatory inline comment stating why prefill workers are disabled (e.g.,
"prefill disabled for single-node testing/temporarily pending refactor —
decode-only mode") or, if not intentional, re-enable/refactor by restoring the
real calls to _expected_ready_prefill and _count_initialized_prefill and
removing unused prefill code; reference the variables want_prefill/have_prefill
and the helper functions _expected_ready_prefill/_count_initialized_prefill and
the commented prefill launch to locate the relevant code to update.
- Around line 416-419: The appended replacement value can be split if not
properly quoted; update the append that builds result so the expanded
replacement from replace_placeholders (the val variable derived from arr[$key]
in the loop) is quoted when concatenating—use the existing result concatenation
that includes ${key} but ensure the replacement is inserted as "${val}"
(preserving spaces/special chars) so modify the code around result+="${key}
${val} " to quote the val expansion.

In `@src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py`:
- Around line 61-65: The parse_float_safe function currently catches a broad
Exception; narrow it to only the expected float conversion failures by catching
ValueError and TypeError instead (i.e., change the except Exception to except
(ValueError, TypeError)) and keep the same behavior of returning float("nan") so
parse_float_safe(value: Any) still returns NaN for invalid inputs.
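A small sketch of the narrowed handler; the body is inferred from the described behavior (return NaN on invalid input), not copied from the module:

import math
from typing import Any

def parse_float_safe(value: Any) -> float:
    try:
        return float(value)
    except (ValueError, TypeError):  # only conversion failures; anything else propagates
        return float("nan")

assert parse_float_safe("3.5") == 3.5
assert math.isnan(parse_float_safe("n/a"))
assert math.isnan(parse_float_safe(None))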

In `@src/cloudai/workloads/ai_dynamo/genai_perf_wrapper.sh`:
- Line 60: The assignment to profile_path uses an unquoted $result_dir which can
cause globbing and word-splitting for paths with spaces; update the find
invocation (the profile_path assignment) to quote $result_dir (and any other
shell expansions in that command) so the path is treated as a single argument
and safe from splitting/globbing.
- Around line 47-52: The code flattens the genai_perf_cmdline_args array into a
single string losing argument boundaries and then tries to execute that string;
instead keep it as an array and execute it via the array expansion. Replace the
assignment genai_perf_cmdline_args="${genai_perf_cmdline_args[*]}" (remove it),
change the log to safely show arguments (for example use printf '%q '
"${genai_perf_cmdline_args[@]}" or join with spaces for display), and launch the
command using the array expansion "${genai_perf_cmdline_args[@]}" so arguments
with spaces or special characters are preserved when executing.
- Around line 62-66: The script currently overwrites the percentile report
because the grep line (`grep ".*,.*,.*,.*" $profile_path >
$result_dir/$report_file`) uses a single >; change it to append (>>) so the
earlier python3 `${calc_percentile_csv_script} $profile_path -o
$result_dir/$report_file` output is preserved. Replace the integer division for
`output_tokens_per_second_per_gpu=$(( $output_tokens_per_second / $total_gpus
))` with a floating-point calculation (e.g., using `bc` or `awk`, or
`output_tokens_per_second_per_gpu=$(awk -v a="$output_tokens_per_second" -v
b="$total_gpus" 'BEGIN{printf "%.6f", a/b}')`) to avoid truncation, and keep the
final echo that appends the value to `$result_dir/$report_file`.
- Around line 41-43: In the case statement of genai_perf_wrapper.sh the default
branch "*)" currently silences unknown arguments; update that branch (the "*)"
handler) to report unrecognized options by printing an error to stderr
(including the offending "$1" value) and either show brief usage/help or suggest
correct flags, then exit non‑zero (e.g., exit 1) so typos or unsupported args
are caught instead of ignored.

In `@src/cloudai/workloads/ai_dynamo/report_generation_strategy.py`:
- Around line 69-70: The generate_report method is currently a no-op and must be
implemented to satisfy test_ai_dynamo_generate_report: update the
ReportGenerationStrategy.generate_report implementation to create a report.csv
file (in the same output/location the tests expect) and write the expected CSV
headers and rows the test asserts against; use the class's available data
members or helper methods to collect the rows, open the file in write mode, and
write CSV-formatted content (including correct headers and values) so the test
can find the file and validate its contents.
- Around line 27-28: Remove the empty TYPE_CHECKING block because it contains
only a no-op pass; delete the lines containing "if TYPE_CHECKING:" and the
following "pass" in report_generation_strategy.py so there is no dead code
related to TYPE_CHECKING in the module.
- Line 23: Replace the direct pandas import with the project's lazy import
utility to avoid startup overhead: remove "import pandas as pd" and instead
reference LazyImports.pd (from src.cloudai.util.lazy_imports) wherever pandas is
used in ReportGenerationStrategy or this module; ensure you import the
LazyImports symbol (e.g., "from src.cloudai.util.lazy_imports import
LazyImports") and update any code that expects a module object to call
LazyImports.pd instead so pandas is loaded lazily only when needed.
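A rough sketch of the lazy-import pattern being asked for; the LazyImports module path and attribute name are taken from the prompt and may not match the repository exactly:

from functools import cached_property

class _LazyImports:
    # pandas is resolved on first attribute access instead of at module import time
    @cached_property
    def pd(self):
        import pandas
        return pandas

lazy_imports = _LazyImports()

def load_report_csv(path: str):
    # pandas is imported here, on first use, keeping module import cheap
    return lazy_imports.pd.read_csv(path)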

In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py`:
- Around line 242-244: The log may report the wrong "from" value because
self.test_run.num_nodes is modified earlier; capture the original value into a
local (e.g., original_num_nodes = self.test_run.num_nodes) before any
assignments and use that in the logging.info call (log "Setting num_nodes from
%d to %d", original_num_nodes, total_nodes) right before you set
self.test_run.num_nodes = total_nodes in the method that contains the current
code (referencing self.test_run.num_nodes and total_nodes in
slurm_command_gen_strategy.py).
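A small sketch of the ordering fix; the test_run/total_nodes names follow the prompt and the surrounding method is not shown:

import logging

def apply_total_nodes(test_run, total_nodes: int) -> None:
    original_num_nodes = test_run.num_nodes  # capture before any reassignment
    logging.info("Setting num_nodes from %d to %d", original_num_nodes, total_nodes)
    test_run.num_nodes = total_nodes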

In `@tests/json_gen_strategy/test_ai_dynamo.py`:
- Line 49: The test instantiates GenAIPerf without the required Benchmark field
"name", causing validation to fail; update the GenAIPerf construction in
tests/json_gen_strategy/test_ai_dynamo.py (the genai_perf=GenAIPerf(...) site)
to pass a string name (e.g., a short descriptive identifier) so the Benchmark
validation succeeds and CI stops failing.

In `@tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py`:
- Line 67: The GenAIPerf instantiation is missing the required Benchmark 'name'
argument; update the test where GenAIPerf() is constructed (the GenAIPerf
symbol) to pass a name string (e.g., GenAIPerf(name="genai_perf")) so the
required field from the Benchmark parent is provided and the test can construct
the object successfully.

In `@tests/test_calc_percentile_csv.py`:
- Line 18: The import statement "from pathlib import Path" in the test file is
unused; remove that import line so the test module no longer contains the unused
symbol Path and the CI lint error is resolved—locate the import at the top of
tests/test_calc_percentile_csv.py (the "from pathlib import Path" line) and
delete it.

Comment on lines 69 to +70
def generate_report(self) -> None:
    output_path = self.test_run.output_path
    source_csv = next(output_path.rglob(CSV_FILES_PATTERN))
    target_csv = output_path / "report.csv"

    total_gpus = self._calculate_total_gpus()
    if total_gpus is None:
        logging.warning("gpus_per_node is None, skipping Overall Output Tokens per Second per GPU calculation.")
        shutil.copy2(source_csv, target_csv)
        return

    sections = self._read_csv_sections(source_csv)
    self._write_sections_with_metric(target_csv, sections, total_gpus)
    pass

⚠️ Potential issue | 🟠 Major

generate_report is now a no-op, breaking test assertions.

The test_ai_dynamo_generate_report test expects generate_report() to produce a report.csv file with specific content, but this method now does nothing. Either the test needs updating or the implementation needs to be completed.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/report_generation_strategy.py` around lines
69 - 70, The generate_report method is currently a no-op and must be implemented
to satisfy test_ai_dynamo_generate_report: update the
ReportGenerationStrategy.generate_report implementation to create a report.csv
file (in the same output/location the tests expect) and write the expected CSV
headers and rows the test asserts against; use the class's available data
members or helper methods to collect the rows, open the file in write mode, and
write CSV-formatted content (including correct headers and values) so the test
can find the file and validate its contents.
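If the intent is to keep the old behavior rather than update the test, a sketch of a restored implementation, reusing the helpers visible in the removed code quoted above (it assumes _calculate_total_gpus, _read_csv_sections, _write_sections_with_metric, and CSV_FILES_PATTERN still exist after the refactor):

import logging
import shutil

def generate_report(self) -> None:
    output_path = self.test_run.output_path
    source_csv = next(output_path.rglob(CSV_FILES_PATTERN))  # assumed module-level glob pattern
    target_csv = output_path / "report.csv"

    total_gpus = self._calculate_total_gpus()
    if total_gpus is None:
        logging.warning("gpus_per_node is None, copying source CSV unchanged.")
        shutil.copy2(source_csv, target_csv)
        return

    sections = self._read_csv_sections(source_csv)
    self._write_sections_with_metric(target_csv, sections, total_gpus)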

@karya0 karya0 force-pushed the dynamo branch 8 times, most recently from 9ad787e to c4617f7 on January 27, 2026 at 21:53
@karya0
Contributor Author

karya0 commented Jan 27, 2026

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Jan 27, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/cloudai/workloads/ai_dynamo/ai_dynamo.sh (1)

1072-1074: Add error handling for cd to workspace path.

Line 1073 changes to ${dynamo_args["workspace-path"]} without checking if the operation succeeds or quoting the path. This could silently fail if the directory doesn't exist.

♻️ Proposed fix
   if _is_vllm; then
-    cd ${dynamo_args["workspace-path"]}
+    cd "${dynamo_args["workspace-path"]}" || {
+      log "ERROR: Failed to change to workspace directory: ${dynamo_args["workspace-path"]}"
+      exit 1
+    }
   fi
🤖 Fix all issues with AI agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py`:
- Around line 74-76: The serializer _serialize_repo currently returns the
misleading literal "/invalid/repo/path" when repo is None; change it to return
None (or empty string if your schema requires) instead to avoid masking
configuration errors: replace the else branch with "None" (i.e. return
v.container_mount if v else None), update the return type to Optional[str] if
necessary, and adjust any callers/consumers that expect a string to handle None
(or validate/raise an explicit error inside _serialize_repo if you prefer
failing-fast).
- Around line 183-187: The GitRepo instances are using SSH-style URLs which
break CI without SSH keys; update the GitRepo constructor calls (e.g., the
variable repo of type Optional[GitRepo] and the other GitRepo usage around the
second occurrence) to use HTTPS-formatted repository URLs (e.g.,
https://github.com/<org>/<repo>.git) instead of "git@github.com:..."; ensure
both places where GitRepo(...) is invoked are changed so CI/CD environments can
clone without SSH auth.

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 448-476: Separate the declaration and assignment to address SC2155
in exit_on_error: instead of combining local and command substitution for fatal,
first declare the variable (local fatal) and then assign it with
fatal=$(_detect_fatal_once); locate this in the exit_on_error function where
fatal is set (currently using _detect_fatal_once) and make the two-step change
so shellcheck warning SC2155 is resolved.
- Around line 210-211: The case branch for genai_perf args has a prefix
mismatch: the pattern uses `--genai_perf-args-*` but the extraction uses
`${key#--genai-perf-args-}`, so `${key#...}` won't strip the actual prefix and
will leave a leftover in the index; update them to be consistent (either change
the case pattern to `--genai-perf-args-*` or change the extraction to
`${key#--genai_perf-args-}`) in the branch handling `genai_perf_args` (look for
the `genai_perf_args["${key#...}"]="$2"` line and the `--genai_perf-args-*`
case) so the array keys are computed correctly.
- Around line 550-560: The workload-type checks (_is_genai_perf_workload,
_is_lmbench_workload, _is_single_shot_workload) rely on
dynamo_args["workload-type"] which may be unset; initialize a sensible default
for dynamo_args["workload-type"] (e.g., "genai-perf" or empty string) at
dynamo_args population/initialization point so those predicate functions behave
predictably when CLI input is absent, and ensure any existing code that sets
dynamo_args still allows the default to be overridden.

In `@src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py`:
- Around line 106-110: The current logic silently does nothing for completely
empty CSVs (no headers and no rows); detect this case early by checking rows
and/or the DictReader fieldnames and emit a user-friendly warning or logger
message before returning/continuing; update the code around the variables rows
and all_columns in calc_percentile_csv.py to log a clear warning (e.g., "empty
CSV: no headers or data") when both rows is empty and there are no fieldnames,
and then skip the numeric-summary loop that calls parse_float_safe and
append_summary so behavior is explicit and debuggable.
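A sketch of the explicit guard, assuming the summarize step reads rows via csv.DictReader as the prompt describes; the function name is illustrative:

import csv
import logging
from pathlib import Path

def read_rows(path: Path) -> list[dict[str, str]]:
    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        if not rows and not reader.fieldnames:
            # Completely empty file: no header row and no data rows to summarize.
            logging.warning("empty CSV: no headers or data in %s", path)
            return []
    return rows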

In `@src/cloudai/workloads/ai_dynamo/report_generation_strategy.py`:
- Around line 37-47: The extract_metric_from_csv function assumes a "Metric"
column exists but only checks metric_type; update extract_metric_from_csv to
first validate that "Metric" is in df.columns (in addition to metric_type), and
if missing log a clear message including df.columns and return METRIC_ERROR;
then proceed to check metric_name in df["Metric"].values and return the metric
as before. Ensure you reference the existing symbols metric_type, metric_name,
METRIC_ERROR and use the same logging style as the other checks.
- Around line 55-56: The current metric parsing in report_generation_strategy.py
unpacks metric.split(":") into three variables which raises ValueError for
inputs with not exactly three parts; modify the parsing in the block handling
the variable metric (the code that currently does `benchmark_name, metric_name,
metric_type = metric.split(":")`) to safely split with at most two splits (e.g.
split(":", 2)) and then normalize missing parts with sensible defaults (e.g. set
metric_type to a default like "unknown" or None when absent) or validate and
reject malformed metrics with a clear error/log message; ensure the change is
applied wherever the metric variable is processed so downstream code that
expects benchmark_name, metric_name, and metric_type receives well-defined
values.
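A sketch covering both points above: tolerant parsing of the metric spec and an explicit column check before the lookup. The 'benchmark:name:type' format, the 'Metric' column, and METRIC_ERROR come from the prompts; the default statistic is an assumption:

import logging
import pandas as pd

METRIC_ERROR = -1.0  # stand-in for the module's actual sentinel

def parse_metric_spec(metric: str) -> tuple[str, str, str]:
    parts = metric.split(":", 2)  # never raises for fewer than three parts
    benchmark_name = parts[0]
    metric_name = parts[1] if len(parts) > 1 else ""
    metric_type = parts[2] if len(parts) > 2 else "avg"  # assumed default statistic
    return benchmark_name, metric_name, metric_type

def extract_metric_from_csv(df: pd.DataFrame, metric_name: str, metric_type: str) -> float:
    if "Metric" not in df.columns or metric_type not in df.columns:
        logging.error("Report CSV missing expected columns; have %s", list(df.columns))
        return METRIC_ERROR
    matches = df.loc[df["Metric"] == metric_name, metric_type]
    if matches.empty:
        logging.error("Metric %r not found in report CSV", metric_name)
        return METRIC_ERROR
    return float(matches.iloc[0])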

In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py`:
- Around line 193-194: The assert messages incorrectly reference
dynamo.num_prefill_nodes and dynamo.num_decode_nodes; update the assertions in
slurm_command_gen_strategy.py that validate prefill_n and decode_n so the error
messages reflect the actual attributes (e.g., "prefill_worker.num_nodes must be
an integer" and "decode_worker.num_nodes must be an integer") or use the
variable names ("prefill_n must be an integer", "decode_n must be an integer")
for clarity; change the assert statements that check isinstance(prefill_n, int)
and isinstance(decode_n, int) to use one of these corrected messages.

In `@tests/ref_data/ai-dynamo.sbatch`:
- Line 31: Remove the duplicate `--export=ALL` flag from the sbatch command in
the script so only one `--export=ALL` remains; locate the second occurrence of
`--export=ALL` (the duplicate entry added near the sbatch options) and delete
it, leaving the original instance intact.
- Line 69: The generated SLURM argument construction in
slurm_command_gen_strategy.py (the list comprehension building args from
toml_args using prefix and the f-string at the comprehension) wraps values with
double quotes which breaks JSON-containing values; change the construction to
either wrap values in single quotes when the value contains double quotes or to
escape inner double quotes (e.g., detect if v contains '"' then use
f"{prefix}{k} '{v}'" or escape quotes) so that JSON like {"temperature": 0.7,
"max_tokens": 128} is emitted as a single shell-safe token (use the same change
where args is built and any helper that formats values).
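A sketch of shell-safe value formatting using shlex.quote, which handles both the single-quote and embedded-double-quote cases the prompt mentions; the prefix/toml_args naming and the example flag follow the prompt and are not verbatim from the strategy:

import shlex

def format_nested_args(prefix: str, toml_args: dict) -> list[str]:
    # shlex.quote keeps each value a single shell token, even when it contains quotes or spaces
    return [f"{prefix}{k} {shlex.quote(str(v))}" for k, v in toml_args.items()]

args = format_nested_args("--genai-perf-args-", {"extra-inputs": '{"temperature": 0.7, "max_tokens": 128}'})
print(args[0])
# --genai-perf-args-extra-inputs '{"temperature": 0.7, "max_tokens": 128}'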

In `@tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py`:
- Around line 117-119: The test empties the wrong CSV due to a filename
mismatch: replace the hyphenated filename with the underscore variant so the
existing test CSV is cleared; specifically, update the write_text call that
currently targets (ai_dynamo_tr.output_path / "genai_perf-report.csv") to target
"genai_perf_report.csv" so the subsequent call to
strategy.get_metric("invalid-metric") exercises the empty-file error path
instead of creating a new file.
- Around line 86-91: The test currently only asserts True; instead validate a
concrete behavior of AIDynamoReportGenerationStrategy.generate_report: call
strategy.generate_report(), capture its return value and assert it is None (or
the expected sentinel), and/or assert no report file was created by checking the
expected report path derived from the strategy (use
AIDynamoReportGenerationStrategy.report_path or the strategy instance to
determine the output file) or check a relevant side-effect on the ai_dynamo_tr
fixture; update test_ai_dynamo_generate_report to replace the tautological
assert with one or two assertions verifying the method's return and absence (or
presence if expected) of the report file or expected state change.

Comment on lines 74 to 76
@field_serializer("repo")
def _serialize_repo(self, v: GitRepo) -> str:
return v.container_mount if v else "/invalid/repo/path"

⚠️ Potential issue | 🟡 Minor

Serializer returns misleading path for None repo.

When repo is None, the serializer returns "/invalid/repo/path" which could mask configuration issues downstream. Consider returning None or an empty string, or raising an error during serialization if the repo is unexpectedly missing.

🐛 Proposed fix
     @field_serializer("repo")
-    def _serialize_repo(self, v: GitRepo) -> str:
-        return v.container_mount if v else "/invalid/repo/path"
+    def _serialize_repo(self, v: Optional[GitRepo]) -> Optional[str]:
+        return v.container_mount if v else None
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py` around lines 74 - 76, The
serializer _serialize_repo currently returns the misleading literal
"/invalid/repo/path" when repo is None; change it to return None (or empty
string if your schema requires) instead to avoid masking configuration errors:
replace the else branch with "None" (i.e. return v.container_mount if v else
None), update the return type to Optional[str] if necessary, and adjust any
callers/consumers that expect a string to handle None (or validate/raise an
explicit error inside _serialize_repo if you prefer failing-fast).

Comment on lines +183 to +193
repo: Optional[GitRepo] = GitRepo(
url="git@github.com:LMCache/LMCache.git",
commit="ab8530993992db873869ba882320953582d94309",
mount_as="/git/LMCache",
)

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

rg -n "GitRepo\(" --type=py -A3 | grep -E "(url=|https://|git@)"

Repository: NVIDIA/cloudai

Length of output: 1971


Use HTTPS URLs for GitRepo to improve CI/CD compatibility.

The SSH URLs at lines 184 and 214 should be changed to HTTPS format (https://github.com/...) to ensure compatibility in environments without SSH key authentication. This aligns with HTTPS URLs used elsewhere in the codebase (e.g., nemo_launcher.py, megatron_bridge.py, and in ai_dynamo.py at line 255).

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py` around lines 183 - 187, The
GitRepo instances are using SSH-style URLs which break CI without SSH keys;
update the GitRepo constructor calls (e.g., the variable repo of type
Optional[GitRepo] and the other GitRepo usage around the second occurrence) to
use HTTPS-formatted repository URLs (e.g., https://github.com/<org>/<repo>.git)
instead of "git@github.com:..."; ensure both places where GitRepo(...) is
invoked are changed so CI/CD environments can clone without SSH auth.

Comment on lines +448 to 472
function perform_exit()
{
  local exit_code=$1
  local sleep_before_exit="${dynamo_args["sleep-before-exit"]}"
  if [[ -n "${sleep_before_exit}" ]]; then
    log "Sleeping for ${sleep_before_exit} seconds before exit"
    sleep "${sleep_before_exit}"
  fi
  exit "${exit_code}"
}

exit_on_error() {
  local fatal=$(_detect_fatal_once)
  if [ -f "${DONE_MARKER}" ]; then
    log "DONE_MARKER found. Skipping error check."
    return
  fi
  if [[ "${fatal}" -gt 0 ]]; then
    log "FATAL: detected ${fatal} fatal error line(s). Writing ${FATAL_ERROR_MARKER} and terminating."
    sleep 1

    touch "${FATAL_ERROR_MARKER}"
    grep -E "${dynamo_args["worker-error-pattern"]}|UCX.*ERROR" "${RESULTS_DIR}"/*.log 2>/dev/null > "${FATAL_ERROR_MARKER}"
    # Try to stop background jobs for a cleaner exit, but do not loop
    kill $(jobs -p) 2>/dev/null || true
    # Exit non-zero so srun can retry
    exit 1
    perform_exit 1
  fi
}

🧹 Nitpick | 🔵 Trivial

LGTM with minor suggestion.

The error handling improvements are good:

  • perform_exit provides configurable exit delay for debugging
  • Checking DONE_MARKER before error detection prevents false positives after successful runs

Minor: Line 460 could separate declaration and assignment per SC2155, but this is stylistic.

🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 460-460: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 472-472: Quote this to prevent word splitting.

(SC2046)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 448 - 476,
Separate the declaration and assignment to address SC2155 in exit_on_error:
instead of combining local and command substitution for fatal, first declare the
variable (local fatal) and then assign it with fatal=$(_detect_fatal_once);
locate this in the exit_on_error function where fatal is set (currently using
_detect_fatal_once) and make the two-step change so shellcheck warning SC2155 is
resolved.

Comment on lines +193 to +194
assert isinstance(prefill_n, int), "dynamo.num_prefill_nodes must be an integer"
assert isinstance(decode_n, int), "dynamo.num_decode_nodes must be an integer"

⚠️ Potential issue | 🟡 Minor

Assertion messages reference non-existent attribute names.

The messages say dynamo.num_prefill_nodes and dynamo.num_decode_nodes, but the actual attributes being checked are prefill_worker.num_nodes and decode_worker.num_nodes.

🐛 Proposed fix
-        assert isinstance(prefill_n, int), "dynamo.num_prefill_nodes must be an integer"
-        assert isinstance(decode_n, int), "dynamo.num_decode_nodes must be an integer"
+        assert isinstance(prefill_n, int), "prefill_worker.num_nodes must be an integer"
+        assert isinstance(decode_n, int), "decode_worker.num_nodes must be an integer"
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py` around lines
193 - 194, The assert messages incorrectly reference dynamo.num_prefill_nodes
and dynamo.num_decode_nodes; update the assertions in
slurm_command_gen_strategy.py that validate prefill_n and decode_n so the error
messages reflect the actual attributes (e.g., "prefill_worker.num_nodes must be
an integer" and "decode_worker.num_nodes must be an integer") or use the
variable names ("prefill_n must be an integer", "decode_n must be an integer")
for clarity; change the assert statements that check isinstance(prefill_n, int)
and isinstance(decode_n, int) to use one of these corrected messages.

--ntasks=2 \
--ntasks-per-node=1 \
--export=ALL,DYNAMO_FATAL_ERROR_FILE=fatal_error.marker \
--export=ALL \

🧹 Nitpick | 🔵 Trivial

Duplicate --export=ALL flag.

Line 31 duplicates --export=ALL which is already specified on line 23. While not harmful, it adds unnecessary redundancy to the command.

♻️ Suggested fix
  --nodes=2 \
  --ntasks=2 \
  --ntasks-per-node=1 \
-  --export=ALL \
  --output=__OUTPUT_DIR__/output/node-%n-stdout.txt \
🤖 Prompt for AI Agents
In `@tests/ref_data/ai-dynamo.sbatch` at line 31, Remove the duplicate
`--export=ALL` flag from the sbatch command in the script so only one
`--export=ALL` remains; locate the second occurrence of `--export=ALL` (the
duplicate entry added near the sbatch options) and delete it, leaving the
original instance intact.

Comment on lines 86 to +92
def test_ai_dynamo_generate_report(slurm_system: SlurmSystem, ai_dynamo_tr: TestRun, csv_content: str) -> None:
    strategy = AIDynamoReportGenerationStrategy(slurm_system, ai_dynamo_tr)
    # The new implementation does not generate a report file
    strategy.generate_report()

    report_file = ai_dynamo_tr.output_path / "report.csv"
    assert report_file.is_file(), "Report CSV was not generated."

    report_content = report_file.read_text()

    def split_into_sections(content: str) -> list[str]:
        sections = content.split("\n\n")
        return [s.strip() for s in sections if s.strip()]

    def normalize_csv_section(section: str) -> str:
        return section.replace('"', "").strip()

    actual_sections = [normalize_csv_section(s) for s in split_into_sections(report_content)]
    expected_sections = [normalize_csv_section(s) for s in split_into_sections(csv_content)]

    # First section should match after normalization
    assert actual_sections[0] == expected_sections[0], "First section (metrics) does not match"

    # Second section should have our additional metric
    second_section_lines = actual_sections[1].split("\n")
    assert second_section_lines[0] == "Metric,Value", "Second section header does not match"
    assert second_section_lines[1] == "Output Token Throughput (tokens/sec),24", "Throughput line does not match"
    assert second_section_lines[2] == "Overall Output Tokens per Second per GPU,1.0", "Added metric line is incorrect"
    assert second_section_lines[3:] == ["Request Throughput (per sec),1.23", "Request Count (count),40.00"], (
        "Remaining lines do not match"
    )

    # Third section (GPU metrics) should be identical
    assert actual_sections[2] == expected_sections[2], "Third section (GPU metrics) does not match"
    # Just verify the method runs without error
    assert True

🧹 Nitpick | 🔵 Trivial

Consider adding a meaningful assertion.

The test only verifies that generate_report() runs without error. An assert True is a tautology. Consider asserting on a specific behavior or removing the redundant assertion.

♻️ Suggested improvement
 def test_ai_dynamo_generate_report(slurm_system: SlurmSystem, ai_dynamo_tr: TestRun, csv_content: str) -> None:
     strategy = AIDynamoReportGenerationStrategy(slurm_system, ai_dynamo_tr)
     # The new implementation does not generate a report file
-    strategy.generate_report()
-    # Just verify the method runs without error
-    assert True
+    # Verify the method runs without raising an exception
+    strategy.generate_report()  # No assertion needed - no exception = success
🤖 Prompt for AI Agents
In
`@tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py`
around lines 86 - 91, The test currently only asserts True; instead validate a
concrete behavior of AIDynamoReportGenerationStrategy.generate_report: call
strategy.generate_report(), capture its return value and assert it is None (or
the expected sentinel), and/or assert no report file was created by checking the
expected report path derived from the strategy (use
AIDynamoReportGenerationStrategy.report_path or the strategy instance to
determine the output file) or check a relevant side-effect on the ai_dynamo_tr
fixture; update test_ai_dynamo_generate_report to replace the tautological
assert with one or two assertions verifying the method's return and absence (or
presence if expected) of the report file or expected state change.

Comment on lines +117 to +120
# Empty the CSV file to test error handling
(ai_dynamo_tr.output_path / "genai_perf-report.csv").write_text("")
assert strategy.get_metric("invalid-metric") == METRIC_ERROR

⚠️ Potential issue | 🟡 Minor

Filename mismatch: genai_perf-report.csv vs genai_perf_report.csv.

Line 68 creates genai_perf_report.csv (underscore), but line 118 writes to genai_perf-report.csv (hyphen). This test writes to a non-existent file (creating a new empty one) rather than clearing the existing test data file. The subsequent assertion passes but doesn't test the intended error-handling path for an empty CSV.

🐛 Proposed fix
     # Empty the CSV file to test error handling
-    (ai_dynamo_tr.output_path / "genai_perf-report.csv").write_text("")
+    (ai_dynamo_tr.output_path / "genai_perf_report.csv").write_text("")
     assert strategy.get_metric("invalid-metric") == METRIC_ERROR
🤖 Prompt for AI Agents
In
`@tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py`
around lines 117 - 119, The test empties the wrong CSV due to a filename
mismatch: replace the hyphenated filename with the underscore variant so the
existing test CSV is cleared; specifically, update the write_text call that
currently targets (ai_dynamo_tr.output_path / "genai_perf-report.csv") to target
"genai_perf_report.csv" so the subsequent call to
strategy.get_metric("invalid-metric") exercises the empty-file error path
instead of creating a new file.

@karya0 karya0 force-pushed the dynamo branch 3 times, most recently from 14a051b to e433300 on February 1, 2026 at 04:16
karya0 and others added 2 commits January 31, 2026 20:31
Signed-off-by: Kapil Arya <kapila@nvidia.com>
- Remove unused imports (field_serializer, GitRepo, File)
- Fix code formatting (ruff format)
- Fix TOML indentation (taplo fmt)
- Fix pyright type error in _get_nested_toml_args
- Fix GitRepo dict check to not require installed_path
- Update test reference file for LMBench field changes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix clear_kv_cache to use /clear_kv_blocks endpoint (not /reset_prefix_cache)
- Add explicit port fields to AIDynamoArgs (port, etcd_port, nats_port)
- Pass connector and kvbm_metrics_port from ai_dynamo.sh to workload scripts
- Update custom_workload.sh to receive all port config via command line args
- Only fetch KVBM metrics when connector is kvbm and port is specified

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@karya0 karya0 marked this pull request as ready for review February 1, 2026 10:31
@greptile-apps
Contributor

greptile-apps bot commented Feb 1, 2026

Greptile Overview

Greptile Summary

This PR updates the experimental AI-Dynamo workload integration and scenarios to match newer AI-Dynamo use cases. Key changes include:

  • Refactoring AIDynamoTestDefinition/AIDynamoCmdArgs to support multiple workloads (genai-perf, aiperf, lmbench, kvstorage) and new helper scripts (openai_chat_client.py, calc_percentile_csv.py).
  • Updating Slurm and Kubernetes execution paths: Slurm command generation now emits nested --*-args-* parameters and uses container install/results paths; Kubernetes now copies a wrapper + helper script into the frontend pod and executes the wrapper around genai-perf profile.
  • Reworking report/metric extraction to read *_report.csv outputs.

The overall direction fits the existing CloudAI architecture: test definitions provide installables and command-gen/report strategies interpret cmd_args and output artifacts. However, a few issues in the new scripts/strategies can cause immediate runtime failures (see comments).

Confidence Score: 2/5

  • Not safe to merge as-is due to likely runtime failures in Slurm pathing/mounts and in new helper scripts.
  • Multiple issues will deterministically break common execution paths: Slurm container mounts no longer include required script/repo mounts while the generated command references those container paths; constraint_check can raise TypeError with default/omitted TP/PP; new helper CLI can crash when required args are missing; and openai_chat_client has incorrect binary-search bounds.
  • src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py, src/cloudai/workloads/ai_dynamo/ai_dynamo.py, src/cloudai/workloads/ai_dynamo/openai_chat_client.py, src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py

Important Files Changed

Filename | Overview
conf/experimental/ai_dynamo/test/vllm.toml Updates AI-Dynamo test config TOML (no functional Python code changes).
conf/experimental/ai_dynamo/test_scenario/vllm_k8s.toml Updates Kubernetes scenario TOML for vLLM AI-Dynamo (config-only changes).
conf/experimental/ai_dynamo/test_scenario/vllm_kvbm_slurm.toml Updates SLURM kvbm scenario TOML (config-only changes).
conf/experimental/ai_dynamo/test_scenario/vllm_slurm.toml Updates SLURM vLLM scenario TOML (config-only changes).
src/cloudai/systems/kubernetes/kubernetes_system.py Adds wrapper/script copy + new wrapper invocation for genai-perf in pods; introduces shell=True kubectl cp usage (already noted in prior threads) and new command assembly paths.
src/cloudai/systems/slurm/slurm_command_gen_strategy.py Refactors container mount paths for Slurm system strategy (low risk path constant changes).
src/cloudai/workloads/ai_dynamo/__init__.py Updates AI-Dynamo workload package exports to reflect new workload/config classes.
src/cloudai/workloads/ai_dynamo/ai_dynamo.py Large refactor adding Workload/WorkerConfig and multiple benchmark configs; introduces constraint_check with None/list arithmetic risk (already noted in prior threads).
src/cloudai/workloads/ai_dynamo/ai_dynamo.sh Updates main AI-Dynamo bash workflow script; prior-thread issues include unquoted arg expansions and arithmetic on empty vars.
src/cloudai/workloads/ai_dynamo/aiperf.sh Adds/updates aiperf wrapper script; prior-thread issue includes eval-based command injection risk.
src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py Adds new CSV summarization helper with percentile computations; CLI lacks required --input guard and will crash if omitted.
src/cloudai/workloads/ai_dynamo/genai_perf.sh Updates genai-perf wrapper script; prior-thread issue includes eval-based command injection risk.
src/cloudai/workloads/ai_dynamo/kubernetes_json_gen_strategy.py Adjusts k8s JSON generation to use WorkerConfig.args instead of direct WorkerBaseArgs (straightforward refactor).
src/cloudai/workloads/ai_dynamo/kvstorage.sh Renamed/updated kvstorage workload script; prior-thread issues include possible division by zero in KV sizing.
src/cloudai/workloads/ai_dynamo/lmbench.sh Updates lmbench workload script; prior-thread issues include undefined variables and output_file ordering.
src/cloudai/workloads/ai_dynamo/openai_chat_client.py Adds OpenAI chat client script; contains a logic bug in min_filler_prompts calculation and can crash if --out is not provided (already noted).
src/cloudai/workloads/ai_dynamo/report_generation_strategy.py Refactors report strategy to pandas and changes can_handle_directory to always True while generate_report is a no-op (already flagged in prior threads).
src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py Extends Slurm command generation for multiple workloads; still emits shell-unsafe dict repr strings for nested dict args (already flagged in prior threads) and changes mounts.
tests/json_gen_strategy/test_ai_dynamo.py Updates JSON gen tests for new WorkerConfig + new workload objects.
tests/ref_data/ai-dynamo.sbatch Updates golden sbatch fixture; contains shell-unsafe quoting for JSON/dict args (already flagged in prior threads).
tests/report_generation_strategy/test_ai_dynamo_report_generation_strategy.py Updates report strategy tests; prior-thread issue includes editing wrong filename so empty-file branch isn’t exercised.
tests/slurm_command_gen_strategy/test_ai_dynamo_slurm_command_gen_strategy.py Updates slurm command gen tests for new arg serialization; depends on currently shell-unsafe dict arg formatting (already flagged).
tests/test_acceptance.py Updates acceptance tests for new AI-Dynamo workflow behavior (test-only changes).
tests/test_calc_percentile_csv.py Adds unit tests for new calc_percentile_csv helpers (covers percentile and parsing logic).

Sequence Diagram

sequenceDiagram
  participant Orchestrator as CloudAI Orchestrator
  participant TD as AIDynamoTestDefinition
  participant Slurm as AIDynamoSlurmCommandGenStrategy
  participant K8s as KubernetesSystem
  participant Pod as Frontend Pod

  Orchestrator->>TD: installables()
  TD-->>Orchestrator: docker_image + scripts + repos

  alt Slurm run
    Orchestrator->>Slurm: generate run command
    Slurm->>Slurm: _container_mounts()
    Slurm->>Slurm: _gen_script_args()
    Slurm-->>Orchestrator: srun ... bash /cloudai_install/ai_dynamo.sh ...
    Note over Slurm,Orchestrator: Requires install dir + dynamo repo mounts
  else Kubernetes run
    Orchestrator->>K8s: _run_genai_perf(job)
    K8s->>K8s: kubectl cp genai_perf.sh, calc_percentile_csv.py
    K8s->>Pod: chmod +x /tmp/genai_perf.sh
    K8s->>Pod: exec wrapper + genai-perf profile
    Pod-->>K8s: genai_perf.log + report CSV
  end

  Orchestrator->>TD: was_run_successful(output_path)
  TD-->>Orchestrator: checks <workload>_report.csv exists

  Orchestrator->>TD: constraint_check(tr)
  TD-->>Orchestrator: validates TP/PP vs GPU/node and node counts

Contributor

@greptile-apps greptile-apps bot left a comment


6 files reviewed, 2 comments


Comment on lines +361 to +366
prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
prefill_nodes = prefill_worker.num_nodes if prefill_worker else 0
decode_nodes = decode_worker.num_nodes if decode_worker else 1

accessing .args.tensor_parallel_size without checking if .args exists first

Suggested change
-prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
-decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
-prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
-decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
+prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker and prefill_worker.args else 1
+decode_tp = decode_worker.args.tensor_parallel_size if decode_worker and decode_worker.args else 1
+prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker and prefill_worker.args else 1
+decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker and decode_worker.args else 1
 prefill_nodes = prefill_worker.num_nodes if prefill_worker else 0
 decode_nodes = decode_worker.num_nodes if decode_worker else 1

Comment on lines +329 to +330
prefill_config["gpus-per-worker"]=$(( prefill_args["--tensor-parallel-size"] * prefill_args["--pipeline-parallel-size"] ))
decode_config["gpus-per-worker"]=$(( decode_args["--tensor-parallel-size"] * decode_args["--pipeline-parallel-size"] ))

arithmetic will fail if --tensor-parallel-size or --pipeline-parallel-size are unset (empty string results in invalid arithmetic)
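A minimal guard along these lines keeps the arithmetic valid when either flag is unset; the associative-array names (prefill_args, decode_args, prefill_config, decode_config) follow the snippet above, and the fallback of 1 is an assumption about the intended default:

prefill_tp="${prefill_args["--tensor-parallel-size"]:-1}"
prefill_pp="${prefill_args["--pipeline-parallel-size"]:-1}"
decode_tp="${decode_args["--tensor-parallel-size"]:-1}"
decode_pp="${decode_args["--pipeline-parallel-size"]:-1}"
prefill_config["gpus-per-worker"]=$(( prefill_tp * prefill_pp ))
decode_config["gpus-per-worker"]=$(( decode_tp * decode_pp ))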

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 23

🤖 Fix all issues with AI agents
In `@src/cloudai/systems/kubernetes/kubernetes_system.py`:
- Around line 347-356: The wrapper flag used in the wrapper_cmd array is
incorrect: genai_perf.sh expects --report_name but the code passes --report_file
(see variables wrapper_cmd, report_file, pod_wrapper_path); update the argument
list in wrapper_cmd to use "--report_name" (or add both "--report_name" and
"--report_file" if you want backward compatibility) so the script receives the
intended report name (ensure the value remains report_file and keep
pod_calc_csv_path and genai_perf_results_path unchanged).

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py`:
- Around line 361-366: The comparisons can raise TypeError because
tensor_parallel_size and pipeline_parallel_size default to None; coerce those
fields to an integer before using them in constraint checks by replacing the
current assignments for prefill_tp, decode_tp, prefill_pp, decode_pp with
guarded/coerced values (e.g., treat None as 1) so that prefill_tp, decode_tp,
prefill_pp, decode_pp are always ints; locate the assignments to prefill_tp,
decode_tp, prefill_pp, decode_pp that reference prefill_worker.args and
decode_worker.args and change them to use a None-check/fallback (e.g., value if
value is not None else 1) so subsequent comparisons like prefill_tp > decode_tp
do not fail.

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 968-999: The launch_workload function currently uses eval with
expanded_config/expanded_arguments which is unsafe; instead build a proper
argument array and exec it directly: create a local -a cmd_array and append the
command and all fixed flags (--install_dir, --result_dir, --model, --url, etc.)
using the existing namerefs workload_config_ref/workload_args_ref and
dynamo_args values (so you pull values like "${dynamo_args["model"]}" into the
array as separate elements), then append any config/argument items as separate
array elements (don't use array_to_args string interpolation), and run it as
"${cmd_array[@]}" > "${RESULTS_DIR}/$workload_name.log" 2>&1; this removes eval
and preserves correct quoting for
INSTALL_DIR/RESULTS_DIR/script/port/endpoint/connector and the workload name (a sketch of this array-based launch appears after this list).

In `@src/cloudai/workloads/ai_dynamo/calc_percentile_csv.py`:
- Around line 83-93: The CLI currently allows omitting --input which makes
input_path None and causes open(None) errors; update the argparse configuration
in main() by making the input argument required (e.g., change
parser.add_argument("--input", "-i", ...) to include required=True) and keep the
existing output_path logic (output_path = args.output or
f"{input_path}_summary.csv"); optionally add a short validation to raise a clear
error if args.input is falsy before opening the file to ensure meaningful
feedback.

In `@src/cloudai/workloads/ai_dynamo/custom_workload.sh`:
- Around line 160-163: The TODO notes that LMCache isn’t being cleared: when the
connector variable equals "lmcache" the script should call the clear_lmcache
function (or log/track the action) but currently it’s commented out; update
custom_workload.sh to detect if [[ "$connector" == "lmcache" ]] then invoke
clear_lmcache and emit a log message (or, if you don’t implement clearing now,
add a clear_lmcache TODO log that opens an issue and reference that issue id),
and ensure clear_lmcache is defined or sourced before use so the call in the
connector branch (clear_lmcache) will succeed.
- Around line 24-38: Remove the unused local variable by deleting the
declaration "local args=\"$@\"" from the parse_custom_workload_args function;
leave the rest of the function (the while loop, case branches, and use of
workload_args) intact so argument parsing still works and ShellCheck no longer
reports an unused variable.
- Around line 166-188: The main() function currently proceeds even when required
variables (set by process_args) like result_dir/result_name, install_dir, model,
url, port or all_isl are empty; add validation after process_args "$@" to check
each required variable (e.g., result_dir, report_name, install_dir, model, url,
port, all_isl) and if any are missing log a clear error via log (or echo) and
exit non‑zero; ensure the validation runs before using report_file or invoking
python3/openai_chat_client.py so the script fails fast with a helpful message
rather than building empty paths/URLs.
- Around line 3-5: The usage header in custom_workload.sh is stale: update the
comment block at the top of the script to reflect the actual flags parsed by the
script (e.g., --result_dir, --report_name, --install_dir, --isl, and any other
options the argument parser checks) instead of the old
--report_file/--calc_percentile_csv_script/--gpus_per_node; make the usage line
and any example invocation match the exact option names used in the script’s
parser logic (refer to the parsed variables like report_name, install_dir, isl,
result_dir) and include the trailing "-- <lmbench-cmdline-args>" portion if the
script still forwards remaining args.

In `@src/cloudai/workloads/ai_dynamo/genai_perf.sh`:
- Around line 95-97: The sed -ie invocation runs unguarded and will fail if
profile_path is empty; update the logic around profile_path (the variable set by
find using result_dir) to first check that profile_path is non-empty and that
the file exists (using the [[ -n "$profile_path" ]] and [[ -f "$profile_path" ]]
checks) and only then run the sed -i cleanup and subsequent processing; move the
sed command inside the existence check that references profile_path so sed never
runs on an empty path.
- Around line 52-55: The script's option parsing in genai_perf.sh only handles
--report_name but the caller (kubernetes_system.py) passes --report_file, so the
flag is ignored by the catch-all and the chosen filename is lost; update the
case block that currently matches --report_name to also accept --report_file
(map --report_file to the same variable used for the report filename, e.g.,
report_name or report_file) and shift appropriately, or alternatively change the
caller to send --report_name; modify the option parsing branch (the case
handling for --report_name) to handle both names so the passed filename is
honored.
- Around line 78-88: The script currently concatenates cmdline_args and
extra_args into full_cmd and uses eval (full_cmd / eval $full_cmd), which breaks
quoting and allows injection; change to build an array (e.g., cmd_array) by
appending elements: the base command token(s) from cmd, then each item from
cmdline_args and extra_args as separate entries, and execute with
"${cmd_array[@]}" (avoid eval). Also reconcile the parameter name mismatch by
accepting and honoring --report_file in the argument parsing logic (in addition
to or instead of --report_name) so the caller-provided report path overrides the
default genai_perf_report.csv. Finally, guard the sed usage on profile_path by
first checking the file exists and is non-empty (move the sed call inside the if
that verifies profile_path from find or add an explicit [[ -f "$profile_path" ]]
check) to avoid running sed on an empty path.

In `@src/cloudai/workloads/ai_dynamo/lmbench.sh`:
- Around line 25-63: The script parses several CLI options in lmbench.sh
(--model, --url, --port, --endpoint, --gpus_per_node) into variables (model,
url, port, endpoint, gpus_per_node) that are never used; either remove those
case branches and the corresponding variables to avoid dead/ confusing code, or
wire them into the script (export them or use them where needed) and add a
comment documenting intended future use. Update the option parsing switch (the
--model/--url/--port/--endpoint/--gpus_per_node branches) and any top-level
variable declarations (model, url, port, endpoint, gpus_per_node) accordingly,
keeping --repo/lmbench_dir and other used options intact.
- Line 87: The script uses pushd/popd (e.g., the pushd $lmbench_dir calls)
without checking for failure; update each pushd invocation to verify it
succeeded and abort (or handle) if it fails (for example: run pushd $lmbench_dir
and if its exit status is non‑zero print a clear error mentioning $lmbench_dir
and exit with non‑zero status), ensure popd is only called when pushd succeeded
(or use a trap to safely popd on exit), and apply the same change for the second
occurrence around line 119 so the script never continues executing in the wrong
directory.
- Line 89: The command-line builder in lmbench.sh appends an undefined variable
`key` into cmdline_args, causing empty/stale values; either initialize/assign
`key` before use (e.g., set key="..." in the script logic that prepares
arguments) or guard its expansion when optional by using parameter expansion in
the cmdline assembly (e.g., replace ${key} with a conditional expansion like
${key:+$key} or ${key:-} so no empty token is injected); update the line that
builds cmdline_args (the string that includes --output
${result_dir}/lmbench_bench_output.csv ${key} ${extra_args}) accordingly.
- Line 114: The shell redirection order is wrong so stderr is not captured;
update the eval invocation that uses variables cmd, cmdline_args, qps,
output_file and artifacts_dir so stdout is redirected to
${artifacts_dir}/lmbench_bench_output_${qps}.log first and then stderr is merged
into stdout (i.e., change the redirection order so 2>&1 appears after the >
logfile redirect) to ensure both stdout and stderr go into
lmbench_bench_output_${qps}.log.
- Around line 78-83: The case branches handling unknown arguments ('--*)' and
'*' in lmbench.sh currently swallow arguments with "shift 2" and "shift", hiding
misconfigurations; update these branches in the argument-parsing case statement
to report unrecognized options (use stderr via echo or printf to include the
offending token) and either continue with a warning or exit non-zero (e.g., exit
1) to fail fast; reference the '--*)' and '*' patterns and the shifts so you
replace the silent shifts with a warning/exit that includes the unknown argument
value.
- Around line 75-76: cmdline_args is declared as an array but is being assigned
a plain string (cmdline_args="$@"), which breaks argument passing; change the
assignment where the loop sets cmdline_args to use the array form (populate the
array from the positional parameters) and then update all call sites that
consume it (places referencing cmdline_args) to expand the array properly using
the array expansion form so each original argument is preserved when passed to
commands; refer to the cmdline_args variable and the loop that currently does
the string assignment and the usage sites that expand it to make these changes.

In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py`:
- Around line 123-126: The flush_kv_cache function is passing
args.filler_len_chars into Chat as an input-size-in-tokens (isl) which misaligns
the flag name/help and can produce oversized prompts; update flush_kv_cache to
either (A) treat the CLI flag as tokens (rename/adjust args to filler_len_tokens
and its help) and pass that token count to Chat, or (B) convert the given
character length to an estimated token count before constructing Chat (use your
project’s token-estimator util or a simple chars→tokens heuristic) so Chat(...)
receives a token count; modify references to args.filler_len_chars in
flush_kv_cache and any related flag/help text (or rename to filler_len_tokens)
to keep semantics consistent.
- Around line 209-216: The current JSONL branch writes pretty-printed multi-line
JSON via json.dump(result, indent=2), which breaks JSONL; change the write logic
in the block that opens out_path (variables: out_path, args.out, result) to emit
a single-line JSON object per run by using json.dumps(result) and writing that
plus a trailing newline (or call the existing log_jsonl helper if available)
instead of json.dump(..., indent=2).
- Around line 192-194: The flush log currently prints the constant
NUM_FILLER_PROMPTS rather than the CLI override; update the print to use the
resolved value (prefer args.num_filler_prompts if provided, otherwise fall back
to NUM_FILLER_PROMPTS) right before calling flush_kv_cache(args, client). Locate
the print and the subsequent call to flush_kv_cache in openai_chat_client.py and
replace the hardcoded NUM_FILLER_PROMPTS in the message with the effective
variable used by flush_kv_cache (e.g., num_prompts = getattr(args,
"num_filler_prompts", NUM_FILLER_PROMPTS) and log that value).
- Around line 130-154: The CLI currently parses all flags but doesn't enforce
that --url, --model, --isl, --osl, and --out are supplied when --dump_csv_header
is not set; update parse_args() to validate after parsing: if
args.dump_csv_header is False then check that args.url, args.model, args.isl,
args.osl, and args.out are present (non-None) and call ap.error(...) with a
clear message listing the missing required flags; keep the existing defaults for
--num_filler_prompts and --filler_len_chars and do not require those when
--dump_csv_header is set.
- Around line 62-72: The init currently asserts token count which can be
unreliable and doesn't enforce max_ctx_tokens; replace the assert with explicit
validation: first check if isl > max_ctx_tokens and raise a ValueError if so,
then ensure you produce exactly isl tokens by looping to build raw_doc and
encoding until len(ids) == isl (disable truncation on encode or avoid max_length
so it doesn't silently cut tokens), trimming or appending characters as needed
to reach exactly isl tokens before calling decode; do these changes inside
__init__ (referencing the __init__ method, self.isl, self.max_ctx_tokens,
self.tok, raw_doc, ids, and doc).
- Around line 80-120: The stream method currently passes
stream_options={"include_usage": True} to client.chat.completions.create which
raises a TypeError on OpenAI SDK <1.26.0; modify stream to detect that specific
TypeError from client.chat.completions.create, then retry the creation call
without the stream_options kwarg (preserving model, messages, temperature,
stream, max_tokens) so telemetry parsing logic (prompt_tokens,
prompt_tokens_details, first_tok_t, TtftStats) can still run; ensure you only
retry once and propagate other exceptions unchanged, and keep the existing
JSONDecodeError and general exception handling behavior.

Comment on lines +347 to +356
report_file = "genai_perf_report.csv"
wrapper_cmd = [
"/bin/bash",
pod_wrapper_path,
"--result_dir",
genai_perf_results_path,
"--report_file",
report_file,
"--calc_percentile_csv_script",
pod_calc_csv_path,

⚠️ Potential issue | 🟡 Minor

Wrapper flag mismatch: --report_file isn’t parsed by genai_perf.sh.
The script accepts --report_name, so this flag is ignored. Use --report_name (or support both in the script) to avoid silent defaults.

🔧 Proposed fix
-            "--report_file",
+            "--report_name",

Comment on lines +361 to +366
prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
prefill_nodes = prefill_worker.num_nodes if prefill_worker else 0
decode_nodes = decode_worker.num_nodes if decode_worker else 1

⚠️ Potential issue | 🟠 Major

Potential None comparison in constraint_check.

tensor_parallel_size and pipeline_parallel_size can be None (they have default=None). The comparisons like prefill_tp > decode_tp will fail with TypeError if either is None.

🐛 Proposed fix
-        prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
-        decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
-        prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
-        decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
+        prefill_tp = (prefill_worker.args.tensor_parallel_size or 1) if prefill_worker else 1
+        decode_tp = (decode_worker.args.tensor_parallel_size or 1) if decode_worker else 1
+        prefill_pp = (prefill_worker.args.pipeline_parallel_size or 1) if prefill_worker else 1
+        decode_pp = (decode_worker.args.pipeline_parallel_size or 1) if decode_worker else 1

Comment on lines +83 to +93
def main() -> None:
    parser = argparse.ArgumentParser(description="Summarize LMCACHE bench CSV metrics")
    parser.add_argument("--input", "-i", help="Path to input CSV (e.g., lmcache_bench_output_0.1.csv)")
    parser.add_argument("--output", "-o", help="Path to write summary CSV. Defaults to <input>_summary.csv")
    args = parser.parse_args()

    input_path = args.input
    output_path = args.output or f"{input_path}_summary.csv"

    rows: List[Dict[str, Any]] = []
    with open(input_path, newline="") as f:

⚠️ Potential issue | 🟡 Minor

Make --input required to avoid open(None) failures.
If --input is omitted, input_path is None and the script fails later with a cryptic error.

Suggested fix
-    parser.add_argument("--input", "-i", help="Path to input CSV (e.g., lmcache_bench_output_0.1.csv)")
+    parser.add_argument("--input", "-i", required=True, help="Path to input CSV (e.g., lmcache_bench_output_0.1.csv)")

Comment on lines 3 to 5
# Called as:
# bash ./custom_workload.sh --result_dir <result_dir> --report_file <report_file> --calc_percentile_csv_script <calc_percentile_csv_script> --gpus_per_node <gpus_per_node> -- <lmbench-cmdline-args>


⚠️ Potential issue | 🟡 Minor

Update the usage comment to match the actual flags.
The header mentions --report_file/--calc_percentile_csv_script/--gpus_per_node, but the parser expects --report_name/--install_dir/--isl, etc.

Suggested fix
-# Called as:
-    #   bash ./custom_workload.sh --result_dir <result_dir> --report_file <report_file> --calc_percentile_csv_script <calc_percentile_csv_script> --gpus_per_node <gpus_per_node> -- <lmbench-cmdline-args>
+# Called as:
+#   bash ./custom_workload.sh --result_dir <result_dir> --report_name <report_name> \
+#     --install_dir <install_dir> --model <model> --url <url> --port <port> --isl <isl_list> \
+#     [--endpoint <endpoint>] [--connector <connector>] [--kvbm_metrics_port <port>] -- <custom-workload-args>

Comment on lines 24 to 38
function parse_custom_workload_args()
{
local args="$@"
while [[ $# -gt 0 ]]; do
case "$1" in
--*)
workload_args["${1}"]="$2"
shift 2
;;
*)
shift
;;
esac
done
}

⚠️ Potential issue | 🟡 Minor

Remove unused local variable in parse_custom_workload_args.
Shellcheck flags the unused args variable; dropping it avoids confusion.

Suggested fix
 function parse_custom_workload_args()
 {
-  local args="$@"
   while [[ $# -gt 0 ]]; do
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 26-26: args appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 26-26: Assigning an array to a string! Assign as array, or use * instead of @ to concatenate.

(SC2124)


Comment on lines 80 to 120
    def stream(self, client: OpenAI, max_tokens: int) -> TtftStats:
        stats = TtftStats(0, 0, 0)

        start = time.perf_counter()
        try:
            stream = client.chat.completions.create(
                model=self.model_id,
                messages=self.messages,
                temperature=0.0,
                stream=True,
                stream_options={"include_usage": True},
                max_tokens=max_tokens,
            )

            first_tok_t: float | None = None
            for chunk in stream:
                # Capture prompt_tokens from usage if available
                if chunk.usage and chunk.usage.prompt_tokens:
                    stats.prompt_tokens = chunk.usage.prompt_tokens
                # Capture cached_tokens from prompt_tokens_details if available
                usage_details = chunk.usage and chunk.usage.prompt_tokens_details
                if usage_details and usage_details.cached_tokens is not None:
                    stats.cached_tokens = usage_details.cached_tokens
                if first_tok_t is None and chunk.choices and chunk.choices[0].delta and chunk.choices[0].delta.content:
                    first_tok_t = time.perf_counter()

            if first_tok_t is None:
                raise RuntimeError("no tokens returned")

            stats.ttft_seconds = round(first_tok_t - start, 3)
            return stats
        except json.JSONDecodeError as e:
            print(f"Error: JSON decode error during streaming: {e}")
            print("This may indicate empty SSE events from the server - likely a server-side bug")
            # Return partial stats with error indication
            stats.ttft_seconds = -1  # Indicate error
            return stats
        except Exception as e:
            print(f"Error during streaming: {type(e).__name__}: {e}")
            stats.ttft_seconds = -1  # Indicate error
            return stats

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI Python SDK stream_options include_usage chat.completions version history

💡 Result:

What stream_options={"include_usage": True} does (Chat Completions streaming)

When you call client.chat.completions.create(..., stream=True, stream_options={"include_usage": True}), the API streams one extra final chunk (right before data: [DONE]) whose:

  • usage contains the total token usage for the whole request
  • choices is always [] (empty)

All earlier chunks include usage: null. [1][2][3]

When it was added (API + Python SDK)

  • OpenAI API support added: May 6, 2024 (Chat Completions + Completions) [4]
  • OpenAI Python SDK version that includes it: openai==1.26.0, released May 6, 2024 [5]

References

  1. Chat Completions API: stream_options.include_usage behavior [1]
  2. Streaming docs note about empty choices when include_usage is enabled [2]
  3. Cookbook example showing the final usage chunk [3]
  4. OpenAI API changelog entry adding stream_options: {"include_usage": true} (May 6, 2024) [4]
  5. PyPI release date for openai==1.26.0 (May 6, 2024) [5]

Sources: [1] (platform.openai.com) [2] (platform.openai.com) [3] (developers.openai.com) [4] (platform.openai.com) [5] (pypi.org)



🌐 Web query:

OpenAI Python SDK prompt_tokens_details cached_tokens support version

💡 Result:

usage.prompt_tokens_details.cached_tokens is an API response field emitted when Prompt Caching is in effect (and 0 otherwise). OpenAI documents it under Prompt Caching monitoring, where cached_tokens lives at usage.prompt_tokens_details.cached_tokens. [1][2]

In the OpenAI Python SDK (openai package), the typed prompt_tokens_details.cached_tokens field is present at least as far back as openai==1.61.0 (example payload in an issue report that includes prompt_tokens_details: { cached_tokens: 0, ... } and lists library version openai 1.61.0). [3]

If you’re on a recent SDK (even 1.99.6) and still see usage.prompt_tokens_details == None, it’s commonly because the upstream endpoint/provider didn’t return prompt_tokens_details (e.g., “OpenAI-compatible” gateways / non-OpenAI backends), not because the SDK can’t represent it. [4]

Sources: Prompt caching guide / response schema [1][2], openai-python issue showing openai 1.61.0 with cached_tokens in prompt_tokens_details [3], issue showing prompt_tokens_details=None despite a recent SDK when using a non-OpenAI model/provider [4].


🏁 Script executed:

# Find and examine the specific file and lines mentioned
find . -name "openai_chat_client.py" -type f

Repository: NVIDIA/cloudai

Length of output: 113


🏁 Script executed:

# Examine the file to see the actual code
cat -n ./src/cloudai/workloads/ai_dynamo/openai_chat_client.py | sed -n '70,130p'

Repository: NVIDIA/cloudai

Length of output: 3276


🏁 Script executed:

# Check for any requirements or version constraints on openai package
find . -name "requirements*.txt" -o -name "setup.py" -o -name "pyproject.toml" | xargs grep -l "openai" 2>/dev/null

Repository: NVIDIA/cloudai

Length of output: 42


🏁 Script executed:

# Check for version constraints more directly
cat pyproject.toml | grep -A 5 -B 5 openai

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Also check if there's a setup.py or requirements.txt
ls -la | grep -E "requirements|setup|pyproject"

Repository: NVIDIA/cloudai

Length of output: 123


🏁 Script executed:

# Read the pyproject.toml file
cat pyproject.toml

Repository: NVIDIA/cloudai

Length of output: 4148


🏁 Script executed:

# Also check if there's any version pinning or constraints mentioned
rg "openai" . --type toml --type txt

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Check if openai is imported in the file and how it's used
head -30 ./src/cloudai/workloads/ai_dynamo/openai_chat_client.py

Repository: NVIDIA/cloudai

Length of output: 1003


🏁 Script executed:

# Search for where openai might be specified as an optional dependency or elsewhere
rg "openai" . --type py | head -20

Repository: NVIDIA/cloudai

Length of output: 410


🏁 Script executed:

# Check if there are any tests for this file or SDK version handling
find . -name "*test*openai*" -o -name "*openai*test*" | head -10

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Check if there's any documentation about SDK version requirements
rg "1\.26|1\.61|openai.*version|sdk.*version" . --type md --type txt

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Look at how this code is actually called to understand the real-world impact
rg "\.stream\(" ./src/cloudai/workloads/ai_dynamo/ -A 3 -B 3

Repository: NVIDIA/cloudai

Length of output: 2659


Add compatibility fallback for OpenAI SDK versions < 1.26.0.

The stream_options={"include_usage": True} parameter is unavailable in OpenAI Python SDK versions before 1.26.0 (released May 6, 2024). With an older SDK, the call at lines 85–92 raises a TypeError, which is caught by the broad except Exception handler at line 117, so the method returns -1 for all metrics instead of real telemetry.

While the current boolean-guarded checks at lines 100–102 already safely handle missing prompt_tokens_details attributes, add a targeted fallback to retry without stream_options when that parameter is not supported. This preserves telemetry collection on older SDK versions instead of losing all metrics.

🛠️ Compatibility fallback
-            stream = client.chat.completions.create(
-                model=self.model_id,
-                messages=self.messages,
-                temperature=0.0,
-                stream=True,
-                stream_options={"include_usage": True},
-                max_tokens=max_tokens,
-            )
+            try:
+                stream = client.chat.completions.create(
+                    model=self.model_id,
+                    messages=self.messages,
+                    temperature=0.0,
+                    stream=True,
+                    stream_options={"include_usage": True},
+                    max_tokens=max_tokens,
+                )
+            except TypeError:
+                stream = client.chat.completions.create(
+                    model=self.model_id,
+                    messages=self.messages,
+                    temperature=0.0,
+                    stream=True,
+                    max_tokens=max_tokens,
+                )

Comment on lines 130 to 154
def parse_args() -> argparse.Namespace:
    ap = argparse.ArgumentParser(
        prog=Path(sys.argv[0]).name,
        description="Single-shot chat completion client for TTFT benchmark.",
    )
    ap.add_argument("--dump_csv_header", action="store_true", help="Only dump CSV header and exit.")
    ap.add_argument("--url", help="URL of the API endpoint.")
    ap.add_argument("--model", help="Model name/ID.")
    ap.add_argument("--max_ctx_tokens", type=int, default=131_072, help="Max context tokens.")
    ap.add_argument("--isl", type=int, help="Input tokens.")
    ap.add_argument("--osl", type=int, help="Output tokens.")
    ap.add_argument("--out", help="JSONL file for results.")
    ap.add_argument(
        "--num_filler_prompts",
        type=int,
        default=NUM_FILLER_PROMPTS,
        help="Number of filler prompts to send for cache flush.",
    )
    ap.add_argument(
        "--filler_len_chars",
        type=int,
        default=FILLER_LEN_CHARS,
        help="Length of filler prompt in characters.",
    )
    return ap.parse_args()

⚠️ Potential issue | 🟠 Major

Validate required CLI args based on --dump_csv_header.

--url/--model/--isl/--osl/--out are used unconditionally later, but they aren’t required at parse time. This yields opaque runtime errors. Add validation in parse_args() so users get a clear CLI error.

✅ Proposed validation
-    return ap.parse_args()
+    args = ap.parse_args()
+    if args.dump_csv_header:
+        if not args.out:
+            ap.error("--out is required with --dump_csv_header")
+        return args
+
+    missing = [name for name in ("url", "model", "isl", "osl", "out") if getattr(args, name) is None]
+    if missing:
+        ap.error(f"Missing required args: {', '.join('--' + m for m in missing)}")
+    return args

Comment on lines +209 to +216
    out_path = Path(args.out)
    with out_path.open("a", encoding="utf-8") as f:
        if out_path.suffix == ".csv":
            line = ",".join(str(v) for v in result.values())
            f.write(line + "\n")
        else:
            json.dump(result, f, indent=2)
            f.write("\n")

⚠️ Potential issue | 🟠 Major

JSONL output isn’t actually JSONL.

json.dump(..., indent=2) writes multi‑line JSON, and appending multiple runs yields invalid JSONL. Write one JSON object per line (or use log_jsonl).

🛠️ JSONL-safe write
-        else:
-            json.dump(result, f, indent=2)
-            f.write("\n")
+        else:
+            f.write(json.dumps(result, separators=(",", ":")) + "\n")

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 16

🤖 Fix all issues with AI agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 704-708: The mark_done function currently sleeps for a hardcoded
10000 seconds before touching $DONE_MARKER to keep the job alive; either add an
inline comment above function mark_done explaining that the long sleep is
intentional to keep the job running until workloads finish, or make the duration
configurable by accepting a parameter or reading a variable from dynamo_args
(e.g., DYNAMO_DONE_SLEEP) and defaulting to 10000, and then use that variable in
mark_done so the sleep duration can be adjusted at runtime; reference the
mark_done function and the DONE_MARKER and dynamo_args symbols when updating.
- Around line 65-73: Extract the duplicated functions by creating a shared
utilities file (e.g., commons or utils) that declares the log and min functions
(referencing the existing function names log() and min()) and a guard to avoid
double-sourcing, then update ai_dynamo.sh (and the other scripts genai_perf.sh,
lmbench.sh, custom_workload.sh) to source that common file instead of redefining
log() and min(); ensure the sourced path is resolved robustly (relative to the
script or via an env var) and remove the duplicate function definitions from
each script.
- Around line 816-818: The log redirection in launch_lmcache_controller does not
quote the path, which will break if RESULTS_DIR contains spaces; update the
redirection to quote the logfile path (use
"${RESULTS_DIR}/lmcache_controller.log" for the stdout/stderr redirection) while
leaving the invocation of lmcache_config["controller_cmd"] unchanged so the logs
are reliably written even with spaces in RESULTS_DIR.
- Around line 205-213: In _populate_nodelist, quote variable expansions to
prevent word-splitting: use "$(echo "$DYNAMO_NODELIST" | cut -d','
-f1-"${num_nodes}")" instead of unquoted DYNAMO_NODELIST, quote exclude_nodelist
when grepping (e.g. printf '%s\n' "$exclude_nodelist" | grep -q -- "$node" or
echo "$exclude_nodelist" | grep -q -- "$node"), and quote "$node" in the grep
and when appending to nodelist (nodelist+="${node},"); also quote "${num_nodes}"
and other parameter expansions where used.
- Around line 574-580: The stderr redirection order is incorrect: move the
`2>&1` after the file redirect so stderr is captured to the logfile;
specifically change the invocation that runs the node setup command (the bash -c
call using dynamo_args["node-setup-cmd"] and logfile) to use `>> "$logfile"
2>&1` and do the same for the `log "Node environment:\n$(env)"` redirect so both
stdout and stderr go into logfile (see the sketch after this list).
- Around line 546-556: dynamo_args["workloads"] has no default, so the predicate
helpers (_is_genai_perf_workload, _is_lmbench_workload,
_is_custom_workload_workload) silently return false when --dynamo-workloads
isn't passed; add a default assignment for dynamo_args["workloads"] in the
dynamo_args initialization block (e.g., set to an empty string or a sensible
default like "" or "none") so those functions safely evaluate string containment
and workloads can be launched when the flag is omitted.

In `@src/cloudai/workloads/ai_dynamo/custom_workload.sh`:
- Around line 89-94: The case branch labeled '--*)' is unreachable because the
earlier '--' case uses break to exit the loop; remove the redundant '--*)'
handler from the case statement (or merge any needed logic into the existing
'--' case) so only the intended '--' case remains and no unreachable code
exists; inspect the case statement around the '--' and '--*)' labels and delete
the '--*)' block or its contents accordingly.
- Around line 186-188: The grep call that sets kv_cache_token_size is too broad
and may match unintended lines; change the pattern to match the exact output
from openai_chat_client.py (for example "KV cache token size:" or anchor with ^)
when reading kv_cache_token_size_file so kv_cache_token_size only captures the
numeric value, then keep the existing cut/tr -d ' ' pipeline to extract the
number and recalc num_filler_prompts; update the line that computes
kv_cache_token_size (referencing kv_cache_token_size_file and
kv_cache_token_size) so it uses grep -F or a more specific regex like '^KV cache
token size:' before computing num_filler_prompts and calling log.
- Around line 176-203: Several variable expansions in the custom_workload.sh
snippet (calls to openai_chat_client.py and uses of variables install_dir,
model, url, port, kv_cache_token_size_file, num_filler_tokens, report_file,
all_isl, isl) are unquoted and risk word-splitting; update every expansion to
use quotes (e.g., "$install_dir", "$model", "$url:$port/v1",
"$kv_cache_token_size_file", "$num_filler_tokens", "$report_file", "$all_isl")
and quote the loop input (for isl in $(echo "$all_isl" | tr ',' '\n') or
preferably use IFS=',' read -ra) so values containing spaces or special chars
are preserved when invoking openai_chat_client.py and when assigning
kv_cache_token_size and num_filler_prompts.
- Around line 138-158: The curl invocations use unquoted variable expansions
(api_endpoint and kvbm_metrics_endpoint) which risks word splitting; update the
three curl calls that set status and response to use quoted expansions (e.g.,
curl -s "${api_endpoint}/metrics", curl -s -X POST
"${api_endpoint}/clear_kv_blocks", and curl -s "${kvbm_metrics_endpoint}") so
the full URL is passed as a single argument; keep the existing grep and echo
fallbacks unchanged and ensure any future curl uses follow the same quoting
pattern.

In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py`:
- Line 224: Remove the unnecessary semicolon at the end of the statement that
assigns kv_cache_token_size (the call to compute_kv_cache_token_size(args,
client)); edit the line where kv_cache_token_size =
compute_kv_cache_token_size(args, client); to drop the trailing semicolon so it
becomes a normal Python statement, preserving the variable name and function
call.
- Around line 125-128: The flush_kv_cache function creates a new Chat on each
loop iteration which re-loads the tokenizer via AutoTokenizer.from_pretrained;
instead instantiate a single Chat once before the loop and reuse it for all
filler prompts (call Chat.stream multiple times), e.g. create chat =
Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens) outside the for
loop and then call chat.stream(client, 1) inside the loop so the expensive
tokenizer load happens only once.
- Around line 183-187: Fix the typo and trailing whitespace in the canary log:
update the log call that currently reads "Sending an initialcanary query with
1000 tokens..." to "Sending an initial canary query with 1000 tokens..." and
remove any trailing whitespace on the subsequent line; locate the calls around
the Chat instantiation (Chat(1000, args.model, args.max_ctx_tokens)), the
canary_chat variable, the canary_stats assignment from
canary_chat.stream(client, 1), and the final log that prints
canary_stats.ttft_seconds and canary_stats.cached_tokens to make these exact
textual corrections.
- Around line 179-180: The scaled filler prompt bounds calculation for
maxFillerPrompts and minFillerPrompts can produce an invalid binary-search
range; validate and normalize after computing these values by (1) ensuring
args.min_filler_prompts and args.max_filler_prompts are ordered (swap or raise
if min>max), (2) compute maxFillerPrompts = max(1, args.max_filler_prompts //
10) and minFillerPrompts = max(1, min(10, args.min_filler_prompts // 10)), and
(3) if minFillerPrompts >= maxFillerPrompts, adjust minFillerPrompts =
maxFillerPrompts - 1 (or set maxFillerPrompts = minFillerPrompts + 1) to
guarantee a valid range; update the logic around the assignments to
maxFillerPrompts and minFillerPrompts in openai_chat_client.py accordingly.
- Around line 162-164: Rename the function SendFillerQueries to snake_case
send_filler_queries and modify it to instantiate a single Chat object outside
the loop so the tokenizer is loaded once; inside send_filler_queries create chat
= Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens) and call
chat.stream(client, 1) in the loop instead of re-creating Chat each iteration,
and update any call sites that reference SendFillerQueries to use
send_filler_queries.

In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py`:
- Around line 49-58: The loop over toml_args produces invalid shell args when a
dict value doesn't match the GitRepo pattern ("url","commit","mount_as") or the
File pattern ("src" as PosixPath); update the handling in the block that
iterates toml_args.items() so that instead of falling back to stringifying the
dict you validate and fail fast: if a dict value is not one of the recognized
shapes (check keys "url","commit","mount_as" OR key "src" with
isinstance(v["src"], PosixPath)), raise a clear ValueError that includes the
dict key name (the toml_args key) and a short message about unsupported nested
dicts; do not append the dict to args or attempt to stringify it. This change
should be applied to the same loop that references toml_args, args and prefix.
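For the redirection and quoting items above (ai_dynamo.sh around lines 574-580 and 205-213), the intended shape is roughly the following; the variable names follow the script, and the node-setup command is simply whatever dynamo_args carries:

# stderr follows stdout into the log: the file redirect comes first, then 2>&1
bash -c "${dynamo_args["node-setup-cmd"]}" >> "$logfile" 2>&1
log "Node environment:\n$(env)" >> "$logfile" 2>&1

# quoted expansions in _populate_nodelist avoid word-splitting
nodes="$(echo "$DYNAMO_NODELIST" | cut -d',' -f1-"${num_nodes}")"
for node in ${nodes//,/ }; do
  if ! echo "$exclude_nodelist" | grep -q -- "$node"; then
    nodelist+="${node},"
  fi
done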

Comment on lines 65 to 73
function log()
{
echo "[$(date --iso-8601=ns) $(hostname)]: $@"
echo -e "[$(date +%F\ %T) $(hostname)]: $*"
}

function min()
{
echo "$(( $1 < $2 ? $1 : $2 ))"
}

🧹 Nitpick | 🔵 Trivial

Consider extracting common log function to reduce duplication.

The log function is duplicated across ai_dynamo.sh, genai_perf.sh, lmbench.sh, and custom_workload.sh. Consider sourcing a common utilities file to maintain consistency.
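One possible shape for such a shared file; the file name common.sh and the guard variable are assumptions, not something in this PR:

# common.sh -- sourced by ai_dynamo.sh, genai_perf.sh, lmbench.sh, custom_workload.sh
if [[ -n "${_AI_DYNAMO_COMMON_SOURCED:-}" ]]; then
  return 0   # already sourced in this shell
fi
_AI_DYNAMO_COMMON_SOURCED=1

log() {
  echo -e "[$(date +%F\ %T) $(hostname)]: $*"
}

min() {
  echo "$(( $1 < $2 ? $1 : $2 ))"
}

Each script would then replace its local definitions with something like source "$(dirname "${BASH_SOURCE[0]}")/common.sh".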

🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 70-73: This function is never invoked. Check usage (or ignored if invoked indirectly).

(SC2329)


Comment on lines 546 to 556
_is_genai_perf_workload() {
[[ "${dynamo_args["workloads"]}" == *"genai_perf.sh"* ]]
}

_is_lmbench_workload() {
[[ "${dynamo_args["workloads"]}" == *"lmbench.sh"* ]]
}

_is_custom_workload_workload() {
[[ "${dynamo_args["workloads"]}" == *"custom_workload.sh"* ]]
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

rg -n "dynamo_args\[.workloads.\]" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh

Repository: NVIDIA/cloudai

Length of output: 248


🏁 Script executed:

#!/bin/bash
# Search for dynamo_args initialization and workloads argument handling
rg -n "dynamo_args\s*\[" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh | head -20

Repository: NVIDIA/cloudai

Length of output: 1226


🏁 Script executed:

#!/bin/bash
# Check the full dynamo_args initialization and search for workloads handling
sed -n '43,100p' src/cloudai/workloads/ai_dynamo/ai_dynamo.sh

Repository: NVIDIA/cloudai

Length of output: 1815


🏁 Script executed:

#!/bin/bash
# Search for argument parsing related to workloads
rg -n "workloads" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh -B2 -A2

Repository: NVIDIA/cloudai

Length of output: 543


🏁 Script executed:

#!/bin/bash
# Search for argument parsing logic in the script
rg -n "getopt|while.*\$@|\-\-[a-z]" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh | head -30

Repository: NVIDIA/cloudai

Length of output: 1794


🏁 Script executed:

#!/bin/bash
# Get the full argument parsing loop and check for default handling
sed -n '140,200p' src/cloudai/workloads/ai_dynamo/ai_dynamo.sh

Repository: NVIDIA/cloudai

Length of output: 1850


🏁 Script executed:

#!/bin/bash
# Get the launch_workloads function to see if it handles empty workloads
sed -n '1005,1100p' src/cloudai/workloads/ai_dynamo/ai_dynamo.sh

Repository: NVIDIA/cloudai

Length of output: 1449


🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or comment about workloads being optional
rg -n "workloads|genai_perf|lmbench|custom_workload" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh | grep -i "optional\|required\|usage\|help" | head -10

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

#!/bin/bash
# Check for README or usage documentation in ai_dynamo directory
find src/cloudai/workloads/ai_dynamo -type f \( -name "README*" -o -name "*.md" \) | xargs cat 2>/dev/null || echo "No README found"

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

#!/bin/bash
# Check if there's any validation or assertion for required arguments
rg -n "validate|required|assert" src/cloudai/workloads/ai_dynamo/ai_dynamo.sh | head -20

Repository: NVIDIA/cloudai

Length of output: 114


🏁 Script executed:

#!/bin/bash
# Check the validate_environment function
sed -n '630,700p' src/cloudai/workloads/ai_dynamo/ai_dynamo.sh

Repository: NVIDIA/cloudai

Length of output: 1990


Missing default for dynamo_args["workloads"].

The dynamo_args["workloads"] key is only set via CLI argument --dynamo-workloads and has no initialization default. If not provided, the workload predicate functions at lines 546-556 will all fail silently, preventing any workloads from launching on the frontend node. Add a default value in the dynamo_args initialization block.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 546 - 556,
dynamo_args["workloads"] has no default, so the predicate helpers
(_is_genai_perf_workload, _is_lmbench_workload, _is_custom_workload_workload)
silently return false when --dynamo-workloads isn't passed; add a default
assignment for dynamo_args["workloads"] in the dynamo_args initialization block
(e.g., set to an empty string or a sensible default like "" or "none") so those
functions safely evaluate string containment and workloads can be launched when
the flag is omitted.

Comment on lines 574 to 580
log "Launching node setup command: ${dynamo_args["node-setup-cmd"]}"
bash -c "${dynamo_args["node-setup-cmd"]}"
bash -c "${dynamo_args["node-setup-cmd"]}" 2>&1 >> "$logfile"
log "Node setup complete"
fi

log "Node environment:\n$(env)" 2>&1 >> "$logfile"
}

⚠️ Potential issue | 🟡 Minor

Fix redirect order for proper stderr capture.

The 2>&1 must come after the file redirect to capture both stdout and stderr to the log file. Current order redirects stderr to original stdout (terminal), not the file.

🐛 Proposed fix
-    bash -c "${dynamo_args["node-setup-cmd"]}" 2>&1 >> "$logfile"
+    bash -c "${dynamo_args["node-setup-cmd"]}" >> "$logfile" 2>&1
     log "Node setup complete"
   fi
 
-  log "Node environment:\n$(env)" 2>&1 >> "$logfile"
+  log "Node environment:\n$(env)" >> "$logfile" 2>&1
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 575-575: To redirect stdout+stderr, 2>&1 must be last (or use '{ cmd > file; } 2>&1' to clarify).

(SC2069)


[warning] 579-579: To redirect stdout+stderr, 2>&1 must be last (or use '{ cmd > file; } 2>&1' to clarify).

(SC2069)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 574 - 580, The
stderr redirection order is incorrect: move the `2>&1` after the file redirect
so stderr is captured to the logfile; specifically change the invocation that
runs the node setup command (the bash -c call using
dynamo_args["node-setup-cmd"] and logfile) to use `>> "$logfile" 2>&1` and do
the same for the `log "Node environment:\n$(env)"` redirect so both stdout and
stderr go into logfile.

Comment on lines 704 to 708
function mark_done()
{
sleep 10000
touch "$DONE_MARKER"
}

🧹 Nitpick | 🔵 Trivial

Clarify purpose of long sleep in mark_done.

The sleep 10000 (~2.7 hours) before touching $DONE_MARKER appears intentional to keep the job alive while workloads run. Consider adding a comment explaining this design or making the duration configurable via dynamo_args.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 704 - 708, The
mark_done function currently sleeps for a hardcoded 10000 seconds before
touching $DONE_MARKER to keep the job alive; either add an inline comment above
function mark_done explaining that the long sleep is intentional to keep the job
running until workloads finish, or make the duration configurable by accepting a
parameter or reading a variable from dynamo_args (e.g., DYNAMO_DONE_SLEEP) and
defaulting to 10000, and then use that variable in mark_done so the sleep
duration can be adjusted at runtime; reference the mark_done function and the
DONE_MARKER and dynamo_args symbols when updating.

Comment on lines 162 to 164
def SendFillerQueries(args: argparse.Namespace, client: OpenAI, num: int):
for _ in range(num):
_ = Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens).stream(client, 1)

🛠️ Refactor suggestion | 🟠 Major

Rename function to snake_case and optimize tokenizer loading.

SendFillerQueries should be send_filler_queries per PEP 8. Also, reuse the Chat instance to avoid repeated tokenizer loading.

Suggested fix
-def SendFillerQueries(args: argparse.Namespace, client: OpenAI, num: int):
+def send_filler_queries(args: argparse.Namespace, client: OpenAI, num: int) -> None:
+    filler_chat = Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens)
     for _ in range(num):
-        _ = Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens).stream(client, 1)
+        _ = filler_chat.stream(client, 1)

Update call site at line 191:

-        SendFillerQueries(args, client, numFillerPrompts)
+        send_filler_queries(args, client, numFillerPrompts)
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 162 -
164, Rename the function SendFillerQueries to snake_case send_filler_queries and
modify it to instantiate a single Chat object outside the loop so the tokenizer
is loaded once; inside send_filler_queries create chat =
Chat(args.num_filler_tokens, args.model, args.max_ctx_tokens) and call
chat.stream(client, 1) in the loop instead of re-creating Chat each iteration,
and update any call sites that reference SendFillerQueries to use
send_filler_queries.

Comment on lines 179 to 180
maxFillerPrompts = args.max_filler_prompts // 10
minFillerPrompts = min(10, args.min_filler_prompts // 10)

⚠️ Potential issue | 🟡 Minor

Add bounds validation to prevent invalid binary search range.

If args.min_filler_prompts > args.max_filler_prompts, or if scaled values result in minFillerPrompts >= maxFillerPrompts, the function may not behave as expected.

Suggested fix
     maxFillerPrompts = args.max_filler_prompts // 10
     minFillerPrompts = min(10, args.min_filler_prompts // 10)
+    if minFillerPrompts >= maxFillerPrompts:
+        log(f"Warning: Invalid filler prompt range (min={minFillerPrompts*10}, max={maxFillerPrompts*10}). Using defaults.")
+        minFillerPrompts = 1
+        maxFillerPrompts = 100
     log(f"Doing binary search for the number of filler prompts required to flush the KV cache between {minFillerPrompts} and {maxFillerPrompts}...")
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 179 -
180, The scaled filler prompt bounds calculation for maxFillerPrompts and
minFillerPrompts can produce an invalid binary-search range; validate and
normalize after computing these values by (1) ensuring args.min_filler_prompts
and args.max_filler_prompts are ordered (swap or raise if min>max), (2) compute
maxFillerPrompts = max(1, args.max_filler_prompts // 10) and minFillerPrompts =
max(1, min(10, args.min_filler_prompts // 10)), and (3) if minFillerPrompts >=
maxFillerPrompts, adjust minFillerPrompts = maxFillerPrompts - 1 (or set
maxFillerPrompts = minFillerPrompts + 1) to guarantee a valid range; update the
logic around the assignments to maxFillerPrompts and minFillerPrompts in
openai_chat_client.py accordingly.
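
For reference, a minimal Python sketch of the three-step normalization this prompt describes; the normalize_filler_bounds helper is illustrative and not part of the PR, and only the scale-by-10 convention is taken from the existing code:

def normalize_filler_bounds(min_filler_prompts: int, max_filler_prompts: int) -> tuple[int, int]:
    # (1) ensure the user-supplied bounds are ordered
    if min_filler_prompts > max_filler_prompts:
        min_filler_prompts, max_filler_prompts = max_filler_prompts, min_filler_prompts
    # (2) scale down by 10 while keeping both bounds at least 1
    max_filler = max(1, max_filler_prompts // 10)
    min_filler = max(1, min(10, min_filler_prompts // 10))
    # (3) guarantee a non-empty binary-search range
    if min_filler >= max_filler:
        max_filler = min_filler + 1
    return min_filler, max_filler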

Comment on lines 183 to 187
log(f"Sending an initialcanary query with 1000 tokens...")
canary_chat = Chat(1000, args.model, args.max_ctx_tokens)
canary_stats = canary_chat.stream(client, 1)
log(f"Initial Canary query: {canary_stats.ttft_seconds:.3f}s with {canary_stats.cached_tokens} cached tokens")

⚠️ Potential issue | 🟡 Minor

Fix typo and trailing whitespace.

Line 183 has a typo "initialcanary" (missing space), and line 187 has trailing whitespace.

Suggested fix
-    log(f"Sending an initialcanary query with 1000 tokens...")
+    log(f"Sending an initial canary query with 1000 tokens...")
     canary_chat = Chat(1000, args.model, args.max_ctx_tokens)
     canary_stats = canary_chat.stream(client, 1)
     log(f"Initial Canary query: {canary_stats.ttft_seconds:.3f}s with {canary_stats.cached_tokens} cached tokens")
- 
+
     while minFillerPrompts < maxFillerPrompts:
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 183 -
187, Fix the typo and trailing whitespace in the canary log: update the log call
that currently reads "Sending an initialcanary query with 1000 tokens..." to
"Sending an initial canary query with 1000 tokens..." and remove any trailing
whitespace on the subsequent line; locate the calls around the Chat
instantiation (Chat(1000, args.model, args.max_ctx_tokens)), the canary_chat
variable, the canary_stats assignment from canary_chat.stream(client, 1), and
the final log that prints canary_stats.ttft_seconds and
canary_stats.cached_tokens to make these exact textual corrections.


if args.compute_kv_cache_token_size:
log(f"Computing KV cache token size...")
kv_cache_token_size = compute_kv_cache_token_size(args, client);

🧹 Nitpick | 🔵 Trivial

Remove unnecessary semicolon.

Python doesn't require semicolons at end of statements.

Suggested fix
-        kv_cache_token_size = compute_kv_cache_token_size(args, client);
+        kv_cache_token_size = compute_kv_cache_token_size(args, client)
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` at line 224, Remove
the unnecessary semicolon at the end of the statement that assigns
kv_cache_token_size (the call to compute_kv_cache_token_size(args, client));
edit the line where kv_cache_token_size = compute_kv_cache_token_size(args,
client); to drop the trailing semicolon so it becomes a normal Python statement,
preserving the variable name and function call.

Comment on lines +49 to +58
for k, v in toml_args.items():
if isinstance(v, dict):
if "url" in v and "commit" in v and "mount_as" in v:
args.extend([f'{prefix}{k} "{v["mount_as"]}"'])
elif "src" in v and isinstance(v["src"], PosixPath):
args.extend([f'{prefix}{k} "{v["src"].name}"'])
else:
args.append(f'{prefix}{k} "{v}"')
else:
args.append(f'{prefix}{k} "{v}"')

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the actual file and the full method context
cat -n src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py | sed -n '40,65p'

Repository: NVIDIA/cloudai

Length of output: 1517


🏁 Script executed:

# Search for the class definition and imports to understand what types are involved
cat -n src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py | sed -n '1,50p'

Repository: NVIDIA/cloudai

Length of output: 2470


🏁 Script executed:

# Look for BaseModel subclasses in the ai_dynamo module that might have dict fields
rg -n "class.*\(.*BaseModel" src/cloudai/workloads/ai_dynamo/ -A 20

Repository: NVIDIA/cloudai

Length of output: 10798


🏁 Script executed:

# Search for usages of _get_toml_args to see what arguments are passed
rg -n "_get_toml_args|toml_args" src/cloudai/workloads/ai_dynamo/ -B 3 -A 3

Repository: NVIDIA/cloudai

Length of output: 4605


🏁 Script executed:

# Find File and GitRepo definitions
rg -n "class File|class GitRepo" src/cloudai/workloads/ai_dynamo/ -A 15

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Also check the imports in ai_dynamo.py to understand where these come from
head -40 src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 1173


🏁 Script executed:

# Find File and GitRepo in cloudai.core
find src -name "*.py" -type f | xargs rg -l "class File|class GitRepo" | head -5

Repository: NVIDIA/cloudai

Length of output: 91


🏁 Script executed:

# Let's look at cloudai.core module structure
find src/cloudai/core -name "*.py" | head -20

Repository: NVIDIA/cloudai

Length of output: 109


🏁 Script executed:

# Search for File class definition
rg -n "class File" src/cloudai/core/ -A 20

Repository: NVIDIA/cloudai

Length of output: 115


🏁 Script executed:

# Search for GitRepo class definition
rg -n "class GitRepo" src/cloudai/core/ -A 20

Repository: NVIDIA/cloudai

Length of output: 115


🏁 Script executed:

# Let's find where File and GitRepo are actually defined
rg -n "class File" src/cloudai/ -A 20 | head -60

Repository: NVIDIA/cloudai

Length of output: 1380


🏁 Script executed:

# Also search for GitRepo
rg -n "class GitRepo" src/cloudai/ -A 20 | head -60

Repository: NVIDIA/cloudai

Length of output: 1489


🏁 Script executed:

# Check the directory structure
ls -la src/cloudai/ | head -20

Repository: NVIDIA/cloudai

Length of output: 1105


🏁 Script executed:

# Check the Installable base class
rg -n "class Installable" src/cloudai/_core/ -A 15

Repository: NVIDIA/cloudai

Length of output: 1001


🏁 Script executed:

# Let's also verify how File is actually used when model_dump is called
# Check if there are any tests or usages that show what the dict looks like
rg -n "model_dump" src/cloudai/workloads/ai_dynamo/ -B 3 -A 5

Repository: NVIDIA/cloudai

Length of output: 2239


🏁 Script executed:

# Let's check the complete File definition and see if it's actually a BaseModel or has custom serialization
cat -n src/cloudai/_core/installables.py | sed -n '140,180p'

Repository: NVIDIA/cloudai

Length of output: 1440


🏁 Script executed:

# Check if File or any Installable has custom serialization
rg -n "model_serializer|field_serializer|ConfigDict" src/cloudai/_core/installables.py -B 2 -A 2

Repository: NVIDIA/cloudai

Length of output: 265


🏁 Script executed:

# Let's see actual test files to understand what structures are passed through
find src -path "*test*" -name "*ai_dynamo*" -type f

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Check if there are any tests for _get_toml_args
find . -path "*test*" -name "*.py" -exec rg -l "_get_toml_args\|slurm_command_gen_strategy" {} \;

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Let's verify what Pydantic model_dump does with Path objects
# Create a simple test to see the output
python3 << 'EOF'
from pathlib import Path, PosixPath
from pydantic import BaseModel
from dataclasses import dataclass

@dataclass
class TestFile:
    src: Path
    
class TestModel(BaseModel):
    file_field: TestFile
    
# Test what model_dump produces
m = TestModel(file_field=TestFile(src=Path("/tmp/test")))
result = m.model_dump()
print("Result:", result)
print("Type of src:", type(result['file_field']['src']))
print("Is PosixPath?", isinstance(result['file_field']['src'], PosixPath))
EOF

Repository: NVIDIA/cloudai

Length of output: 173


🏁 Script executed:

# Also check what dict fields actually exist in the models being serialized
rg -n "repo|script" src/cloudai/workloads/ai_dynamo/ai_dynamo.py | grep "Field\|:" | head -20

Repository: NVIDIA/cloudai

Length of output: 1389


🏁 Script executed:

# Let's check what dict fields could exist besides repo and script
# by examining what BaseModel subclasses have dict-typed fields
cat -n src/cloudai/workloads/ai_dynamo/ai_dynamo.py | sed -n '40,100p'

Repository: NVIDIA/cloudai

Length of output: 2522


🏁 Script executed:

# Check the Args class which has extra="allow"
rg -n "class Args" src/cloudai/workloads/ai_dynamo/ai_dynamo.py -A 10

Repository: NVIDIA/cloudai

Length of output: 365


🏁 Script executed:

# Let's trace what actually flows through _get_toml_args by looking at actual calls
cat -n src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py | sed -n '70,115p'

Repository: NVIDIA/cloudai

Length of output: 2339


🏁 Script executed:

# Let's check if Args could receive arbitrary dict values through extra="allow"
rg -n "extra.*allow" src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 708


🏁 Script executed:

# Check what dict structures could actually come through - test with a sample config
# Look at existing test configurations
find . -path "*test*" -name "*.toml" -o -name "*.yaml" | grep -i ai_dynamo | head -5

Repository: NVIDIA/cloudai

Length of output: 220


🏁 Script executed:

# Let's trace through: when Args with extra="allow" is dumped, what could be in those extra fields?
# Check how configs are loaded and what might end up in extra fields
rg -n "Args\|extra" src/cloudai/workloads/ai_dynamo/ai_dynamo.py | head -30

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Let's examine a test configuration to see what structure is passed to _get_toml_args
cat conf/experimental/ai_dynamo/test/vllm.toml

Repository: NVIDIA/cloudai

Length of output: 2046


🏁 Script executed:

# Also check what happens when a BaseModel with extra="allow" is dumped with arbitrary nested dicts
python3 << 'EOF'
from pydantic import BaseModel, ConfigDict
from pathlib import Path

class Args(BaseModel):
    model_config = ConfigDict(extra="allow", populate_by_name=True)

# Simulate loading config with nested dicts
args = Args.model_validate({
    'simple_value': 'test',
    'nested_dict': {'key': 'value', 'another': 'nested'},
})

result = args.model_dump(by_alias=True, exclude_none=True)
print("Dumped result:")
print(result)
for k, v in result.items():
    print(f"  {k}: {v} (type: {type(v).__name__})")
EOF

Repository: NVIDIA/cloudai

Length of output: 250


🏁 Script executed:

# Let's verify if the fallback case at line 56 can actually be hit
# by checking if any tests or real usage passes nested dicts through Args
rg -n "Args\(" src/cloudai/workloads/ai_dynamo/ -A 5 | head -40

Repository: NVIDIA/cloudai

Length of output: 2674


🏁 Script executed:

# Let's also test what the actual output would be if we hit the fallback
python3 << 'EOF'
from pathlib import Path, PosixPath

# Simulate what happens in line 56 when a nested dict is passed
prefix = "--test-"
k = "myfield"
v = {'nested': 'dict', 'another': 'value'}

# This is what line 56 does:
result = f'{prefix}{k} "{v}"'
print("Fallback output:")
print(result)
print("\nThis is invalid as a shell argument because:")
print("- The dict repr contains quotes and special chars")
print("- It's not valid TOML syntax")
EOF

Repository: NVIDIA/cloudai

Length of output: 255


🏁 Script executed:

# Check if there are any validation or error handling for dict values
rg -n "isinstance.*dict\|raise.*dict" src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py

Repository: NVIDIA/cloudai

Length of output: 40


Address the fallback dict handling to prevent invalid shell arguments.

The fallback case at line 56 produces Python dict repr output (e.g., --field "{'key': 'value'}") when encountering dicts that don't match the GitRepo or File patterns. Since Args and related classes use extra="allow", arbitrary nested dicts can be added via configuration and will produce invalid shell syntax. Either raise an error for unexpected dict types, or add explicit validation to prevent nested dicts from reaching this fallback.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py` around lines
49 - 58, The loop over toml_args produces invalid shell args when a dict value
doesn't match the GitRepo pattern ("url","commit","mount_as") or the File
pattern ("src" as PosixPath); update the handling in the block that iterates
toml_args.items() so that instead of falling back to stringifying the dict you
validate and fail fast: if a dict value is not one of the recognized shapes
(check keys "url","commit","mount_as" OR key "src" with isinstance(v["src"],
PosixPath)), raise a clear ValueError that includes the dict key name (the
toml_args key) and a short message about unsupported nested dicts; do not append
the dict to args or attempt to stringify it. This change should be applied to
the same loop that references toml_args, args and prefix.
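
A sketch of the fail-fast variant this prompt asks for, written as a standalone helper so it is self-contained; the real method in slurm_command_gen_strategy.py keeps its own name and quoting, so treat this as illustrative:

from pathlib import PosixPath
from typing import Any

def toml_args_to_cli(toml_args: dict[str, Any], prefix: str) -> list[str]:
    args: list[str] = []
    for k, v in toml_args.items():
        if isinstance(v, dict):
            if {"url", "commit", "mount_as"} <= v.keys():  # GitRepo shape
                args.append(f'{prefix}{k} "{v["mount_as"]}"')
            elif "src" in v and isinstance(v["src"], PosixPath):  # File shape
                args.append(f'{prefix}{k} "{v["src"].name}"')
            else:
                raise ValueError(f"Unsupported nested dict for argument '{k}'; only GitRepo and File shapes are handled")
        else:
            args.append(f'{prefix}{k} "{v}"')
    return args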

@greptile-apps greptile-apps bot left a comment

6 files reviewed, 7 comments

Comment on lines +370 to +373
prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1

Accessing prefill_worker.args.tensor_parallel_size without checking if args exists first. If args is None, this will raise AttributeError.

Suggested change
-prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
-decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
-prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
-decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
+prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker and prefill_worker.args else 1
+decode_tp = decode_worker.args.tensor_parallel_size if decode_worker and decode_worker.args else 1
+prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker and prefill_worker.args else 1
+decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker and decode_worker.args else 1

Comment on lines +329 to +330
prefill_config["gpus-per-worker"]=$(( prefill_args["--tensor-parallel-size"] * prefill_args["--pipeline-parallel-size"] ))
decode_config["gpus-per-worker"]=$(( decode_args["--tensor-parallel-size"] * decode_args["--pipeline-parallel-size"] ))

Arithmetic will fail if --tensor-parallel-size or --pipeline-parallel-size is unset or empty; an empty string in arithmetic expansion produces a syntax error.

Suggested change
prefill_config["gpus-per-worker"]=$(( prefill_args["--tensor-parallel-size"] * prefill_args["--pipeline-parallel-size"] ))
decode_config["gpus-per-worker"]=$(( decode_args["--tensor-parallel-size"] * decode_args["--pipeline-parallel-size"] ))
prefill_config["gpus-per-worker"]=$(( ${prefill_args["--tensor-parallel-size"]:-1} * ${prefill_args["--pipeline-parallel-size"]:-1} ))
decode_config["gpus-per-worker"]=$(( ${decode_args["--tensor-parallel-size"]:-1} * ${decode_args["--pipeline-parallel-size"]:-1} ))

minFillerPrompts = min(10, args.min_filler_prompts // 10)
log(f"Doing binary search for the number of filler prompts required to flush the KV cache between {minFillerPrompts} and {maxFillerPrompts}...")

log(f"Sending an initialcanary query with 1000 tokens...")

Typo: "initialcanary" should be "initial canary"

Suggested change
log(f"Sending an initialcanary query with 1000 tokens...")
log(f"Sending an initial canary query with 1000 tokens...")

--min_filler_prompts 10

kv_cache_token_size=$(grep cache $kv_cache_token_size_file | cut -d':' -f 2 | tr -d ' ')
num_filler_prompts=$((kv_cache_token_size / num_filler_tokens))

Division by zero will occur if kv_cache_token_size is 0 or if the grep fails to extract a value.


pushd $lmbench_dir

cmdline_args="${cmdline_args} --output ${result_dir}/lmbench_bench_output.csv ${key} ${extra_args}"

${key} is referenced but never defined in this script

# launch genai-perf
log "Launching genai-perf with args: $full_cmd"

eval $full_cmd

Using eval with user-controlled input can lead to command injection. Consider using direct command execution instead.

Comment on lines +345 to +349

# Build wrapper command with proper parameters
report_file = "genai_perf_report.csv"
wrapper_cmd = [
"/bin/bash",

Check if tdef.cmd_args.genai_perf.args exists before accessing it
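
A minimal sketch of that guard, reusing the tdef name from the snippet above; everything else here is assumed rather than taken from the PR:

# Hedged sketch: fail clearly when genai_perf args are missing instead of raising AttributeError later.
genai_perf_args = getattr(tdef.cmd_args.genai_perf, "args", None)
if genai_perf_args is None:
    raise ValueError("genai_perf.args must be configured before building the wrapper command")
# ... then build report_file and wrapper_cmd as in the snippet above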

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/cloudai/workloads/ai_dynamo/ai_dynamo.sh (4)

149-193: 🧹 Nitpick | 🔵 Trivial

Inconsistent key prefixing across associative arrays.

lmcache_args keys have no -- prefix (line 166) while lmbench_args, genai_perf_args, and custom_workload_args do (lines 170, 174, 178). Similarly, lmcache_config keys lack -- while lmbench_config, genai_perf_config, custom_workload_config include it.

This is presumably intentional (lmcache values are accessed by key directly, others go through array_to_args), but the inconsistency is a maintenance trap. Consider adding a brief comment near the declarations (lines 30-41) explaining which arrays use keys prefixed with -- and why.


836-856: ⚠️ Potential issue | 🟠 Major

Prefill readiness check is bypassed while prefill workers are still launched.

Lines 838 and 842 hardcode want_prefill=0 and have_prefill=0, bypassing prefill readiness validation. However, launch_prefill is still called at line 1079 when _is_prefill_node. This means wait_for_dynamo_frontend (line 1023) will proceed to launch workloads before prefill workers have initialized, potentially causing failures.

Either re-enable the prefill readiness checks or add a comment explaining why prefill readiness is intentionally skipped (e.g., prefill workers initialize fast enough or workloads don't depend on prefill).


419-435: ⚠️ Potential issue | 🟠 Major

shopt -s nocasematch leaks into global shell state.

nocasematch is enabled on line 424 inside the loop but never reverted. After array_to_args returns, all subsequent [[ ... == ... ]] comparisons in the script become case-insensitive, which could cause subtle match failures (e.g., backend detection, connector checks).

🐛 Proposed fix: save and restore shopt state
 function array_to_args()
 {
   local -n arr=$1
   local result=""
+  local _prev_nocasematch
+  _prev_nocasematch=$(shopt -p nocasematch 2>/dev/null || true)
   for key in "${!arr[@]}"; do
     shopt -s nocasematch
     val=$(replace_placeholders "${arr[$key]}")
     # Quote values that contain spaces
     if [[ "$val" == *" "* ]]; then
       val="${val//\"/\\\"}"  # Escape existing quotes
       result+="${key} \"${val}\" "
     else
       result+="${key} ${val} "
     fi
   done
+  eval "$_prev_nocasematch"
   echo "$result"
 }

The same issue exists in arg_array_to_string (line 367) — though nocasematch isn't set there currently, it's worth noting.


746-823: 🧹 Nitpick | 🔵 Trivial

LGTM with minor notes.

Worker launch logic is correct. The connector injection and port offset calculations handle multi-worker-per-node scenarios properly.

Minor: $log_file on lines 779 and 822 should be quoted to handle paths with spaces (> "$log_file" 2>&1).

🤖 Fix all issues with AI agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py`:
- Around line 337-352: The installables list in the installables property
currently has self.hf_model commented out without explanation; either re-enable
self.hf_model in the returned list (remove the comment) if HF model downloads
should be part of installables, or leave it commented and add a short clarifying
comment next to the commented line explaining why HF model is excluded (e.g.,
"HF model download is performed elsewhere" or "temporarily disabled—tracked by
ISSUE-XXXX"); if you choose to keep it disabled, also create or link an issue to
track re-enabling and reference that issue ID in the comment.
- Around line 46-48: Update the docstring for the Workload class to accurately
describe its purpose (e.g., "Base configuration for a workload" or "Represents
the base configuration and metadata for a custom workload") instead of the
copy-pasted "Arguments for custom workloads"; locate the Workload class
definition inheriting from BaseModel and replace the misleading docstring with a
concise, accurate description that reflects the class role and contents.
- Around line 158-164: The prefill_worker field currently defaults to a
WorkerConfig instance which makes truthiness checks (e.g., in
get_cached_nodes_spec, constraint_check, kubernetes_json_gen_strategy) behave as
if prefill is configured; change prefill_worker to be optional by updating its
annotation to Optional[WorkerConfig] and set Field(default=None) (remove
default_factory), so unconfigured manifests produce None; update any
serialization/validation aliases if present and run tests that exercise
get_cached_nodes_spec, constraint_check, and kubernetes_json_gen_strategy to
confirm behavior.

In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 858-874: The _query_frontend function is dead code and builds JSON
via fragile string interpolation using dynamo_args["model"] and content, which
can break for special characters; either remove the unused _query_frontend
function or (if keeping for debugging) rewrite it to construct the payload
securely with jq (already required earlier) and then POST the safe JSON file at
"$RESULTS_DIR/curl_cmd.json" to "${dynamo_args["url"]}/v1/chat/completions",
ensuring proper escaping of dynamo_args["model"], the messages array, and
max_tokens instead of naive string concatenation.
- Around line 938-940: The log call references lmcache_path before it's
assigned, so move the assignment local lmcache_path="${lmcache_config["repo"]}"
above the log or change the log to reference the repo directly; update the block
around the log and the uv pip install -e $lmcache_path call (both referencing
lmcache_path) so the variable is assigned prior to use (ensure
lmcache_config["repo"] exists) and the install uses the same lmcache_path
variable.
- Around line 988-1019: In launch_workload remove the unused nameref
workload_args_ref (the array_to_args call already takes the name string) to
avoid dead code, and update the invocation of the workload script and log
redirect to safely handle spaces by quoting the path expressions: quote
"${INSTALL_DIR}/$script" where the script is executed and quote the log file
path as "${RESULTS_DIR}/$workload_name.log"; leave array_to_args usage unchanged
and keep other dynamo_args references as-is.
- Around line 196-221: The function _populate_nodelist currently uses cut -d','
-f1-${num_nodes} which only examines the first num_nodes entries and can miss
valid nodes when some of those are excluded; change the logic in
_populate_nodelist to iterate over all entries in DYNAMO_NODELIST (split on
commas) and collect nodes not present in exclude_nodelist until you have
num_nodes, then stop; keep existing behavior of returning empty string for
zero/empty num_nodes, trimming the trailing comma from nodelist before echo, and
continue using the same identifiers (num_nodes, exclude_nodelist,
DYNAMO_NODELIST, _populate_nodelist) to locate and update the code.
- Around line 909-916: The setup_storage_cache_dir function mutates the global
STORAGE_CACHE_DIR by appending to it, making it non-idempotent and causing
nested paths on repeated calls; change it to compute the final path in a local
variable (e.g., local target_dir) using TEST_USER, dynamo_args["frontend-node"],
and dynamo_args["connector"], perform rm -rf, mkdir -p and chmod 755 on that
local variable, then assign the global STORAGE_CACHE_DIR only if it is unset (or
explicitly intended), and ensure all uses of the path in rm/mkdir/chmod are
quoted to avoid word splitting.

In `@src/cloudai/workloads/ai_dynamo/custom_workload.sh`:
- Around line 148-162: In function print_metrics, remove or restore use of the
unused local variables frontend_metrics_endpoint and component_metrics_endpoint
to address ShellCheck SC2034; either delete the lines that set
frontend_metrics_endpoint and component_metrics_endpoint if you intend only to
log KVBM metrics, or uncomment and re-enable the corresponding curl+log blocks
that reference those variables (the commented curl/grep and log lines) so
frontend_metrics_endpoint and component_metrics_endpoint are actually used.
- Around line 247-260: The bytes_per_token computed by the awk expression can be
zero which will cause a division-by-zero when computing g2_token_size; after
calculating bytes_per_token (from g1_memory_total_gib and g1_token_size) add a
guard that checks if bytes_per_token is zero or unset and if so log an error via
log and skip or set g2_token_size to 0 instead of performing the division;
ensure the branch that uses DYN_KVBM_CPU_CACHE_GB only performs
g2_token_size=$(( (g2_cache_gb * 1024 * 1024 * 1024) / bytes_per_token )) when
bytes_per_token is non-zero (use the same log function and variables
g1_memory_total_gib, g1_token_size, bytes_per_token, g2_cache_gb to locate the
code).

In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py`:
- Around line 68-73: Replace the unexplained magic number 37 used to compute
num_tokens by dynamically measuring the tokenizer overhead of the chat prompt
template instead of hardcoding it: build the chat/template string used around
the user doc (or a placeholder like "<DOC>"), call self.tok.encode on that
template (add_special_tokens=False, truncation=True) to get overhead =
len(template_ids) - len(placeholder_ids) or simply len(template_ids) and then
compute num_tokens = self.isl - overhead; update code around raw_doc,
num_tokens, and tok.encode to use this computed overhead and add a short comment
explaining the measurement so the behavior adapts to different models/tokenizers
(replace all uses of the literal 37).
- Line 46: The log print in openai_chat_client currently contains an extra space
inside the f-string before the closing brace in time.strftime (the expression
"time.strftime('%Y-%m-%d %H:%M:%S') }"), which adds a trailing space to every
timestamp; remove that space so the f-string ends "...{time.strftime('%Y-%m-%d
%H:%M:%S')}] ..." and keep the rest of the print (including
os.getenv('HOSTNAME') and the message variable) unchanged.
- Around line 244-249: Rename the duplicate "Run 1" warmup label to avoid
confusion: change the log call that currently prints "=== Run 1: warmup ==="
(the warmup before calling Chat(...).stream) to a distinct label such as "===
Warmup ===" or "=== Run 0: warmup ===", so the subsequent log "=== Run 1:
baseline TTFT ===" remains accurate; update only the string passed to log around
the warmup variable (warmup = Chat(args.model, args.isl).stream(client, 1) and
its follow-up log) to reflect the new label.
- Around line 184-185: The minFillerPrompts assignment uses min(1,
args.min_filler_prompts) which clamps values to at most 1 and breaks logic;
change it to max(1, args.min_filler_prompts) so minFillerPrompts enforces a
floor of 1 while preserving larger user inputs; update the line where
maxFillerPrompts and minFillerPrompts are set (references: maxFillerPrompts,
minFillerPrompts, args.max_filler_prompts, args.min_filler_prompts).

In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py`:
- Around line 112-119: The if-check guarding prefill args is ineffective because
AIDynamoArgs.prefill_worker uses default_factory=WorkerConfig and is always
truthy; either remove the redundant if and always call
self._get_nested_toml_args(td.cmd_args.dynamo.prefill_worker, "--prefill-"), or
change the condition to a real sentinel (e.g., check
td.cmd_args.dynamo.prefill_worker.num_nodes > 0 or an explicit enabled flag on
WorkerConfig) so that prefill args are only added when prefill is actually
configured; update the call site in slurm_command_gen_strategy where
td.cmd_args.dynamo.prefill_worker is referenced and ensure the chosen sentinel
exists on WorkerConfig.
- Around line 34-35: Replace f-string interpolation inside logging calls with
lazy %-style formatting: change logging.info(f"hf_home_path:
{td.cmd_args.hf_home_path}") and logging.info(f"storage_cache_dir:
{td.cmd_args.storage_cache_dir}") in slurm_command_gen_strategy.py to use
logging.info("hf_home_path: %s", td.cmd_args.hf_home_path) and
logging.info("storage_cache_dir: %s", td.cmd_args.storage_cache_dir"). Do the
same pattern for the other occurrences referenced in ai_dynamo.py (lines around
the logging calls in ai_dynamo functions) so all logging uses logger.info("msg
%s", value) instead of f-strings to preserve lazy formatting.

Comment on lines +46 to 48
class Workload(BaseModel):
"""Arguments for custom workloads."""

🧹 Nitpick | 🔵 Trivial

Docstring is copy-pasted from Args.

The Workload class docstring says "Arguments for custom workloads" — should describe what Workload actually represents (e.g., "Base configuration for a workload").

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py` around lines 46 - 48, Update
the docstring for the Workload class to accurately describe its purpose (e.g.,
"Base configuration for a workload" or "Represents the base configuration and
metadata for a custom workload") instead of the copy-pasted "Arguments for
custom workloads"; locate the Workload class definition inheriting from
BaseModel and replace the misleading docstring with a concise, accurate
description that reflects the class role and contents.

Comment on lines +158 to +164
decode_worker: WorkerConfig = Field(default_factory=WorkerConfig)
decode_cmd: str = Field(
default="python3 -m dynamo.vllm",
serialization_alias="decode-cmd",
validation_alias=AliasChoices("decode-cmd", "decode_cmd"),
)
prefill_worker: PrefillWorkerArgs | None = None
prefill_worker: WorkerConfig = Field(default_factory=WorkerConfig)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the prefill_worker definition at line 158-164
sed -n '158,164p' src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 384


🏁 Script executed:

# Find the full AIDynamoArgs class definition to see prefill_worker context
sed -n '130,170p' src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 1476


🏁 Script executed:

# Check the WorkerConfig definition to understand num_nodes default
rg -n "class WorkerConfig" src/cloudai/workloads/ai_dynamo/ -A 15

Repository: NVIDIA/cloudai

Length of output: 1423


🏁 Script executed:

# Examine the get_cached_nodes_spec method in slurm_command_gen_strategy.py
rg -n "def get_cached_nodes_spec" src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py -A 25

Repository: NVIDIA/cloudai

Length of output: 1257


🏁 Script executed:

# Check how the if-condition on line 184 is supposed to work - look for any explicit None assignments
rg -n "prefill_worker.*None" src/cloudai/workloads/ai_dynamo/

Repository: NVIDIA/cloudai

Length of output: 40


🏁 Script executed:

# Check test configurations to see how prefill_worker is set
find src/cloudai/workloads/ai_dynamo -name "*.py" | xargs rg -l "prefill_worker" | head -5

Repository: NVIDIA/cloudai

Length of output: 228


🏁 Script executed:

# Look at a test or example config to see expected behavior
rg -n "prefill_worker" src/cloudai/workloads/ai_dynamo/ -B1 -A1 | head -40

Repository: NVIDIA/cloudai

Length of output: 3717


🏁 Script executed:

# Check if there's any logic that explicitly sets prefill_worker to None or checks for presence before use
rg -n "prefill_worker" src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py -B2 -A2

Repository: NVIDIA/cloudai

Length of output: 957


🏁 Script executed:

# Check if there are any test configuration files that show how prefill_worker should be used
find . -name "*.yaml" -o -name "*.yml" -o -name "*.toml" | xargs rg -l "prefill" 2>/dev/null | head -5

Repository: NVIDIA/cloudai

Length of output: 358


🏁 Script executed:

# Look at kubernetes_json_gen_strategy.py line 66 to understand the expectation
sed -n '60,70p' src/cloudai/workloads/ai_dynamo/kubernetes_json_gen_strategy.py

Repository: NVIDIA/cloudai

Length of output: 542


🏁 Script executed:

# Check if there's a conditional way that prefill_worker could be None or falsy anywhere
sed -n '360,380p' src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 1223


🏁 Script executed:

# Look at the full constraint_check method and surrounding context
sed -n '370,400p' src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 1758


🏁 Script executed:

# Look at the git diff or history to see what the previous type was
git log --oneline -n 20 src/cloudai/workloads/ai_dynamo/ai_dynamo.py

Repository: NVIDIA/cloudai

Length of output: 99


🏁 Script executed:

# Check if there's a test config that explicitly doesn't set prefill_worker to see expected behavior
cat ./conf/experimental/ai_dynamo/test/vllm.toml

Repository: NVIDIA/cloudai

Length of output: 2046


🏁 Script executed:

# Check the other test config
cat ./conf/experimental/ai_dynamo/test_scenario/vllm_slurm.toml

Repository: NVIDIA/cloudai

Length of output: 1806


🏁 Script executed:

# Verify the behavior in get_cached_nodes_spec with decode-only config
# Check lines 183-193 for the logic
sed -n '182,210p' src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py

Repository: NVIDIA/cloudai

Length of output: 1475


🏁 Script executed:

# Double-check: what would be prefill_n if prefill_worker has default WorkerConfig(num_nodes=1)?
# Trace through: line 184 condition would always pass, line 185 assigns prefill_n = 1
# Then line 199 would calculate total_nodes as 1 (prefill) + decode_n
# For vllm.toml decode-only, this would be 1 + 1 = 2 instead of 1
sed -n '199,205p' src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py

Repository: NVIDIA/cloudai

Length of output: 302


prefill_worker default causes decode-only setups to over-allocate nodes.

prefill_worker defaults to WorkerConfig(num_nodes=1). In get_cached_nodes_spec() (slurm_command_gen_strategy.py line 184), the check if td.cmd_args.dynamo.prefill_worker: is always true because Pydantic BaseModel instances are always truthy. This means prefill_n gets set to the default num_nodes=1 even when prefill is not configured, causing total_nodes = 1 + decode_n instead of just decode_n. For the decode-only test config (vllm.toml), this allocates 2 nodes instead of 1.

Other code expects prefill_worker to be None when unconfigured: constraint_check() at line 381 uses if prefill_worker else 0, and kubernetes_json_gen_strategy.py line 66 checks if not prefill_worker: and raises an error.

Make prefill_worker optional:

-    prefill_worker: WorkerConfig = Field(default_factory=WorkerConfig)
+    prefill_worker: Optional[WorkerConfig] = None
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py` around lines 158 - 164, The
prefill_worker field currently defaults to a WorkerConfig instance which makes
truthiness checks (e.g., in get_cached_nodes_spec, constraint_check,
kubernetes_json_gen_strategy) behave as if prefill is configured; change
prefill_worker to be optional by updating its annotation to
Optional[WorkerConfig] and set Field(default=None) (remove default_factory), so
unconfigured manifests produce None; update any serialization/validation aliases
if present and run tests that exercise get_cached_nodes_spec, constraint_check,
and kubernetes_json_gen_strategy to confirm behavior.

Comment on lines 337 to 352
@property
def installables(self) -> list[Installable]:
return [self.docker_image, self.script, self.dynamo_repo, self.hf_model]
"""Get all installables for this test definition."""
result = [
self.docker_image,
self.script,
#self.hf_model,
self.genai_perf_script,
self.calc_percentile_csv,
self.cmd_args.lmbench.script,
self.cmd_args.custom_workload.script,
File(Path(__file__).parent.parent / "ai_dynamo/openai_chat_client.py"),
*self.git_repos,
]

return result

🧹 Nitpick | 🔵 Trivial

Commented-out self.hf_model in installables.

Line 343 has #self.hf_model commented out. If this is intentional (e.g., HF model download handled elsewhere), add a brief comment explaining why. If it's temporary, track it.

Do you want me to open an issue to track re-enabling self.hf_model in installables?

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.py` around lines 337 - 352, The
installables list in the installables property currently has self.hf_model
commented out without explanation; either re-enable self.hf_model in the
returned list (remove the comment) if HF model downloads should be part of
installables, or leave it commented and add a short clarifying comment next to
the commented line explaining why HF model is excluded (e.g., "HF model download
is performed elsewhere" or "temporarily disabled—tracked by ISSUE-XXXX"); if you
choose to keep it disabled, also create or link an issue to track re-enabling
and reference that issue ID in the comment.

Comment on lines +858 to 874
_query_frontend() {
local content="${1:-The color of sky is}"
content=$(echo "$content" | sed 's/"/\\"/g' | sed 's/\n/\\n/g')
local max_tokens="${2:-10}"

local json='{
"model": "'${dynamo_args["model"]}'",
"messages": [{"role": "user", "content": "The color of sky is"}],
"messages": [{"role": "user", "content": "'"$content"'"}],
"stream": false,
"max_tokens": 10
"max_tokens": '$max_tokens',
"temperature": 0,
"top_p": 0.0001
}'
curl -s -X POST "${dynamo_args["url"]}/v1/chat/completions" -H "Content-Type: application/json" -d "$json"

echo "$json" > "$RESULTS_DIR/curl_cmd.json"
curl -s -X POST "${dynamo_args["url"]}/v1/chat/completions" -H "Content-Type: application/json" -d @$RESULTS_DIR/curl_cmd.json
}

🧹 Nitpick | 🔵 Trivial

_query_frontend is never invoked and uses fragile JSON construction.

This function (flagged by ShellCheck SC2329) is dead code in the current flow. If retained for debugging, note that the JSON on lines 863-870 is built via string interpolation — a model name with special characters would break the JSON. Consider using jq (already required at line 643) for safe JSON construction.

🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 858-874: This function is never invoked. Check usage (or ignored if invoked indirectly).

(SC2329)


[info] 873-873: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 858 - 874, The
_query_frontend function is dead code and builds JSON via fragile string
interpolation using dynamo_args["model"] and content, which can break for
special characters; either remove the unused _query_frontend function or (if
keeping for debugging) rewrite it to construct the payload securely with jq
(already required earlier) and then POST the safe JSON file at
"$RESULTS_DIR/curl_cmd.json" to "${dynamo_args["url"]}/v1/chat/completions",
ensuring proper escaping of dynamo_args["model"], the messages array, and
max_tokens instead of naive string concatenation.

Comment on lines +68 to +73
raw_doc = "".join(random.choices(string.ascii_letters + string.digits, k=self.isl * 4))

num_tokens = self.isl - 37
ids = self.tok.encode(raw_doc, add_special_tokens=False, truncation=True, max_length=num_tokens)
assert len(ids) == num_tokens, f"Expected {num_tokens} tokens, got {len(ids)}"
doc = self.tok.decode(ids, skip_special_tokens=True)

⚠️ Potential issue | 🟠 Major

Magic number 37 is unexplained and likely model-dependent.

Line 70 uses self.isl - 37 to account for what appears to be chat template overhead, but this number is not documented and will vary across models/tokenizers. If the model's chat template adds more or fewer tokens, the final prompt won't match the intended isl.

Consider deriving the overhead dynamically (e.g., tokenize the template with a placeholder and measure the delta), or at minimum document what 37 represents.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 68 - 73,
Replace the unexplained magic number 37 used to compute num_tokens by
dynamically measuring the tokenizer overhead of the chat prompt template instead
of hardcoding it: build the chat/template string used around the user doc (or a
placeholder like "<DOC>"), call self.tok.encode on that template
(add_special_tokens=False, truncation=True) to get overhead = len(template_ids)
- len(placeholder_ids) or simply len(template_ids) and then compute num_tokens =
self.isl - overhead; update code around raw_doc, num_tokens, and tok.encode to
use this computed overhead and add a short comment explaining the measurement so
the behavior adapts to different models/tokenizers (replace all uses of the
literal 37).
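
A possible shape for that measurement, assuming self.tok is a Hugging Face tokenizer that exposes apply_chat_template; the helper name and placeholder string are illustrative:

def _template_overhead(tok) -> int:
    # Render the chat template around a known placeholder and measure how many
    # extra tokens the template itself contributes.
    placeholder = "<DOC>"
    rendered = tok.apply_chat_template(
        [{"role": "user", "content": placeholder}],
        tokenize=False,
        add_generation_prompt=True,
    )
    template_ids = tok.encode(rendered, add_special_tokens=False)
    placeholder_ids = tok.encode(placeholder, add_special_tokens=False)
    return len(template_ids) - len(placeholder_ids)

# num_tokens = self.isl - _template_overhead(self.tok)  # replaces the hardcoded 37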

Comment on lines +184 to +185
maxFillerPrompts = args.max_filler_prompts
minFillerPrompts = min(1, args.min_filler_prompts)

⚠️ Potential issue | 🔴 Critical

Bug: min(1, args.min_filler_prompts) always clamps to ≤ 1, ignoring user input.

min(1, x) returns 1 when x ≥ 1 and returns x when x < 1 (which would be 0 or negative — breaking the binary search). This should be max(1, ...) to enforce a floor of 1 while respecting larger user-supplied values.

Suggested fix
-    minFillerPrompts = min(1, args.min_filler_prompts)
+    minFillerPrompts = max(1, args.min_filler_prompts)
🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 184 -
185, The minFillerPrompts assignment uses min(1, args.min_filler_prompts) which
clamps values to at most 1 and breaks logic; change it to max(1,
args.min_filler_prompts) so minFillerPrompts enforces a floor of 1 while
preserving larger user inputs; update the line where maxFillerPrompts and
minFillerPrompts are set (references: maxFillerPrompts, minFillerPrompts,
args.max_filler_prompts, args.min_filler_prompts).

Comment on lines +244 to +249
log("=== Run 1: warmup ===")
warmup = Chat(args.model, args.isl).stream(client, 1)
log(f"Run 1: warmup: TTFT = {warmup.ttft_seconds:.3f}s with {warmup.cached_tokens} cached tokens")

# ---------------- RUN 1 ----------------
log("=== Run 1: baseline TTFT ===")

⚠️ Potential issue | 🟡 Minor

Duplicate "Run 1" label is confusing.

Line 244 logs "=== Run 1: warmup ===" and line 249 logs "=== Run 1: baseline TTFT ===". The warmup should be labeled separately (e.g., "Run 0" or just "Warmup") to avoid ambiguity in logs.

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/openai_chat_client.py` around lines 244 -
249, Rename the duplicate "Run 1" warmup label to avoid confusion: change the
log call that currently prints "=== Run 1: warmup ===" (the warmup before
calling Chat(...).stream) to a distinct label such as "=== Warmup ===" or "===
Run 0: warmup ===", so the subsequent log "=== Run 1: baseline TTFT ===" remains
accurate; update only the string passed to log around the warmup variable
(warmup = Chat(args.model, args.isl).stream(client, 1) and its follow-up log) to
reflect the new label.

Comment on lines +34 to +35
logging.info(f"hf_home_path: {td.cmd_args.hf_home_path}")
logging.info(f"storage_cache_dir: {td.cmd_args.storage_cache_dir}")

🧹 Nitpick | 🔵 Trivial

Avoid f-string formatting in logging.info calls.

Using f-strings with logging defeats lazy formatting — the string is always interpolated even when the log level suppresses the message. Use %-style placeholders instead.

-        logging.info(f"hf_home_path: {td.cmd_args.hf_home_path}")
-        logging.info(f"storage_cache_dir: {td.cmd_args.storage_cache_dir}")
+        logging.info("hf_home_path: %s", td.cmd_args.hf_home_path)
+        logging.info("storage_cache_dir: %s", td.cmd_args.storage_cache_dir)

This pattern recurs in ai_dynamo.py as well (lines 333, 360, 365, 368, 384, etc.).

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py` around lines
34 - 35, Replace f-string interpolation inside logging calls with lazy %-style
formatting: change logging.info(f"hf_home_path: {td.cmd_args.hf_home_path}") and
logging.info(f"storage_cache_dir: {td.cmd_args.storage_cache_dir}") in
slurm_command_gen_strategy.py to use logging.info("hf_home_path: %s",
td.cmd_args.hf_home_path) and logging.info("storage_cache_dir: %s",
td.cmd_args.storage_cache_dir"). Do the same pattern for the other occurrences
referenced in ai_dynamo.py (lines around the logging calls in ai_dynamo
functions) so all logging uses logger.info("msg %s", value) instead of f-strings
to preserve lazy formatting.

Comment on lines 112 to 119
if td.cmd_args.dynamo.prefill_worker:
args.extend(self._get_toml_args(td.cmd_args.dynamo.prefill_worker, "--prefill-"))
args.extend(self._get_toml_args(td.cmd_args.dynamo.decode_worker, "--decode-"))
args.extend(self._get_toml_args(td.cmd_args.genai_perf, "--genai-perf-"))
args.extend(self._get_nested_toml_args(td.cmd_args.dynamo.prefill_worker, "--prefill-"))
args.extend(self._get_nested_toml_args(td.cmd_args.dynamo.decode_worker, "--decode-"))

args.extend(self._get_nested_toml_args(td.cmd_args.lmcache, "--lmcache-"))
args.extend(self._get_nested_toml_args(td.cmd_args.genai_perf, "--genai_perf-"))
args.extend(self._get_nested_toml_args(td.cmd_args.lmbench, "--lmbench-"))
args.extend(self._get_nested_toml_args(td.cmd_args.custom_workload, "--custom_workload-"))

🧹 Nitpick | 🔵 Trivial

Conditional on prefill_worker is always truthy.

AIDynamoArgs.prefill_worker has default_factory=WorkerConfig, so it's never None or falsy. The if guard on line 112 will always pass. If the intent is to skip prefill args when not configured, you'd need an explicit sentinel (e.g., checking num_nodes > 0). Otherwise, remove the guard for clarity.
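A minimal sketch of the sentinel variant, assuming num_nodes is the right signal and is a plain int in non-DSE runs (it can also be a list, as noted elsewhere in this review):

prefill = td.cmd_args.dynamo.prefill_worker
# Hypothetical sentinel: treat "no prefill nodes requested" as "prefill not configured".
prefill_nodes = prefill.num_nodes if isinstance(prefill.num_nodes, int) else 0
if prefill_nodes > 0:
    args.extend(self._get_nested_toml_args(prefill, "--prefill-"))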

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py` around lines
112 - 119, The if-check guarding prefill args is ineffective because
AIDynamoArgs.prefill_worker uses default_factory=WorkerConfig and is always
truthy; either remove the redundant if and always call
self._get_nested_toml_args(td.cmd_args.dynamo.prefill_worker, "--prefill-"), or
change the condition to a real sentinel (e.g., check
td.cmd_args.dynamo.prefill_worker.num_nodes > 0 or an explicit enabled flag on
WorkerConfig) so that prefill args are only added when prefill is actually
configured; update the call site in slurm_command_gen_strategy where
td.cmd_args.dynamo.prefill_worker is referenced and ensure the chosen sentinel
exists on WorkerConfig.

@greptile-apps greptile-apps bot left a comment

7 files reviewed, 9 comments

Comment on lines +62 to +66
def can_handle_directory(self) -> bool:
return True

def generate_report(self) -> None:
output_path = self.test_run.output_path
source_csv = next(output_path.rglob(CSV_FILES_PATTERN))
target_csv = output_path / "report.csv"

total_gpus = self._calculate_total_gpus()
if total_gpus is None:
logging.warning("gpus_per_node is None, skipping Overall Output Tokens per Second per GPU calculation.")
shutil.copy2(source_csv, target_csv)
return

sections = self._read_csv_sections(source_csv)
self._write_sections_with_metric(target_csv, sections, total_gpus)
pass

Report generation now disabled
can_handle_directory() always returns True and generate_report() is now a no-op. This will cause CloudAI to select this strategy for any directory and then generate no report artifacts, which breaks prior behavior and can hide missing output files. If the intent is “metrics-only” mode, this should at least preserve can_handle_directory() gating and/or write the expected report CSV.
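One possible shape for preserving the gating, sketched from the names visible in the old code above; the pattern value here is an assumption, not the module's actual constant:

CSV_FILES_PATTERN = "*genai_perf*.csv"  # assumed; reuse the module's real constant

def can_handle_directory(self) -> bool:
    # Only claim directories that actually contain the expected CSV output,
    # instead of returning True unconditionally.
    return any(self.test_run.output_path.rglob(CSV_FILES_PATTERN))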

Comment on lines 116 to +120
assert strategy.get_metric("invalid-metric") == METRIC_ERROR

(ai_dynamo_tr.output_path / "profile_genai_perf.csv").write_text("")
assert strategy.get_metric("default") == METRIC_ERROR
# Empty the CSV file to test error handling
(ai_dynamo_tr.output_path / "genai_perf-report.csv").write_text("")
assert strategy.get_metric("invalid-metric") == METRIC_ERROR

Test writes wrong filename
This test empties genai_perf-report.csv, but the implementation reads genai_perf_report.csv (underscore). As written, the test doesn’t exercise the empty-file path at all.

Suggested change
assert strategy.get_metric("invalid-metric") == METRIC_ERROR
(ai_dynamo_tr.output_path / "profile_genai_perf.csv").write_text("")
assert strategy.get_metric("default") == METRIC_ERROR
# Empty the CSV file to test error handling
(ai_dynamo_tr.output_path / "genai_perf-report.csv").write_text("")
assert strategy.get_metric("invalid-metric") == METRIC_ERROR
(ai_dynamo_tr.output_path / "genai_perf_report.csv").write_text("")

Comment on lines +372 to +401
def constraint_check(self, tr: TestRun) -> bool:
prefill_worker = tr.test.cmd_args.dynamo.prefill_worker
decode_worker = tr.test.cmd_args.dynamo.decode_worker

prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
prefill_nodes = prefill_worker.num_nodes if prefill_worker else 0
decode_nodes = decode_worker.num_nodes if decode_worker else 1

if self.constraints.prefill_tp_le_decode_tp and prefill_tp > decode_tp:
logging.info("constraint_check failed for: prefill_tp_le_decode_tp")
return False
logging.info("constraint_check passed for: prefill_tp_le_decode_tp")

gpus_per_node = tr.test.cmd_args.gpus_per_node
if self.constraints.tp_times_pp_le_gpus_per_node and (
prefill_tp * prefill_pp > gpus_per_node or decode_tp * decode_pp > gpus_per_node
):
logging.info("constraint_check failed for: tp_times_pp_le_gpus_per_node")
return False
logging.info("constraint_check passed for: tp_times_pp_le_gpus_per_node")

num_nodes = tr.test.cmd_args.num_nodes
nodes_check = self.constraints.prefill_decode_nodes_le_total_nodes
if nodes_check and prefill_nodes + decode_nodes > num_nodes:
logging.info("constraint_check failed for: prefill_decode_nodes_le_total_nodes")
return False
logging.info("constraint_check passed for: prefill_decode_nodes_le_total_nodes")

Constraint check can crash
constraint_check() multiplies prefill_tp * prefill_pp and decode_tp * decode_pp, but these fields are declared as int | list[int] | None. When unset they’ll be None, and None * None will raise TypeError (and lists will also break comparisons/multiplication). This will fail for configs that omit parallelism values (common in TOML).

Also appears in: src/cloudai/workloads/ai_dynamo/ai_dynamo.py:376-391
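A minimal normalization sketch; the single-element-list policy is an assumption about how DSE values should be treated at constraint-check time:

def _as_int(value, default: int = 1) -> int:
    # Collapse int | list[int] | None into a concrete int for constraint math.
    if value is None:
        return default
    if isinstance(value, list):
        # Assumed policy: a one-element list means that concrete value; otherwise fall back.
        return int(value[0]) if len(value) == 1 else default
    return int(value)

prefill_tp = _as_int(prefill_worker.args.tensor_parallel_size) if prefill_worker else 1
prefill_pp = _as_int(prefill_worker.args.pipeline_parallel_size) if prefill_worker else 1
decode_tp = _as_int(decode_worker.args.tensor_parallel_size) if decode_worker else 1
decode_pp = _as_int(decode_worker.args.pipeline_parallel_size) if decode_worker else 1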

Comment on lines +310 to +316
logging.debug(f"Copying wrapper script {wrapper_script_path} to pod {frontend_pod}")
cp_wrapper_cmd = f"kubectl cp {wrapper_script_path} {self.default_namespace}/{frontend_pod}:{pod_wrapper_path}"
subprocess.run(cp_wrapper_cmd, shell=True, capture_output=True, text=True, check=True)

logging.debug(f"Copying calc_percentile_csv script {calc_csv_script_path} to pod {frontend_pod}")
cp_calc_cmd = f"kubectl cp {calc_csv_script_path} {self.default_namespace}/{frontend_pod}:{pod_calc_csv_path}"
subprocess.run(cp_calc_cmd, shell=True, capture_output=True, text=True, check=True)

Shell injection via kubectl cp
_run_genai_perf() builds kubectl cp ... commands as strings and runs them with shell=True using paths derived from installed artifacts and pod/namespace names. If any of these contain shell metacharacters (spaces, ;, etc.), the command will break or execute unintended shell tokens. This is avoidable by passing an argument list to subprocess.run (no shell) or by properly escaping.

Also note: this occurs for both cp_wrapper_cmd and cp_calc_cmd.
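An argv-list sketch of the same two copies, reusing the variable names from the snippet above so no shell parsing is involved:

import subprocess

subprocess.run(
    ["kubectl", "cp", str(wrapper_script_path),
     f"{self.default_namespace}/{frontend_pod}:{pod_wrapper_path}"],
    capture_output=True, text=True, check=True,
)
subprocess.run(
    ["kubectl", "cp", str(calc_csv_script_path),
     f"{self.default_namespace}/{frontend_pod}:{pod_calc_csv_path}"],
    capture_output=True, text=True, check=True,
)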

Comment on lines 10 to 20

export SLURM_JOB_MASTER_NODE=$(scontrol show hostname $SLURM_JOB_NODELIST | head -n 1)

srun --export=ALL --mpi=pmix -N2 --container-image=nvcr.io/nvidia/ai-dynamo:24.09 --container-mounts=__OUTPUT_DIR__/output:/cloudai_run_results,__OUTPUT_DIR__/install:/cloudai_install,__OUTPUT_DIR__/output,__INSTALL_DIR__:__INSTALL_DIR__,__INSTALL_DIR__/huggingface:/root/.cache/huggingface,__CLOUDAI_DIR__/src/cloudai/workloads/ai_dynamo/ai_dynamo.sh:__CLOUDAI_DIR__/src/cloudai/workloads/ai_dynamo/ai_dynamo.sh --output=__OUTPUT_DIR__/output/mapping-stdout.txt --error=__OUTPUT_DIR__/output/mapping-stderr.txt bash -c "echo \$(date): \$(hostname):node \${SLURM_NODEID}:rank \${SLURM_PROCID}."
srun --export=ALL --mpi=pmix -N2 --container-image=nvcr.io/nvidia/ai-dynamo:24.09 --container-mounts=__OUTPUT_DIR__/output:/cloudai_run_results,__OUTPUT_DIR__/install:/cloudai_install,__OUTPUT_DIR__/output,__INSTALL_DIR__:/git/dynamo,__INSTALL_DIR__/LMCache__ab8530993992db873869ba882320953582d94309:/git/LMCache,__INSTALL_DIR__/LMBenchmark__e1406623c5e88878cf2b7fbd64fe6c47f7dcb66f:/git/LMBenchmark,__INSTALL_DIR__/huggingface:/workspace/hf_home --output=__OUTPUT_DIR__/output/mapping-stdout.txt --error=__OUTPUT_DIR__/output/mapping-stderr.txt bash -c "echo \$(date): \$(hostname):node \${SLURM_NODEID}:rank \${SLURM_PROCID}."

srun --export=ALL --mpi=pmix -N2 --container-image=nvcr.io/nvidia/ai-dynamo:24.09 --container-mounts=__OUTPUT_DIR__/output:/cloudai_run_results,__OUTPUT_DIR__/install:/cloudai_install,__OUTPUT_DIR__/output,__INSTALL_DIR__:__INSTALL_DIR__,__INSTALL_DIR__/huggingface:/root/.cache/huggingface,__CLOUDAI_DIR__/src/cloudai/workloads/ai_dynamo/ai_dynamo.sh:__CLOUDAI_DIR__/src/cloudai/workloads/ai_dynamo/ai_dynamo.sh --ntasks=2 --ntasks-per-node=1 --output=__OUTPUT_DIR__/output/metadata/node-%N.toml --error=__OUTPUT_DIR__/output/metadata/nodes.err bash /cloudai_install/slurm-metadata.sh
srun --export=ALL --mpi=pmix -N2 --container-image=nvcr.io/nvidia/ai-dynamo:24.09 --container-mounts=__OUTPUT_DIR__/output:/cloudai_run_results,__OUTPUT_DIR__/install:/cloudai_install,__OUTPUT_DIR__/output,__INSTALL_DIR__:/git/dynamo,__INSTALL_DIR__/LMCache__ab8530993992db873869ba882320953582d94309:/git/LMCache,__INSTALL_DIR__/LMBenchmark__e1406623c5e88878cf2b7fbd64fe6c47f7dcb66f:/git/LMBenchmark,__INSTALL_DIR__/huggingface:/workspace/hf_home --ntasks=2 --ntasks-per-node=1 --output=__OUTPUT_DIR__/output/metadata/node-%N.toml --error=__OUTPUT_DIR__/output/metadata/nodes.err bash /cloudai_install/slurm-metadata.sh

num_retries=${DYNAMO_NUM_RETRY_ON_FAILURE:-0}
for try in $(seq 0 $num_retries); do
echo "Try $try of $num_retries"
rm -f __OUTPUT_DIR__/output/fatal_error.marker 2>/dev/null || true
srun \
srun \
--export=ALL \
--mpi=pmix \
-N2 \

Golden sbatch has unsafe args
This fixture now contains unescaped JSON/dict content in an argument (e.g. --genai_perf-extra-inputs "{"temperature": 0.7, ...}"). When used as a real sbatch script, the shell will mis-parse this into multiple tokens (or syntax errors), so the golden file encodes a command line that can’t actually run as written. This likely stems from _get_toml_args() emitting Python dict reprs rather than shell-safe strings.

@greptile-apps

greptile-apps bot commented Feb 9, 2026

Additional Comments (4)

src/cloudai/workloads/ai_dynamo/ai_dynamo.sh
Script no longer executable
ai_dynamo.sh changed mode from 100755 to 100644, but both Slurm and (likely) local execution paths assume it’s runnable. On Slurm you invoke it via bash /cloudai_install/ai_dynamo.sh, which is fine, but any direct execution (e.g., ./ai_dynamo.sh or tooling that relies on the executable bit) will fail after this change. Please either restore the executable bit or ensure all call sites always execute it via an explicit shell.


src/cloudai/workloads/ai_dynamo/__init__.py
Stale exports break imports
__init__.py still imports/exports DecodeWorkerArgs and PrefillWorkerArgs, but the workload schema was switched to WorkerConfig/WorkerBaseArgs (tests and TOMLs were updated accordingly). If these legacy names are no longer supported, this will cause confusing API surface and import errors for downstream users expecting the new types only; if they are supported, they should be wired consistently (e.g., avoid mixing WorkerConfig + PrefillWorkerArgs semantics).


src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py
Broken quoting for dict values
_get_toml_args() wraps values in double quotes, but for dict-like values it falls back to f'{prefix}{k} "{v}"' where v becomes Python’s dict repr (single quotes, braces). That string is not shell-safe and will be split by spaces/special chars, producing invalid argv. This is already visible in tests/ref_data/ai-dynamo.sbatch where --genai_perf-extra-inputs "{"temperature": 0.7, ...}" is not properly escaped.

If these args are expected to be structured JSON, they need to be emitted as a shell-safe single argument (e.g., JSON string with proper escaping) rather than Python repr.
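A sketch of one shell-safe rendering, assuming the goal is a single argv token per value (names here are illustrative, not the actual _get_toml_args signature):

import json
import shlex

def toml_value_to_arg(prefix: str, key: str, value) -> str:
    # Dicts/lists become JSON (not Python repr); shlex.quote keeps the payload one token.
    rendered = json.dumps(value) if isinstance(value, (dict, list)) else str(value)
    return f"{prefix}{key} {shlex.quote(rendered)}"

# toml_value_to_arg("--genai_perf-", "extra-inputs", {"temperature": 0.7})
# -> --genai_perf-extra-inputs '{"temperature": 0.7}'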


src/cloudai/workloads/ai_dynamo/slurm_command_gen_strategy.py
Node cache key becomes wrong
get_cached_nodes_spec() builds cache_key using self.test_run.num_nodes, then mutates self.test_run.num_nodes before caching the result. This means subsequent calls may compute a different key for the same run, bypassing the cache and potentially re-running node selection logic with different inputs.

Additionally, the log line Setting num_nodes from %d to %d logs self.test_run.num_nodes after it may have already been set to total_nodes, so it won’t report the real “from” value.

@greptile-apps greptile-apps bot left a comment

22 files reviewed, 5 comments

Comment on lines 66 to +68

def _get_nested_toml_args(self, base_model: BaseModel, prefix: str) -> List[str]:
result = self._get_toml_args(base_model, prefix, exclude=["args"])

Broken repo/file arg quoting
_get_toml_args() emits dict-valued fields using f'... "{v}"', which stringifies the whole dict with single quotes (Python repr). That produces shell-unsafe tokens and breaks args like --*-extra-inputs or any nested config dict when used in sbatch/srun. This is the same failure mode seen in the updated golden sbatch (unescaped {"temperature": ...} argument).

Consider serializing dict/list values to a shell-safe string (e.g., JSON via json.dumps(v)), and ensure proper quoting/escaping (or avoid string concatenation and build an argv list). Also applies to other dict-like fields not covered by the url/commit/mount_as and src special-cases.

Also appears in: tests/ref_data/ai-dynamo.sbatch:~40-70 (invalid quoting for --genai_perf-extra-inputs).

Comment on lines 115 to +119

assert strategy.get_metric("invalid-metric") == METRIC_ERROR

(ai_dynamo_tr.output_path / "profile_genai_perf.csv").write_text("")
assert strategy.get_metric("default") == METRIC_ERROR
# Empty the CSV file to test error handling
(ai_dynamo_tr.output_path / "genai_perf-report.csv").write_text("")

Test edits wrong file
This test empties genai_perf-report.csv, but the implementation reads ${benchmark_name}_report.csv (default benchmark_name is genai_perf), i.e. genai_perf_report.csv. As written, the “empty file” case isn’t exercised and the assertion is checking invalid-metric again.

Update the test to empty genai_perf_report.csv and assert a real metric returns METRIC_ERROR when the CSV is empty.

Comment on lines +46 to +48
result_dir="$2"
shift 2
;;

Uses undefined variable
cmdline_args is built using ${key}, but key is never set in this script. This will expand to an empty string (or cause issues under set -u) and indicates the command line is not what’s intended.

Remove ${key} or replace it with the correct flag/value (if you intended to pass a specific option, define it explicitly from parsed args).

Comment on lines 86 to 88

eval $full_cmd


eval on constructed cmd
This script runs eval $full_cmd, where $full_cmd is assembled from config/CLI values (cmd, cmdline_args, extra_args). If any of those contain shell metacharacters, the executed command can be altered (and even benign inputs with quotes/spaces can mis-parse).

Prefer executing via an argv array (e.g. cmd=(genai-perf profile ...) and "${cmd[@]}") and append extra args as array elements rather than string concatenation.

Comment on lines +320 to +323
{
process_args "$@"

report_file=$result_dir/$report_name

Possible division by zero
num_filler_prompts is computed as 1 + (kv_cache_token_size / num_filler_tokens). If kv_cache_token_size extraction fails and stays 0, or if num_filler_tokens is 0/empty, this can divide by zero or silently produce a bad value that breaks later calls.

Guard the arithmetic (validate both values are positive integers) and fail fast with a clear error when KV cache token size can’t be computed.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/cloudai/workloads/ai_dynamo/ai_dynamo.sh (3)

535-548: ⚠️ Potential issue | 🔴 Critical

Bug: Substring-based node matching can produce false positives.

_is_frontend_node, _is_decode_node, and _is_prefill_node all use glob *"$name"* against a comma-separated list. A node named node1 will falsely match a list containing node10.

Notably, _populate_nodelist (line 210) already handles this correctly with ,${exclude_nodelist}, / ,$node, fencing. Apply the same pattern here.

🐛 Proposed fix
 _is_decode_node() {
   local name="$(_current_node_name)"
-  [[ "${decode_config["node-list"]}" == *"$name"* ]]
+  [[ ",${decode_config["node-list"]}," == *",$name,"* ]]
 }
 
 _is_prefill_node() {
   local name="$(_current_node_name)"
-  [[ "${prefill_config["node-list"]}" == *"$name"* ]]
+  [[ ",${prefill_config["node-list"]}," == *",$name,"* ]]
 }

For _is_frontend_node, if frontend-node is always a single hostname (not a list), the current pattern is less dangerous but still risky:

 _is_frontend_node() {
   local name="$(_current_node_name)"
-  [[ "${dynamo_args["frontend-node"]}" == *"$name"* ]]
+  [[ "${dynamo_args["frontend-node"]}" == "$name" ]]
 }

83-93: ⚠️ Potential issue | 🟠 Major

_csv_index_of uses substring matching, risking false positives on similar hostnames.

Line 88's *"$name"* check means node1 matches node10. This feeds into rank assignment in _apply_sglang_section_args, so a wrong rank could cause multi-node sglang workers to misconfigure.

Consider exact matching:

♻️ Proposed fix
   for i in "${!arr[@]}"; do
-    if [[ "${arr[$i]}" == "$name" || "${arr[$i]}" == *"$name"* || "$name" == *"${arr[$i]}"* ]]; then
+    if [[ "${arr[$i]}" == "$name" ]]; then
       echo "$i"; return 0
     fi
   done

If fuzzy matching is intentional (e.g., short vs FQDN hostnames), use a more targeted approach like stripping the domain suffix rather than arbitrary substring containment.


1057-1081: ⚠️ Potential issue | 🟠 Major

setup_lmcache and setup_kvbm only run on the frontend node — non-frontend workers won't have the environment set up.

Lines 1060-1061 are inside the if _is_frontend_node block, so decode/prefill nodes on other machines never call setup_kvbm (which exports DYN_KVBM_DISK_CACHE_DIR and prepares cache) or setup_lmcache (which installs the package and writes the config). While the parent shell exports these variables, SLURM-launched worker processes on remote nodes won't inherit them.

A node-setup-cmd mechanism exists (line 45) to run setup on all nodes, but it requires explicit user configuration and isn't automatically invoked. For multi-node deployments with kvbm or lmcache connectors, either move the connector setup outside the frontend conditional block, or document that users must configure node-setup-cmd to include the necessary setup steps.

🤖 Fix all issues with AI agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh`:
- Around line 839-843: The readiness loop currently bypasses prefill readiness
by hardcoding want_prefill=0 and have_prefill=0; restore the real checks by
calling _expected_ready_prefill for want_prefill and _count_initialized_prefill
for have_prefill (instead of the commented-out calls) so the loop waits for
prefill workers to be ready before continuing, or if this bypass is intentional
add a clear inline comment on why prefill readiness is skipped (referencing
want_prefill, have_prefill, _expected_ready_prefill, _count_initialized_prefill
and the prefill worker launch around the prefill launch site).
- Around line 971-987: The log_gpu_utilization function currently blocks forever
on non-frontend nodes because it calls wait_for_dynamo_frontend which relies on
cluster-wide worker counts
(_count_initialized_decode/_count_initialized_prefill) that only read local
files; remove the wait_for_dynamo_frontend call from log_gpu_utilization and
either (a) drop the cluster-wide wait entirely so GPU monitoring starts
immediately for the worker, or (b) replace it with a simple local readiness
check (e.g., verify the local worker init marker the other functions use) so the
function no longer spins waiting for other nodes; update references in
log_gpu_utilization and keep wait_for_dynamo_frontend for codepaths that truly
need cluster-wide synchronization.
- Around line 775-781: The worker command builds duplicate --extra-args because
array_to_args processes all keys including "--extra-args" and then the script
appends ${decode_args["--extra-args"]} again; update either array_to_args to
skip the "--extra-args" key or remove the explicit
${decode_args["--extra-args"]} append in the command construction where
CUDA_VISIBLE_DEVICES... ${decode_config["cmd"]} $(array_to_args decode_args)
${decode_args["--extra-args"]} > $log_file 2>&1 &, and apply the same fix to the
launch_prefill command block so --extra-args appears only once.
- Around line 224-254: The _set_nodelists function can hit bash arithmetic
errors when decode_config["num-nodes"] or prefill_config["num-nodes"] are empty;
ensure both have safe defaults before any numeric comparison: set
decode_config["num-nodes"] and prefill_config["num-nodes"] to 0 when unset (e.g.
use parameter expansion defaults or explicitly assign) and update the
comparisons that use prefill_num_nodes and decode_nodelist_count (the checks
that call _csv_len and compare to decode_config["num-nodes"] /
prefill_num_nodes) to use the :-0 guard so arithmetic comparisons never receive
an empty string.

Comment on lines +224 to +254
_set_nodelists()
{
if [[ -z "${DYNAMO_NODELIST:-}" ]]; then
log "ERROR: DYNAMO_NODELIST is not set"
exit 1
fi

if [[ -z "${decode_config["node-list"]}" ]]; then
decode_config["node-list"]=$(_populate_nodelist "${decode_config["num-nodes"]}" "")
fi

if [[ -z "${prefill_config["node-list"]}" ]]; then
prefill_config["node-list"]=$(_populate_nodelist "${prefill_config["num-nodes"]}" "${decode_config["node-list"]}")
fi

# Prefill nodelist should match prefill node count (skip validation if num-nodes is 0)
local prefill_num_nodes="${prefill_config["num-nodes"]:-0}"
if [[ "$prefill_num_nodes" -gt 0 ]]; then
local prefill_nodelist_count=$(_csv_len "${prefill_config["node-list"]}")
if [[ "${prefill_nodelist_count}" -ne "${prefill_num_nodes}" ]]; then
log "ERROR: number of nodes in prefill nodelist (${prefill_nodelist_count}) does not match prefill node count (${prefill_num_nodes})"
exit 1
fi
fi

local decode_nodelist_count=$(_csv_len "${decode_config["node-list"]}")
if [[ "${decode_nodelist_count}" -ne "${decode_config["num-nodes"]}" ]]; then
log "ERROR: number of nodes in decode nodelist (${decode_nodelist_count}) does not match decode node count (${decode_config["num-nodes"]})"
exit 1
fi
}

⚠️ Potential issue | 🔴 Critical

Missing defaults for decode_config["num-nodes"] and prefill_config["num-nodes"] will crash _set_nodelists.

If --decode-num-nodes is never passed, decode_config["num-nodes"] is empty. Line 250 then runs [[ "" -ne "" ]], which is a bash arithmetic error. Same risk for prefill_config["num-nodes"] on line 241 (mitigated by the :-0 default there, but decode has no such guard).

Add defaults in the initialization block or guard the comparison.

🐛 Proposed fix — add defaults
 declare -A prefill_config
+prefill_config["num-nodes"]=0
 declare -A prefill_args
 declare -A decode_config
+decode_config["num-nodes"]=0
 declare -A decode_args
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 242-242: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 249-249: Declare and assign separately to avoid masking return values.

(SC2155)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 224 - 254, The
_set_nodelists function can hit bash arithmetic errors when
decode_config["num-nodes"] or prefill_config["num-nodes"] are empty; ensure both
have safe defaults before any numeric comparison: set decode_config["num-nodes"]
and prefill_config["num-nodes"] to 0 when unset (e.g. use parameter expansion
defaults or explicitly assign) and update the comparisons that use
prefill_num_nodes and decode_nodelist_count (the checks that call _csv_len and
compare to decode_config["num-nodes"] / prefill_num_nodes) to use the :-0 guard
so arithmetic comparisons never receive an empty string.

Comment on lines +775 to 781
log "Decode cmd: ${decode_config["cmd"]} $(array_to_args decode_args) ${decode_args["--extra-args"]}"
CUDA_VISIBLE_DEVICES=$gpu_list \
VLLM_NIXL_SIDE_CHANNEL_PORT=$nixl_port \
DYN_VLLM_KV_EVENT_PORT=$kv_event_port \
${dynamo_args["decode-cmd"]} \
${decode_config["cmd"]} \
$(array_to_args decode_args) ${decode_args["--extra-args"]} > $log_file 2>&1 &
done

⚠️ Potential issue | 🟠 Major

--extra-args value is emitted twice in the worker command line.

array_to_args decode_args iterates all keys including --extra-args, producing --extra-args <value> in its output. Then ${decode_args["--extra-args"]} appends the raw value again. The same issue applies to launch_prefill (lines 818-823).

Either skip --extra-args inside array_to_args, or remove the separate append.

🐛 Proposed fix — skip the key in `array_to_args`
 function array_to_args()
 {
   local -n arr=$1
   local result=""
   for key in "${!arr[@]}"; do
+    # Skip --extra-args; appended separately by callers
+    [[ "$key" == "--extra-args" ]] && continue
     shopt -s nocasematch
     val=$(replace_placeholders "${arr[$key]}")
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 780-780: Quote this to prevent word splitting.

(SC2046)


[info] 780-780: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 780-780: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 775 - 781, The
worker command builds duplicate --extra-args because array_to_args processes all
keys including "--extra-args" and then the script appends
${decode_args["--extra-args"]} again; update either array_to_args to skip the
"--extra-args" key or remove the explicit ${decode_args["--extra-args"]} append
in the command construction where CUDA_VISIBLE_DEVICES...
${decode_config["cmd"]} $(array_to_args decode_args)
${decode_args["--extra-args"]} > $log_file 2>&1 &, and apply the same fix to the
launch_prefill command block so --extra-args appears only once.

Comment on lines 839 to 843
local want_prefill=0 #$(_expected_ready_prefill)
local want_decode=$(_expected_ready_decode)

while :; do
local have_prefill=$(_count_initialized_prefill)
local have_prefill=0 #$(_count_initialized_prefill)

⚠️ Potential issue | 🟡 Minor

Prefill worker readiness is bypassed (hardcoded to 0).

Lines 839 and 843 hardcode want_prefill=0 and have_prefill=0 with the real calls commented out. If prefill workers are launched (line 1080), they won't be waited on before the frontend is declared "ready" and workloads begin.

If intentional, add a comment explaining why. If not, restore the calls.

🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 840-840: Declare and assign separately to avoid masking return values.

(SC2155)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 839 - 843, The
readiness loop currently bypasses prefill readiness by hardcoding want_prefill=0
and have_prefill=0; restore the real checks by calling _expected_ready_prefill
for want_prefill and _count_initialized_prefill for have_prefill (instead of the
commented-out calls) so the loop waits for prefill workers to be ready before
continuing, or if this bypass is intentional add a clear inline comment on why
prefill readiness is skipped (referencing want_prefill, have_prefill,
_expected_ready_prefill, _count_initialized_prefill and the prefill worker
launch around the prefill launch site).

Comment on lines +971 to +987
function log_gpu_utilization()
{
# Check if nvidia-smi is available
if ! command -v nvidia-smi &> /dev/null; then
log "Error: nvidia-smi not found"
return
fi

wait_for_dynamo_frontend
log "Starting GPU utilization monitoring"

nvidia-smi \
--query-gpu=timestamp,name,pci.bus_id,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used \
--format=csv \
-l 5 \
-f ${RESULTS_DIR}/gpu_utilization-${SLURM_NODEID}.csv
}

⚠️ Potential issue | 🟡 Minor

log_gpu_utilization will never start monitoring on non-frontend nodes.

wait_for_dynamo_frontend (line 979) expects cluster-wide worker counts but only sees local log files via _count_initialized_decode/_count_initialized_prefill. On a non-frontend node, the local worker count will never reach the cluster-wide threshold, so the function spins in the wait loop indefinitely and nvidia-smi monitoring never starts.

Consider either removing the wait_for_dynamo_frontend call from this function (the workers are already running when GPU monitoring matters), or using a simpler local readiness check.

🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 986-986: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 986-986: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
In `@src/cloudai/workloads/ai_dynamo/ai_dynamo.sh` around lines 971 - 987, The
log_gpu_utilization function currently blocks forever on non-frontend nodes
because it calls wait_for_dynamo_frontend which relies on cluster-wide worker
counts (_count_initialized_decode/_count_initialized_prefill) that only read
local files; remove the wait_for_dynamo_frontend call from log_gpu_utilization
and either (a) drop the cluster-wide wait entirely so GPU monitoring starts
immediately for the worker, or (b) replace it with a simple local readiness
check (e.g., verify the local worker init marker the other functions use) so the
function no longer spins waiting for other nodes; update references in
log_gpu_utilization and keep wait_for_dynamo_frontend for codepaths that truly
need cluster-wide synchronization.

@greptile-apps greptile-apps bot left a comment

24 files reviewed, 9 comments

Comment on lines +145 to +167
# Combine aiperf_args (key-value pairs) and extra_args
cmdline_args=""
for key in "${!aiperf_args[@]}"; do
local val="${aiperf_args[$key]}"
# Quote values that contain spaces so eval doesn't split them
if [[ "$val" == *" "* ]]; then
val="${val//\"/\\\"}" # Escape existing quotes
cmdline_args+="$key \"${val}\" "
else
cmdline_args+="$key ${val} "
fi
done
cmdline_args+="$extra_args"

# Build the full command with model and url
full_cmd="$cmd $cmdline_args"

# launch aiperf
log "Launching aiperf with args: $full_cmd"

eval "$full_cmd"

log "Done with aiperf run"

Command injection via eval
aiperf.sh builds full_cmd from cmd, parsed --* args, and extra_args, then executes it with eval "$full_cmd" (line ~165). This is unsafe and will also mis-handle quoting/escaping for many legitimate values (e.g., JSON, paths with spaces), since eval re-parses the string as shell code.

This is the same issue already noted for genai_perf.sh; it also affects this new script and should be fixed the same way (execute via an argv array / no eval).

Comment on lines 220 to 230
}

client = OpenAI(base_url=args.url, api_key="dummy-key-for-local-server")

if args.compute_kv_cache_token_size:
log(f"Computing KV cache token size...")
kv_cache_token_size = compute_kv_cache_token_size(args, client);
log(f"KV cache token size: {kv_cache_token_size}")
with Path(args.out).open("a", encoding="utf-8") as f:
f.write(f"KV cache token size: {kv_cache_token_size}\n")
return

Crash when --out unset
main() writes results using Path(args.out).open(...) in multiple branches (e.g., --compute_kv_cache_token_size and --dump_csv_header), but --out is not required in parse_args() (line ~148) and defaults to None. If a user runs with --dump_csv_header (or --compute_kv_cache_token_size) without --out, this will raise a TypeError when constructing Path(None).

Make --out required for modes that write output (or provide a default).
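A small sketch of the validation, assuming parser is still in scope in main() and the attribute names mirror the flags:

# Fail fast instead of crashing on Path(None) later.
if (args.compute_kv_cache_token_size or args.dump_csv_header) and not args.out:
    parser.error("--out is required with --compute_kv_cache_token_size or --dump_csv_header")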

Comment on lines +107 to +116
artifacts_dir="${result_dir}/lmbench_artifacts"
mkdir -p "$artifacts_dir"

for qps in ${all_qps//,/ }; do
log "Launching lmbench with args: $cmd $cmdline_args --qps $qps --output $output_file"
output_file="${artifacts_dir}/lmbench_bench_output_${qps}.csv"
report_file="${result_dir}/${report_name//.csv/_${qps}.csv}"
eval "$cmd $cmdline_args --qps $qps --output $output_file" > "${artifacts_dir}/lmbench_bench_output_${qps}.log" 2>&1
python3 ${install_dir}/${calc_percentile_csv_script} --input $output_file --output ${report_file}
done

output_file used before set
Inside the QPS loop, the log line uses --output $output_file before output_file is assigned (line ~111-113). This means logs will show an empty/previous value and makes debugging harder; it also indicates the command composition order is wrong.

Move the output_file=... assignment before it’s referenced in the log / command string.

Comment on lines +1012 to +1033
local workload_name="${workload_config_ref["--name"]}"
local script="${workload_config_ref["--script"]}"
local expanded_config=$(array_to_args "$workload_config_name")
local expanded_arguments=$(array_to_args "$workload_args_name")

log "Launching $workload_name with cmd: ${INSTALL_DIR}/$script $expanded_config -- $expanded_arguments"

# Use eval to properly handle values with spaces in expanded_config
bash ${INSTALL_DIR}/$script \
--install_dir "$INSTALL_DIR" \
--result_dir "$RESULTS_DIR" \
--model "${dynamo_args["model"]}" \
--url "http://${dynamo_args["frontend-node"]}" \
--port "${dynamo_args["port"]}" \
--endpoint "${dynamo_args["endpoint"]}" \
--gpus_per_node "$(_gpus_per_node)" \
--connector "${dynamo_args["connector"]}" \
--kvbm_metrics_port "${DYN_KVBM_METRICS_PORT:-6880}" \
--decode-nodes "${decode_config["node-list"]}" \
$expanded_config \
-- $expanded_arguments > ${RESULTS_DIR}/$workload_name.log 2>&1


Arg injection via unquoted expansions
launch_workload() expands $expanded_config and $expanded_arguments directly into the bash ${INSTALL_DIR}/$script ... invocation (lines ~1020-1032) without quoting. If any values contain spaces, quotes, or shell metacharacters, they will be split or interpreted as separate flags/commands, breaking the run and potentially allowing unintended argument injection.

This is especially likely now that TOML can include JSON-like strings (e.g., extra-inputs). Consider building an argv array and appending elements, rather than concatenating/expanding strings.

Comment on lines +242 to +250

# Calculate total G1 memory across all GPUs
local g1_memory_total_gib=$(awk "BEGIN {printf \"%.2f\", $g1_memory_per_gpu_gib * $tp_size}")
log "G1 (GPU) total memory across $tp_size GPUs: $g1_memory_total_gib GiB"

# Calculate bytes per token = (total_G1_GiB * 1024^3) / G1_tokens
# Using awk to handle the initial float-to-int conversion from g1_memory_per_gpu_gib
bytes_per_token=$(awk "BEGIN {printf \"%d\", ($g1_memory_total_gib * 1024 * 1024 * 1024) / $g1_token_size}")
log "Calculated bytes per token: $bytes_per_token"

Division by zero in KV sizing
bytes_per_token is computed as (...)/ $g1_token_size (line ~249) after parsing g1_token_size from logs. If the log parse yields 0 (or empty) for any reason, this will trigger division by zero / awk error and cascade into invalid cache size calculations.

This needs a guard that g1_token_size is a positive integer before doing the division.

Comment on lines 323 to 333
report_file=$result_dir/$report_name

compute_kv_cache_token_size
local num_filler_prompts=$(( 1 + (kv_cache_token_size / num_filler_tokens) ))

log "Dumping CSV header"
python3 ${install_dir}/openai_chat_client.py --dump_csv_header --out $report_file

log "Launching custom workload with ISLs: $all_isl"
for isl in $(echo $all_isl | tr ',' '\n'); do
log "Launching custom workload with ISL: $isl"

Can divide by zero
num_filler_prompts is computed as 1 + (kv_cache_token_size / num_filler_tokens) (line ~326). If num_filler_tokens is 0/empty, this will raise a shell arithmetic error; if kv_cache_token_size extraction fails and remains 0, this silently produces a misleading value.

Validate both kv_cache_token_size and num_filler_tokens are positive integers before this computation.

Comment on lines +506 to +535
def constraint_check(self, tr: TestRun) -> bool:
prefill_worker = tr.test.cmd_args.dynamo.prefill_worker
decode_worker = tr.test.cmd_args.dynamo.decode_worker

prefill_tp = prefill_worker.args.tensor_parallel_size if prefill_worker else 1
decode_tp = decode_worker.args.tensor_parallel_size if decode_worker else 1
prefill_pp = prefill_worker.args.pipeline_parallel_size if prefill_worker else 1
decode_pp = decode_worker.args.pipeline_parallel_size if decode_worker else 1
prefill_nodes = prefill_worker.num_nodes if prefill_worker else 0
decode_nodes = decode_worker.num_nodes if decode_worker else 1

if self.constraints.prefill_tp_le_decode_tp and prefill_tp > decode_tp:
logging.info("constraint_check failed for: prefill_tp_le_decode_tp")
return False
logging.info("constraint_check passed for: prefill_tp_le_decode_tp")

gpus_per_node = tr.test.cmd_args.gpus_per_node
if self.constraints.tp_times_pp_le_gpus_per_node and (
prefill_tp * prefill_pp > gpus_per_node or decode_tp * decode_pp > gpus_per_node
):
logging.info("constraint_check failed for: tp_times_pp_le_gpus_per_node")
return False
logging.info("constraint_check passed for: tp_times_pp_le_gpus_per_node")

num_nodes = tr.test.cmd_args.num_nodes
nodes_check = self.constraints.prefill_decode_nodes_le_total_nodes
if nodes_check and prefill_nodes + decode_nodes > num_nodes:
logging.info("constraint_check failed for: prefill_decode_nodes_le_total_nodes")
return False
logging.info("constraint_check passed for: prefill_decode_nodes_le_total_nodes")

Constraint check type crash
constraint_check() multiplies/compares prefill_tp, prefill_pp, decode_tp, decode_pp, but these fields are declared as int | list[int] | None and can be None (or a list) when unset in TOML. That makes expressions like prefill_tp * prefill_pp and prefill_tp > decode_tp raise TypeError at runtime for common configs.

Normalize these values to integers (with defaults) before doing arithmetic/comparisons.

@greptile-apps

greptile-apps bot commented Feb 10, 2026

Additional Comments (2)

src/cloudai/workloads/ai_dynamo/openai_chat_client.py
Binary search lower bound bug
In compute_kv_cache_token_size(), minFillerPrompts is set as min(1, args.min_filler_prompts) (line ~185). This always evaluates to <= 1, so if the user sets --min_filler_prompts > 1, the binary search still starts at 1 and violates the intended lower bound.

This should be max(1, args.min_filler_prompts) (and likely clamped to <= max_filler_prompts).


src/cloudai/workloads/ai_dynamo/kubernetes_json_gen_strategy.py
num_nodes may not be int
_set_multinode_if_needed() does if cast(int, worker.num_nodes) > 1: (line ~138), but WorkerConfig.num_nodes is declared as int | list[int] (and scenarios may provide string values). If it’s a list (DSE/unrolled configs) or a string, this > comparison will raise at runtime.

This should branch on the actual type (int vs list/str) before comparing/using it.
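A normalization sketch; the handling of lists and strings is an assumed policy, not the module's current behavior:

def normalize_num_nodes(value) -> int:
    # Collapse int | list[int] | str into a concrete int before comparisons.
    if isinstance(value, list):
        return int(value[0]) if value else 1
    return int(value)

# In _set_multinode_if_needed (assumed call site):
# if normalize_num_nodes(worker.num_nodes) > 1: ...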

@greptile-apps greptile-apps bot left a comment

24 files reviewed, 6 comments

Comment on lines +182 to +184


def compute_kv_cache_token_size(args: argparse.Namespace, client: OpenAI) -> int:

Broken min filler prompts

In compute_kv_cache_token_size(), minFillerPrompts is initialized as min(1, args.min_filler_prompts), which will always be <= 1 (and becomes 1 for any normal --min_filler_prompts >= 1). This breaks the intended binary-search bounds and can’t honor a user-provided minimum > 1. It should likely be max(1, args.min_filler_prompts) (or just args.min_filler_prompts if you want to allow 0).

Comment on lines +79 to +83
except Exception:
return float("nan")


def main() -> None:

Crashes when --input omitted

main() uses input_path = args.input and then builds output_path = args.output or f"{input_path}_summary.csv", but --input isn’t required. If the user forgets --input, this becomes None_summary.csv and then open(input_path, ...) raises TypeError. Make --input required (or handle None with a clear error message).
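A sketch of the stricter CLI, assuming the existing --input/-i and --output/-o flag names:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input", "-i", required=True, help="CSV file to summarize")
parser.add_argument("--output", "-o", default=None, help="defaults to <input>_summary.csv")
args = parser.parse_args()
output_path = args.output or f"{args.input}_summary.csv"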

Comment on lines +201 to +203

log("Sending an initial canary query with 1000 tokens...")
canary_chat = Chat(args.model, args.isl)

Binary-search bounds wrong

compute_kv_cache_token_size() sets minFillerPrompts = min(1, args.min_filler_prompts), which collapses the lower bound to 0/1 and can’t honor --min_filler_prompts > 1. This makes the “min filler prompts” CLI ineffective and changes the search behavior. Likely meant max(1, args.min_filler_prompts) (or just args.min_filler_prompts if 0 is valid).

Comment on lines +91 to +100

rows: List[Dict[str, Any]] = []
with open(input_path, newline="") as f:
reader = csv.DictReader(f)
for r in reader:
rows.append(r)

# Build summaries
summaries: List[Dict[str, Any]] = []


--input None causes crash

main() doesn’t require --input/-i, but immediately does output_path = args.output or f"{input_path}_summary.csv" and then open(input_path, ...). If --input is omitted, input_path is None and open(None, ...) raises TypeError. Make --input required or explicitly error out when it’s missing.

Comment on lines +497 to +502
logging.info(f"Workload {workload} not found in workload map")
result = False
continue
report_name = workload_map[workload].report_name
assert report_name is not None
workload_csv_file = output_path / report_name

constraint_check TypeError

constraint_check() pulls tensor_parallel_size / pipeline_parallel_size from WorkerConfig.args, but those fields are declared as int | list[int] | None. When unset they’ll be None, and expressions like prefill_tp > decode_tp or prefill_tp * prefill_pp will raise TypeError at runtime. This can happen with common configs that omit TP/PP. Normalize these values to ints (with defaults like 1) before comparisons/multiplication.

Comment on lines +32 to +40
result = list[str]()

mounts = [
f"{dynamo_repo_path}:{dynamo_repo_path}",
f"{self.system.hf_home_path.absolute()}:{td.cmd_args.huggingface_home_container_path}",
f"{td.script.installed_path.absolute()!s}:{td.script.installed_path.absolute()!s}",
]
logging.info(f"hf_home_path: {td.cmd_args.hf_home_path}")
logging.info(f"storage_cache_dir: {td.cmd_args.storage_cache_dir}")
if td.cmd_args.hf_home_path:
result.append(f"{td.cmd_args.hf_home_path}:{td.cmd_args.hf_home_path}")

if td.cmd_args.dynamo.backend == "sglang":
deepep_path = (
dynamo_repo_path / "components/backends/sglang/configs/deepseek_r1/wideep/deepep.json"
).absolute()
sgl_http_server_path = (
dynamo_repo_path / "components/backends/sglang/src/dynamo/sglang/utils/sgl_http_server.py"
).absolute()
mounts.extend(
[
f"{deepep_path!s}:{deepep_path!s}",
f"{sgl_http_server_path!s}:{sgl_http_server_path!s}",
]
)
return mounts
if td.cmd_args.storage_cache_dir:
result.append(f"{td.cmd_args.storage_cache_dir}:{td.cmd_args.storage_cache_dir}")

Missing required container mounts

_container_mounts() used to always mount the installed Dynamo repo + workload script into the container. After this change it returns only optional hf_home_path / storage_cache_dir mounts. But the generated srun command now runs bash {self.container_install_path}/{td.script.src.name} and passes --dynamo-repo {td.dynamo_repo.container_mount}; without mounting/installing those paths, the container won’t find the script/repo and the job will fail to start. Restore the required mounts (install dir/results dir and the dynamo repo mount) so the container runtime can access them.
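A rough sketch of what the restored list might look like, pieced together from the names mentioned in this comment and the diff above; the exact mount targets (container_install_path, td.dynamo_repo.container_mount) are assumptions about this PR's layout, not a definitive fix:

result = [
    # Dynamo repo and the workload script must be visible where the srun command expects them.
    f"{dynamo_repo_path}:{td.dynamo_repo.container_mount}",
    f"{td.script.installed_path.absolute()!s}:{self.container_install_path}/{td.script.src.name}",
]
if td.cmd_args.hf_home_path:
    result.append(f"{td.cmd_args.hf_home_path}:{td.cmd_args.hf_home_path}")
if td.cmd_args.storage_cache_dir:
    result.append(f"{td.cmd_args.storage_cache_dir}:{td.cmd_args.storage_cache_dir}")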
