1.2 Update #12
Conversation
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @compose.local.yaml.example:
- Around line 121-127: The opensearch service configuration has a wrong
container_name and typos in cluster/node names: update the opensearch service's
container_name from "staging-wiki-opensearch" to "local-wiki-opensearch" and
correct the environment values cluster.name and node.name from
"mediwiki-cluster" and "mediwiki-node" to "mediawiki-cluster" and
"mediawiki-node" respectively so they match the local compose naming conventions
and fix the spelling.
In @compose.production.yaml.example:
- Around line 75-84: The production compose has a wrong container_name and typos:
change the opensearch service's container_name from "staging-wiki-opensearch" to
"wiki-opensearch" to match production naming, and fix the typos in the
cluster.name and node.name environment entries by replacing "mediwiki-cluster"
and "mediwiki-node" with "mediawiki-cluster" and "mediawiki-node" so the
container name matches the other production service prefixes and the spelling is correct.
🧹 Nitpick comments (5)
compose.local.yaml.example (2)
79-79: Consider pinning the MariaDB image version. Using `mariadb:latest` may cause unexpected behavior when the image is updated. Consider pinning to a specific version (e.g., `mariadb:11.4`) for reproducibility, especially since other services like OpenSearch use pinned versions.
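As a lightweight guard against this class of drift, a hypothetical pre-deploy check could scan a compose file for unpinned image tags. This sketch is illustrative and not part of the PR; the helper name and file path are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: flag compose services whose image has no tag or uses ":latest".
set -euo pipefail

check_pinned_images() {
    local file="$1" status=0 image
    # Extract the value of every "image:" line, then inspect its tag
    while IFS= read -r image; do
        case "$image" in
            *:latest) echo "unpinned (latest): $image"; status=1 ;;
            *:*)      ;;  # explicitly tagged, fine
            *)        echo "unpinned (no tag): $image"; status=1 ;;
        esac
    done < <(grep -E '^[[:space:]]*image:' "$file" | sed 's/.*image:[[:space:]]*//')
    return "$status"
}

# Hypothetical usage:
#   check_pinned_images compose.local.yaml || echo "pin your image tags"
```

Note this is a heuristic: an untagged image pulled from a registry with a port (e.g. `localhost:5000/app`) would be misclassified, so treat it as a nudge rather than a validator.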
100-101: Consider pinning the MinIO image version. Same concern as MariaDB: using `minio/minio:latest` may lead to reproducibility issues. Consider pinning to a specific version.

just/docker.just (1)
6-8: Consider an optional build flag for faster routine starts. The `start` recipe always uses `--build`, which triggers a full image rebuild on every start. For routine container restarts after code is already built, this may be unnecessarily slow. Consider making the build optional or adding a separate `build` recipe.

💡 Example: separate start and start-build recipes

```diff
+# Start containers (build if needed)
+start-build:
+    #!/usr/bin/env bash
+    set -euo pipefail
+    JUSTFILE_DIR="{{justfile_directory()}}"
+    just _verify-file compose.yaml
+    cd "$JUSTFILE_DIR" && docker compose up -d --build
+
 # Start all wiki containers
 start:
     #!/usr/bin/env bash
     set -euo pipefail
     JUSTFILE_DIR="{{justfile_directory()}}"
     just _verify-file compose.yaml
-    cd "$JUSTFILE_DIR" && docker compose up -d --build
+    cd "$JUSTFILE_DIR" && docker compose up -d
```

Dockerfile (2)
156-157: Hardcoded sitemap filename contradicts the generic image goal. The sitemap filename `sitemap-index-atl.wiki.xml` is hardcoded, but the PR objective describes creating a "generic, environment-driven" wiki image. Consider making this configurable via a build argument or environment variable.

💡 Suggested approach

```diff
+ARG SITEMAP_FILENAME=sitemap-index-wiki.xml
+
 RUN mkdir -p /var/www/wiki/mediawiki && \
     mkdir -p /var/www/wiki/cache && \
     mkdir -p /var/www/wiki/sitemap && \
-    touch /var/www/wiki/sitemap/sitemap-index-atl.wiki.xml && \
-    ln -s /var/www/wiki/sitemap/sitemap-index-atl.wiki.xml /var/www/wiki/sitemap.xml && \
+    touch /var/www/wiki/sitemap/${SITEMAP_FILENAME} && \
+    ln -s /var/www/wiki/sitemap/${SITEMAP_FILENAME} /var/www/wiki/sitemap.xml && \
```
178-179: Track the upstream fix for the MWCallbackStream return type. The comment indicates this is a temporary workaround. Consider opening an upstream issue (if not already tracked) and adding a TODO with a link so it can be removed when fixed upstream.
Would you like me to help draft an issue for the MediaWiki upstream project, or search for an existing issue tracking this return type declaration problem?
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- .env.example
- Dockerfile
- Justfile
- compose.local.yaml.example
- compose.production.yaml.example
- compose.staging.yaml.example
- just/base.just
- just/docker.just
- just/opensearch.just
- wiki/configs/50-Extensions_Load.php
🚧 Files skipped from review as they are similar to previous changes (3)
- wiki/configs/50-Extensions_Load.php
- just/base.just
- compose.staging.yaml.example
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 4-4: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 6-6: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 8-8: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 8-8: [UnorderedKey] The MEDIAWIKI_BRANCH key should go before the MEDIAWIKI_MAJOR_VERSION key
(UnorderedKey)
[warning] 10-10: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 10-10: [UnorderedKey] The CITIZEN_VERSION key should go before the MEDIAWIKI_BRANCH key
(UnorderedKey)
[warning] 12-12: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 17-17: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The SECRET_KEY key should go before the UPGRADE_KEY key
(UnorderedKey)
[warning] 22-22: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 26-26: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 28-28: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 28-28: [UnorderedKey] The DB_NAME key should go before the DB_SERVER key
(UnorderedKey)
[warning] 30-30: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 32-32: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 32-32: [UnorderedKey] The DB_PASSWORD key should go before the DB_SERVER key
(UnorderedKey)
[warning] 34-34: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 38-38: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 40-40: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 40-40: [UnorderedKey] The SMTP_DOMAIN key should go before the SMTP_HOST key
(UnorderedKey)
[warning] 42-42: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 44-44: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 46-46: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 46-46: [UnorderedKey] The SMTP_PASSWORD key should go before the SMTP_PORT key
(UnorderedKey)
[warning] 48-48: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 48-48: [UnorderedKey] The EMERGENCY_EMAIL key should go before the SMTP_DOMAIN key
(UnorderedKey)
[warning] 50-50: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 54-54: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 56-56: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 56-56: [UnorderedKey] The TURNSTILE_SECRET_KEY key should go before the TURNSTILE_SITE_KEY key
(UnorderedKey)
[warning] 58-58: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 62-62: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 64-64: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 66-66: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 70-70: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 72-72: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 72-72: [UnorderedKey] The S3_ACCESS_KEY_ID key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 74-74: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 76-76: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 76-76: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 78-78: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 78-78: [UnorderedKey] The S3_BUCKET_DOMAIN key should go before the S3_BUCKET_NAME key
(UnorderedKey)
[warning] 80-80: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 84-84: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 86-86: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 88-88: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 88-88: [UnorderedKey] The DISCORD_WEBHOOK_URL key should go before the SITENAME key
(UnorderedKey)
[warning] 90-90: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 90-90: [UnorderedKey] The OPENSEARCH_INITIAL_ADMIN_PASSWORD key should go before the SITENAME key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Seer Code Review
🔇 Additional comments (8)
.env.example (1)
1-90: Well-organized environment template. The file is logically organized by purpose (Build Arguments, Core Secrets, DB, SMTP, etc.) with clear documentation comments. The static analysis warnings about unordered keys and quote characters in comments are false positives: logical grouping is preferable to alphabetical ordering for developer ergonomics, and the quoted examples in comments aid comprehension.
just/docker.just (1)
54-73: The update recipe is well-designed with proper safeguards. Good use of:
- Destructive warning before proceeding
- Sudo directory check
- Environment detection for correct template selection
- Clean stop before pull to avoid conflicts
One consideration: `git pull` on Line 68 pulls from the current branch's upstream. If someone is on a feature branch, this may not be the intended behavior. This is likely acceptable for production/staging deployments but worth documenting.

just/opensearch.just (1)
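To make that behavior explicit, a guard along these lines could run before the pull. This is a sketch, not code from the PR; the `main` deployment branch and the function names are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: refuse to update when the checkout is not on the expected
# deployment branch, and pull from an explicit remote/branch.
set -euo pipefail

DEPLOY_BRANCH="${DEPLOY_BRANCH:-main}"

current_branch() {
    git symbolic-ref --short HEAD
}

guard_deploy_branch() {
    local branch
    branch=$(current_branch)
    if [ "$branch" != "$DEPLOY_BRANCH" ]; then
        echo "Refusing to update: on '$branch', expected '$DEPLOY_BRANCH'" >&2
        return 1
    fi
    # Explicit target instead of a bare 'git pull':
    # git pull origin "$DEPLOY_BRANCH"
}
```

Making the remote and branch explicit means a feature-branch checkout fails fast instead of silently pulling that branch's upstream.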
1-36: Well-structured OpenSearch maintenance commands. The recipes follow a consistent pattern with proper pre-flight checks. The two-pass reindex approach on Lines 17-18 (`--skipLinks --indexOnSkip` followed by `--skipParse`) is the correct CirrusSearch pattern for full reindexing.

Justfile (1)
1-10: Clean modular Justfile organization. Good separation of concerns with imports for docker, base, extra, init, help, and opensearch modules. The default recipe delegating to help is user-friendly for discoverability.
Dockerfile (2)
14-32: Good use of php-extension-installer for cleaner extension management. Using `mlocati/php-extension-installer` significantly simplifies PHP extension installation compared to manual `docker-php-ext-install`. The cache mount for extensions is also a nice touch for faster rebuilds.
140-144: Confirm whether python3 and unzip are required at runtime. The `install_extensions.py` script runs in the mediawiki build stage (line 92), not the final runtime image. However, `python3` is installed in both the mediawiki stage and the final stage (line 142), suggesting it may be unnecessarily duplicated. Similarly, `unzip` (line 144) appears only in the final stage with no visible build-time usage. Verify whether these packages are needed for runtime maintenance scripts or other runtime processes. If not required, remove them from the final stage's apk installation to reduce image size.
compose.production.yaml.example (2)
45-46: Verify production port exposure is intentional. Nginx is exposed on port 3000, which is atypical for production. If this is meant to sit behind an external reverse proxy (e.g., Cloudflare, Traefik), this is fine. Otherwise, consider binding to standard HTTP/HTTPS ports or documenting the intended deployment architecture.
1-31: Production compose structure is appropriate. Good separation of concerns with the external database (not included, likely managed separately) and the core services needed for MediaWiki operation. Health checks and dependencies are properly configured.
```yaml
opensearch:
  image: opensearchproject/opensearch:1.3.20
  container_name: staging-wiki-opensearch
  environment:
    - discovery.type=single-node
    - cluster.name=mediwiki-cluster
    - node.name=mediwiki-node
```
Incorrect container name and typos in the OpenSearch configuration.
The OpenSearch service has several issues:
- Line 123: Container name is `staging-wiki-opensearch`, but this is the local compose file; it should be `local-wiki-opensearch` for consistency with other services.
- Lines 126-127: Typo in cluster/node names: `mediwiki` should be `mediawiki`.
🔧 Suggested fix

```diff
 opensearch:
   image: opensearchproject/opensearch:1.3.20
-  container_name: staging-wiki-opensearch
+  container_name: local-wiki-opensearch
   environment:
     - discovery.type=single-node
-    - cluster.name=mediwiki-cluster
-    - node.name=mediwiki-node
+    - cluster.name=mediawiki-cluster
+    - node.name=mediawiki-node
     - bootstrap.memory_lock=true
```

Committable suggestion skipped: line range outside the PR's diff.
compose.production.yaml.example
Outdated
```yaml
opensearch:
  image: opensearchproject/opensearch:1.3.20
  container_name: staging-wiki-opensearch
  environment:
    - discovery.type=single-node
    - cluster.name=mediwiki-cluster
    - node.name=mediwiki-node
    - bootstrap.memory_lock=true
    - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
```
Same issues as the local compose: incorrect container name and typos.
- Line 77: Container name is `staging-wiki-opensearch`, but this is the production compose; it should be `wiki-opensearch` (matching the `wiki-` prefix pattern used for other production services).
- Lines 80-81: Same `mediwiki` typo as in the local compose.
🔧 Suggested fix

```diff
 opensearch:
   image: opensearchproject/opensearch:1.3.20
-  container_name: staging-wiki-opensearch
+  container_name: wiki-opensearch
   environment:
     - discovery.type=single-node
-    - cluster.name=mediwiki-cluster
-    - node.name=mediwiki-node
+    - cluster.name=mediawiki-cluster
+    - node.name=mediawiki-node
     - bootstrap.memory_lock=true
```
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @just/base.just:
- Around line 93-112: The interactive prompt in the _destructive-warning
function uses read -p without verifying a TTY, which can hang in non-interactive
environments; add a TTY check (e.g., test -t 0 or tty -s) before prompting and
if stdin is not a TTY, print a clear message and exit non-zero (or skip
prompting) so CI/pipes won’t hang; update the block around the read invocation
and use the existing REPLY/REPLY_LOWER flow when the TTY check passes.
- Around line 52-69: The _copy-file recipe shows the destructive warning before
checking the source exists; change the order in the _copy-file recipe so you
verify SRC exists first (the if [ ! -f "$SRC" ] block) and exit with the same
error when missing, then only call just _destructive-warning "You are about to
overwrite {{dst}}..." and proceed to cp -v "$SRC" "$DST"; keep SRC, DST variable
assignments and the existing echo messages intact.
- Around line 80-91: The _check-sudo-dir recipe uses GNU-only stat (-c '%U')
which breaks on macOS/BSD; replace the stat invocation with a portable fallback
(e.g., try GNU stat then BSD stat, or use ls -ld . | awk '{print $3}') to
populate OWNER, and make the target directory explicit (use justfile_directory()
or $PWD instead of ambiguous '.') so the check runs against the intended
directory; update references to OWNER and the stat call within _check-sudo-dir
accordingly.
- Around line 14-34: The current _detect-env recipe extracts the compose.yaml
name field with a brittle grep/cut/tr pipeline that removes all spaces and fails
on quotes, indentation, or comments; update _detect-env to parse compose.yaml
robustly by using yq to read the top-level name (e.g., name=$(yq e '.name'
"$JUSTFILE_DIR/compose.yaml")) or, if yq is unavailable, replace the grep/cut/tr
logic around the name variable with a more resilient extraction that trims
whitespace only and preserves quoted values and inline comments (avoid tr -d '
'); keep the existing environment-matching checks (the comparisons against
"local-wiki"/"staging-wiki"/"production-wiki") and retain the just _verify-file
compose.yaml validation and error handling.
🧹 Nitpick comments (1)
wiki/composer.local.json (1)
14-16: Consider ordering Elastica before CirrusSearch for clarity. While composer's merge-plugin handles dependencies regardless of order, listing `extensions/Elastica/composer.json` before `extensions/CirrusSearch/composer.json` would reflect their dependency relationship more clearly, since CirrusSearch depends on Elastica.

📋 Suggested reordering

```diff
     "extensions/AbuseFilter/composer.json",
+    "extensions/Elastica/composer.json",
-    "extensions/CirrusSearch/composer.json",
-    "extensions/Elastica/composer.json"
+    "extensions/CirrusSearch/composer.json"
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- just/base.just
- wiki/composer.local.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Seer Code Review
- GitHub Check: docker-build
🔇 Additional comments (3)
just/base.just (3)
1-12: LGTM! The file verification logic is sound with proper error handling and clear error messages.
36-51: LGTM! Container checking logic is correct, including proper template escaping on line 45 for the docker format string.
70-78: LGTM! The privilege check is straightforward and correct.
```just
# Detect the environment (local/staging/production) from compose.yaml name field
_detect-env:
    #!/usr/bin/env bash
    set -euo pipefail
    JUSTFILE_DIR="{{justfile_directory()}}"
    just _verify-file compose.yaml >/dev/null 2>&1

    # Extract name field from compose.yaml
    name=$(grep -m 1 "^name:" "$JUSTFILE_DIR/compose.yaml" | cut -d':' -f2 | tr -d ' ')

    # Match name pattern to determine environment
    if [[ "$name" == *"local-wiki"* ]]; then
        echo "local"
    elif [[ "$name" == *"staging-wiki"* ]]; then
        echo "staging"
    elif [[ "$name" == *"production-wiki"* ]]; then
        echo "production"
    else
        echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mCould not detect environment from compose.yaml\033[0m\n"
        exit 1
    fi
```
Fragile YAML parsing could break with valid formatting variations.
Line 22 uses a brittle grep/cut/tr pipeline that assumes specific YAML formatting. tr -d ' ' removes all spaces (not just whitespace trimming), and the approach doesn't handle quoted values, varying indentation, or inline comments.
🔧 Recommended: Use yq for robust YAML parsing

```diff
-# Extract name field from compose.yaml
-name=$(grep -m 1 "^name:" "$JUSTFILE_DIR/compose.yaml" | cut -d':' -f2 | tr -d ' ')
+# Extract name field from compose.yaml using yq
+name=$(yq eval '.name' "$JUSTFILE_DIR/compose.yaml" 2>/dev/null || echo "")
+
+if [ -z "$name" ]; then
+    echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mCould not read 'name' field from compose.yaml\033[0m\n" >&2
+    exit 1
+fi
```

If yq is unavailable, at minimum use proper trimming:

```diff
-name=$(grep -m 1 "^name:" "$JUSTFILE_DIR/compose.yaml" | cut -d':' -f2 | tr -d ' ')
+name=$(grep -m 1 "^name:" "$JUSTFILE_DIR/compose.yaml" | cut -d':' -f2 | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
```
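For completeness, here is a sketch of what a fuller fallback extractor might look like when `yq` is unavailable. The helper name and the quote/comment handling are illustrative assumptions, not code from this PR (and a `#` inside a quoted value would still defeat the comment stripping):

```shell
#!/usr/bin/env bash
# Sketch: extract the top-level "name:" value from a compose file, trimming
# surrounding whitespace, stripping an inline comment, and unquoting,
# instead of deleting every space with tr.
set -euo pipefail

extract_compose_name() {
    local file="$1"
    grep -m 1 '^name:' "$file" \
        | cut -d':' -f2- \
        | sed -e 's/[[:space:]]#.*$//' \
              -e 's/^[[:space:]]*//;s/[[:space:]]*$//' \
              -e 's/^"\(.*\)"$/\1/' \
              -e "s/^'\(.*\)'\$/\1/"
}

# Example (hypothetical file): a quoted value with an inline comment
tmp=$(mktemp)
printf 'name: "local-wiki"  # compose project name\nservices: {}\n' > "$tmp"
extract_compose_name "$tmp"   # prints: local-wiki
rm -f "$tmp"
```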
```just
# Copy a file from source to destination with verification
_copy-file src dst:
    #!/usr/bin/env bash
    set -euo pipefail

    SRC="{{src}}"
    DST="{{dst}}"
    just _destructive-warning "You are about to overwrite {{dst}} which may overwrite existing data in that file."

    echo "Copying file: $SRC → $DST"
    if [ ! -f "$SRC" ]; then
        echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mSource '$SRC' not found\033[0m\n" >&2
        exit 1
    fi

    cp -v "$SRC" "$DST"
    echo "Successfully copied $SRC to $DST"
```
Check source file existence before prompting the user.
The destructive warning (line 59) is shown before verifying the source file exists (lines 62-65). If the source is missing, users are prompted unnecessarily and then shown an error.
♻️ Reorder validation before the warning

```diff
 SRC="{{src}}"
 DST="{{dst}}"
-just _destructive-warning "You are about to overwrite {{dst}} which may overwrite existing data in that file."
-echo "Copying file: $SRC → $DST"
 if [ ! -f "$SRC" ]; then
     echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mSource '$SRC' not found\033[0m\n" >&2
     exit 1
 fi
+
+just _destructive-warning "You are about to overwrite {{dst}} which may overwrite existing data in that file."
+echo "Copying file: $SRC → $DST"
 cp -v "$SRC" "$DST"
 echo "Successfully copied $SRC to $DST"
```
```just
# Check if current directory is owned by root and require sudo if so
_check-sudo-dir:
    #!/usr/bin/env bash
    set -euo pipefail

    # Get directory owner
    OWNER=$(stat -c '%U' .)

    if [ "$OWNER" = "root" ] && [ "$EUID" -ne 0 ]; then
        echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mCurrent directory is owned by root. Please run with 'sudo'.\033[0m\n"
        exit 1
    fi
```
Platform-specific stat command breaks portability.
Line 86 uses stat -c '%U', which is Linux-specific (GNU coreutils). On macOS/BSD, the syntax is stat -f '%Su', causing this check to fail on non-Linux systems.
Additionally, checking . (current directory) in a Justfile context may be ambiguous—clarify whether you intend to check the user's working directory or the justfile_directory().
🔧 Portable stat alternative

```diff
 _check-sudo-dir:
     #!/usr/bin/env bash
     set -euo pipefail
-    # Get directory owner
-    OWNER=$(stat -c '%U' .)
+    # Get directory owner (portable across Linux and macOS)
+    if [[ "$OSTYPE" == "darwin"* ]]; then
+        OWNER=$(stat -f '%Su' .)
+    else
+        OWNER=$(stat -c '%U' .)
+    fi

     if [ "$OWNER" = "root" ] && [ "$EUID" -ne 0 ]; then
         echo -e "\n\033[1;37m\033[41m ERROR \033[0m \033[1;31mCurrent directory is owned by root. Please run with 'sudo'.\033[0m\n"
         exit 1
     fi
```

Alternatively, use a fully portable approach with ls:

```shell
OWNER=$(ls -ld . | awk '{print $3}')
```
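Putting both ideas together, a hypothetical helper might try GNU `stat`, then BSD `stat`, then fall back to `ls`. The function name and the explicit-directory argument are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch: resolve a directory's owner across Linux (GNU stat -c) and
# macOS/BSD (stat -f), with an ls/awk fallback; takes the directory
# explicitly instead of assuming ".".
set -euo pipefail

dir_owner() {
    local dir="$1"
    stat -c '%U' "$dir" 2>/dev/null \
        || stat -f '%Su' "$dir" 2>/dev/null \
        || ls -ld "$dir" | awk '{print $3}'
}

# Hypothetical use inside the recipe:
#   OWNER=$(dir_owner "{{justfile_directory()}}")
```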
```just
# Display a warning for destructive actions and prompt for confirmation
_destructive-warning reason="":
    #!/usr/bin/env bash
    set -euo pipefail

    echo -e "\n\033[1;37m\033[41m WARNING \033[0m \033[1;33mThis action is destructive. Proceed with caution.\033[0m"

    if [ -n "{{reason}}" ]; then
        echo -e "  \033[3;38;5;208m{{reason}}\033[0m"
    fi

    echo -e "\033[1;33mPlease check the help command or documentation for this script before proceeding to understand the consequences.\033[0m\n"
    read -p "Do you want to continue? (y/n): " -r REPLY

    # Normalize to lowercase for comparison
    REPLY_LOWER=$(echo "$REPLY" | tr '[:upper:]' '[:lower:]')
    if [[ ! "$REPLY_LOWER" =~ ^(y|yes)$ ]]; then
        echo "Aborting."
        exit 1
    fi
```
Add TTY check before interactive prompt.
Line 105 uses read -p without checking if stdin is a terminal. This will fail or hang in non-interactive contexts (CI pipelines, piped input, cron jobs, etc.).
🔧 Guard interactive prompt with TTY check

```diff
 _destructive-warning reason="":
     #!/usr/bin/env bash
     set -euo pipefail
     echo -e "\n\033[1;37m\033[41m WARNING \033[0m \033[1;33mThis action is destructive. Proceed with caution.\033[0m"
     if [ -n "{{reason}}" ]; then
         echo -e "  \033[3;38;5;208m{{reason}}\033[0m"
     fi
     echo -e "\033[1;33mPlease check the help command or documentation for this script before proceeding to understand the consequences.\033[0m\n"
+
+    # Check if running in an interactive terminal
+    if [ ! -t 0 ]; then
+        echo -e "\033[1;31mNon-interactive session detected. Cannot prompt for confirmation.\033[0m"
+        exit 1
+    fi
+
     read -p "Do you want to continue? (y/n): " -r REPLY
     # Normalize to lowercase for comparison
     REPLY_LOWER=$(echo "$REPLY" | tr '[:upper:]' '[:lower:]')
     if [[ ! "$REPLY_LOWER" =~ ^(y|yes)$ ]]; then
         echo "Aborting."
         exit 1
     fi
```

🤖 Prompt for AI Agents
In @just/base.just around lines 93 - 112, The interactive prompt in the
_destructive-warning function uses read -p without verifying a TTY, which can
hang in non-interactive environments; add a TTY check (e.g., test -t 0 or tty
-s) before prompting and if stdin is not a TTY, print a clear message and exit
non-zero (or skip prompting) so CI/pipes won’t hang; update the block around the
read invocation and use the existing REPLY/REPLY_LOWER flow when the TTY check
passes.
```php
$wgFileBackends['s3'] = [
    'class' => 'AmazonS3FileBackend',
    'bucket' => $wgAWSBucketName,
    'region' => 'auto',
    'endpoint' => $s3Endpoint,
    'use_path_style_endpoint' => true,
];
```
Bug: The S3 file backend is defined but never activated because $wgLocalFileRepo is not configured to use it. Files will be stored locally instead of in S3.
Severity: CRITICAL
🔍 Detailed Analysis
The configuration for the AWS extension defines an S3 file backend by setting $wgFileBackends['s3']. However, it fails to set $wgLocalFileRepo to point to this 's3' backend. Without this crucial step, MediaWiki defaults to its local filesystem for file storage. As a result, all uploaded files will be stored on the local server instead of the intended S3 bucket, defeating the purpose of the S3 integration.
💡 Suggested Fix
To activate the S3 backend for file storage, explicitly configure $wgLocalFileRepo to use the 's3' backend definition. Add the line $wgLocalFileRepo = $wgFileBackends['s3']; after the $wgFileBackends['s3'] block.
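Applied to the backend definition in question, the suggested fix would read (whether the backend array can be assigned directly, or whether MediaWiki's `$wgLocalFileRepo` needs a `LocalRepo`-style array referencing the backend by name, should be verified against the AWS extension's documentation):

```php
$wgFileBackends['s3'] = [
    'class' => 'AmazonS3FileBackend',
    'bucket' => $wgAWSBucketName,
    'region' => 'auto',
    'endpoint' => $s3Endpoint,
    'use_path_style_endpoint' => true,
];

// Suggested addition: route the local file repo through the S3 backend
$wgLocalFileRepo = $wgFileBackends['s3'];
```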
🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: wiki/configs/51-Extensions_Config.php#L87-L93
Potential issue: The configuration for the AWS extension defines an S3 file backend by
setting `$wgFileBackends['s3']`. However, it fails to set `$wgLocalFileRepo` to point to
this 's3' backend. Without this crucial step, MediaWiki defaults to its local filesystem
for file storage. As a result, all uploaded files will be stored on the local server
instead of the intended S3 bucket, defeating the purpose of the S3 integration.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In @compose.local.yaml.example:
- Around line 4-34: The mediawiki service healthcheck currently runs "php-fpm
-t" which only validates config; update the mediawiki healthcheck block so it
verifies php-fpm is actually accepting connections (e.g., check that a listener
exists on 127.0.0.1:9000 or the FPM socket) rather than just configuration
syntax—replace or augment the existing php-fpm -t test with a readiness probe
that first ensures php-fpm config is OK and then confirms a successful
TCP/socket connection (using tools like nc/ss) to the FPM socket/port so
downstream services get a true “ready” signal.
- Around line 35-59: The nginx service healthcheck uses "curl" but the
nginx:stable-alpine image does not include curl, so the healthcheck will fail;
fix by either (A) building a custom nginx image from nginx:stable-alpine that
installs curl/wget (e.g., add apk add --no-cache curl) and update the nginx
service to use that custom image name, or (B) change the nginx service
healthcheck to a shell-based check that uses only /bin/sh (e.g., test network
socket or ps) so no extra packages are required, or (C) document in the README
that a custom image with curl/wget must be provided before using the compose
file; update the nginx service block (healthcheck and image) and README
accordingly.
- Around line 78-99: Update the mariadb service to pin the image to a specific
version tag (e.g., change image: mariadb:latest to a fixed tag like
mariadb:11.4) and stop reusing the same secret for root and app access by
introducing and using a separate environment variable for the root password (use
MARIADB_ROOT_PASSWORD for the root credential and MARIADB_PASSWORD for the
non-root user credential), and ensure the .env.example/.env.local.example is
updated to include distinct entries for MARIADB_ROOT_PASSWORD and
MARIADB_PASSWORD so root and app credentials are not identical.
- Around line 100-120: The MinIO service configuration uses an unstable image
tag, a healthcheck that relies on curl (not present in the official image), and
makes the service unreachable from host by using expose; update the minio
service: pin image to a specific release (replace image: minio/minio:latest with
a fixed RELEASE... tag), remove or relocate the healthcheck block (delete the
healthcheck that uses curl or move the probe to an external sidecar/host
container such as minio/mc), and replace expose: ['9000'] with ports:
["9000:9000"] so the container is accessible from the host.
In @compose.staging.yaml.example:
- Around line 33-56: The nginx service healthcheck uses curl but
nginx:stable-alpine does not include curl; create a custom image (build FROM
nginx:stable-alpine and install curl) or switch the nginx image to a base that
includes curl, then update the nginx service's image reference so the existing
healthcheck (test: ["CMD","curl","-f","http://localhost/"]) will succeed; ensure
the custom image is built/tagged and the compose file's nginx image value is
changed accordingly.
- Around line 75-104: The OpenSearch service healthcheck is using HTTPS with
credentials while plugins.security.disabled=true (no TLS/auth), so update the
healthcheck block (healthcheck.test) in compose.staging.yaml.example (and the
equivalent in compose.local.yaml.example) to use a plain HTTP probe to
http://localhost:9200/_cluster/health without -u/HTTPS/credentials (so remove
OPENSEARCH_INITIAL_ADMIN_PASSWORD usage and -k/-u flags) and keep the existing
curl failure handling (curl -f ... || exit 1) to correctly detect readiness.
🧹 Nitpick comments (2)
.env.example (1)
1-88: Linter warnings are mostly style preferences; current organization is sound.

The dotenv-linter warnings are either false positives or reflect style choices that conflict with your architecture:

- QuoteCharacter warnings: False positives. An empty `=""` is correct .env template syntax.
- UnorderedKey warnings: Keys are grouped by functional area (Build Args, Secrets, DB, SMTP, Turnstile, OpenID, S3, Misc), which is more maintainable than strict alphabetical ordering. This aligns with the modular, feature-driven design in the PR.
- ExtraBlankLine warnings: The blank lines between sections improve readability and logical organization. This is intentional and beneficial.

The file is well-structured with clear, descriptive comments and consistent naming conventions across the new integrations.

If strict dotenv-linter compliance is desired (e.g., for CI/linting passes), consider either:

- Creating a `.dotenv-linter.toml` to suppress these warnings or relax these rules.
- Running dotenv-linter with a `--skip` filter to ignore the `UnorderedKey`, `ExtraBlankLine`, and `QuoteCharacter` checks that conflict with your readability goals.

Verify that all environment variables referenced in the configuration files (e.g., `wiki/configs/51-Extensions_Config.php`, `.env.local.example`) are documented here. Run a search across config files to catch any missing or undocumented variables.

compose.staging.yaml.example (1)

4-32: `depends_on` condition/health semantics: consider depending on Valkey health, not just "started".

If Valkey startup or AOF replay is slow, "started" can still mean "not ready". Aligning on health can reduce flaky bring-up.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .env.example
- compose.local.yaml.example
- compose.production.yaml.example
- compose.staging.yaml.example
🚧 Files skipped from review as they are similar to previous changes (1)
- compose.production.yaml.example
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 4-4: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 6-6: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 8-8: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 8-8: [UnorderedKey] The MEDIAWIKI_BRANCH key should go before the MEDIAWIKI_MAJOR_VERSION key
(UnorderedKey)
[warning] 10-10: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 10-10: [UnorderedKey] The CITIZEN_VERSION key should go before the MEDIAWIKI_BRANCH key
(UnorderedKey)
[warning] 12-12: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 17-17: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The SECRET_KEY key should go before the UPGRADE_KEY key
(UnorderedKey)
[warning] 22-22: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 26-26: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 28-28: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 28-28: [UnorderedKey] The DB_NAME key should go before the DB_SERVER key
(UnorderedKey)
[warning] 30-30: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 32-32: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 32-32: [UnorderedKey] The DB_PASSWORD key should go before the DB_SERVER key
(UnorderedKey)
[warning] 34-34: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 38-38: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 40-40: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 40-40: [UnorderedKey] The SMTP_DOMAIN key should go before the SMTP_HOST key
(UnorderedKey)
[warning] 42-42: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 44-44: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 46-46: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 46-46: [UnorderedKey] The SMTP_PASSWORD key should go before the SMTP_PORT key
(UnorderedKey)
[warning] 48-48: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 48-48: [UnorderedKey] The EMERGENCY_EMAIL key should go before the SMTP_DOMAIN key
(UnorderedKey)
[warning] 50-50: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 54-54: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 56-56: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 56-56: [UnorderedKey] The TURNSTILE_SECRET_KEY key should go before the TURNSTILE_SITE_KEY key
(UnorderedKey)
[warning] 58-58: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 62-62: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 64-64: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 66-66: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 70-70: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 72-72: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 72-72: [UnorderedKey] The S3_ACCESS_KEY_ID key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 74-74: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 76-76: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 76-76: [UnorderedKey] The S3_BUCKET_NAME key should go before the S3_ENDPOINT key
(UnorderedKey)
[warning] 78-78: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 78-78: [UnorderedKey] The S3_BUCKET_DOMAIN key should go before the S3_BUCKET_NAME key
(UnorderedKey)
[warning] 80-80: [ExtraBlankLine] Extra blank line detected
(ExtraBlankLine)
[warning] 84-84: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 86-86: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 88-88: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 88-88: [UnorderedKey] The DISCORD_WEBHOOK_URL key should go before the SITENAME key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Seer Code Review
🔇 Additional comments (5)
compose.local.yaml.example (3)
1-1: Consider avoiding hard-coding `name:` if you need broader Compose compatibility.

Top-level `name:` and `depends_on.condition` behavior varies across Compose implementations; if Just targets Docker Compose v2 this is fine, but if anyone uses legacy `docker-compose`, this may break. If compatibility matters, prefer setting the project name via `--project-name`/`COMPOSE_PROJECT_NAME` in the Just scripts.
60-77: Valkey service looks fine for local dev.
151-161: Networks/volumes section looks good.

compose.staging.yaml.example (2)
1-1: Same Compose compatibility note for `name:` applies here.
105-115: Networks/volumes section looks good.
```yaml
  mediawiki:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        MEDIAWIKI_MAJOR_VERSION: ${MEDIAWIKI_MAJOR_VERSION}
        MEDIAWIKI_VERSION: ${MEDIAWIKI_VERSION}
        MEDIAWIKI_BRANCH: ${MEDIAWIKI_BRANCH}
        CITIZEN_VERSION: ${CITIZEN_VERSION}
    container_name: local-wiki-mediawiki
    depends_on:
      valkey:
        condition: service_started
      mariadb:
        condition: service_healthy
    volumes:
      - wiki-webroot:/var/www/wiki
      - .env:/var/www/wiki/.env:ro
    networks:
      - wiki-network
    restart: unless-stopped
    healthcheck:
      test: [ "CMD", "php-fpm", "-t" ]
      interval: 30s
      timeout: 10s
      start_period: 60s
      retries: 3
    user: "1000:1000"
    expose:
      - '9000'
```
MediaWiki healthcheck likely doesn’t validate “ready” state; consider a real readiness probe.
php-fpm -t validates config but doesn’t guarantee FPM is accepting connections. If nginx depends on MediaWiki being “healthy”, you’ll get false positives.
Proposed adjustment (socket/ping style readiness)

```diff
 services:
   mediawiki:
@@
     healthcheck:
-      test: [ "CMD", "php-fpm", "-t" ]
+      # Prefer an actual readiness check (adjust path/socket to your image config)
+      test: [ "CMD-SHELL", "php-fpm -t && (cgi-fcgi -bind -connect 127.0.0.1:9000 >/dev/null 2>&1)" ]
       interval: 30s
       timeout: 10s
       start_period: 60s
       retries: 3
```
🤖 Prompt for AI Agents
In @compose.local.yaml.example around lines 4 - 34, The mediawiki service
healthcheck currently runs "php-fpm -t" which only validates config; update the
mediawiki healthcheck block so it verifies php-fpm is actually accepting
connections (e.g., check that a listener exists on 127.0.0.1:9000 or the FPM
socket) rather than just configuration syntax—replace or augment the existing
php-fpm -t test with a readiness probe that first ensures php-fpm config is OK
and then confirms a successful TCP/socket connection (using tools like nc/ss) to
the FPM socket/port so downstream services get a true “ready” signal.
```yaml
  nginx:
    image: nginx:stable-alpine
    container_name: local-wiki-nginx
    depends_on:
      mediawiki:
        condition: service_healthy
    environment:
      NGINX_SERVER_NAME: ${SITENAME}
      NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx/conf.d
    volumes:
      - wiki-webroot:/var/www/wiki:ro
      - ./wiki/mediawiki.conf:/etc/nginx/templates/mediawiki.conf.template:ro
      - ./wiki/default.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - '3000:80'
    networks:
      - wiki-network
    restart: unless-stopped
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost/" ]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3
```
❓ Verification inconclusive
nginx:stable-alpine curl available tools
Short answer: nginx:stable-alpine is a minimal Alpine-based image and does not include curl by default. Install it with apk.
Commands:
- One-off container: `docker run --rm -it nginx:stable-alpine sh -c "apk add --no-cache curl && curl --version"`. [1][2]
- Dockerfile:

  ```dockerfile
  FROM nginx:stable-alpine
  RUN apk add --no-cache curl
  ```
Notes: Alpine-based nginx images intentionally omit many user tools (bash, git, curl, etc.) to stay small — add what you need via apk. [1][3]
nginx stable-alpine docker image contents
The official nginx:stable-alpine image is a minimal Alpine‑based build that includes the nginx server and the usual nginx filesystem layout, but not many extra user tools. Key contents and characteristics:
- Based on Alpine Linux (small musl-based base). [1][2]
- Installs the nginx binary and modules; config files under /etc/nginx. [1][2]
- Default web root/static files under /usr/share/nginx/html (default index.html) and default conf.d/*.conf. [1][2]
- Common runtime packages included (installed via apk): openssl/ca-certificates, pcre, zlib (for SSL, PCRE, compression support). [1][3]
- Small image size (Alpine variant is much smaller than Debian-based images) and usually excludes interactive tooling like bash/git by design — add what you need via apk in your Dockerfile. [2]
- Variant notes: there are also -perl and -slim Alpine variants if you need the perl module or slimmer builds. [2]
Sources: official nginx dockerfiles and docs (nginx/docker-nginx), Docker Hub nginx image page, and NGINX Docker docs. [1][2][3]
References
- nginx/docker-nginx (official Dockerfiles). [GitHub].
- nginx — Official Image (Docker Hub).
- NGINX Docker documentation / Alpine Dockerfile snippets.
nginx:stable-alpine healthcheck curl wget
Short answer: use curl -f or wget --spider in a HEALTHCHECK, and ensure the tool exists in nginx:stable-alpine (install with apk if needed).
Examples:

- Dockerfile (install wget, use wget): [1][3]

  ```dockerfile
  FROM nginx:stable-alpine
  RUN apk add --no-cache wget
  HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD wget --spider --quiet --tries=1 --timeout=5 http://localhost/ || exit 1
  ```

- Dockerfile (install curl, use curl): [2][3]

  ```dockerfile
  FROM nginx:stable-alpine
  RUN apk add --no-cache curl
  HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD curl -f http://localhost/ || exit 1
  ```

- docker-compose (no install; only works if the image already has curl/wget): [2]

  ```yaml
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost/"]
    interval: 30s
    timeout: 10s
    retries: 3
  ```
Notes:
- Alpine-based nginx images may not include curl/wget by default — explicitly install with apk to avoid “not found” failures. [2][4]
- Use curl -f to fail on non-2xx, or wget --spider (with --tries=1) as the curl replacement on Alpine. [1][3][8]
Sources: examples and guidance from online Docker/Nginx healthcheck docs and community posts. [1][2][3][4]
nginx healthcheck uses curl, which is missing from nginx:stable-alpine.
The nginx:stable-alpine image is a minimal Alpine-based build that does not include curl by default. This healthcheck will fail with a "command not found" error and block the stack startup.
The proposed fix using wget has the same problem—wget is also not included in nginx:stable-alpine by default. Both tools require explicit installation via apk add --no-cache in a custom Dockerfile.
To resolve, either:

- Build a custom nginx image that installs curl or wget:

  ```dockerfile
  FROM nginx:stable-alpine
  RUN apk add --no-cache curl
  ```

  Then update the service to reference the custom image.
- Document in the README that a custom image is required before using this compose file.
- Use a shell-based healthcheck that relies only on `sh` (e.g., checking the nginx process or listening port without external tools).
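One sketch of that last option, assuming the image's BusyBox shell provides `nc` (worth verifying against the exact tag):

```yaml
healthcheck:
  # No extra packages: BusyBox nc checks that nginx is listening on port 80
  test: ["CMD-SHELL", "nc -z 127.0.0.1 80 || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 3
```

Note this only confirms the port is open, not that nginx returns a 2xx response, so it is a weaker probe than the curl-based check.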
🤖 Prompt for AI Agents
In @compose.local.yaml.example around lines 35 - 59, The nginx service
healthcheck uses "curl" but the nginx:stable-alpine image does not include curl,
so the healthcheck will fail; fix by either (A) building a custom nginx image
from nginx:stable-alpine that installs curl/wget (e.g., add apk add --no-cache
curl) and update the nginx service to use that custom image name, or (B) change
the nginx service healthcheck to a shell-based check that uses only /bin/sh
(e.g., test network socket or ps) so no extra packages are required, or (C)
document in the README that a custom image with curl/wget must be provided
before using the compose file; update the nginx service block (healthcheck and
image) and README accordingly.
```yaml
  mariadb:
    image: mariadb:latest
    container_name: local-wiki-mariadb
    environment:
      MARIADB_ROOT_PASSWORD: ${MARIADB_PASSWORD}
      MARIADB_DATABASE: ${MARIADB_NAME}
      MARIADB_USER: ${MARIADB_USER}
      MARIADB_PASSWORD: ${MARIADB_PASSWORD}
    volumes:
      - ./database:/var/lib/mysql
    networks:
      - wiki-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 30s
      timeout: 10s
      start_period: 60s
      retries: 3
    expose:
      - '3306'
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, find the compose.local.yaml.example file
find . -name "compose.local.yaml.example" -o -name "compose.local.yaml" | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 95

🏁 Script executed:

```shell
# List yaml/compose files in the repo
fd -e yaml -e yml | grep -i compose | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 49

🏁 Script executed:

```shell
# Read the compose.local.yaml.example file, focusing on lines 78-99
cat -n ./compose.local.yaml.example | sed -n '75,105p'
```

Repository: allthingslinux/atl-wiki
Length of output: 1058

🏁 Script executed:

```shell
# Look for environment configuration files
fd -e env -e 'env.example' -e 'env.local' | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 49

🏁 Script executed:

```shell
# Check the entire compose file structure to understand context
wc -l ./compose.local.yaml.example
```

Repository: allthingslinux/atl-wiki
Length of output: 99

🏁 Script executed:

```shell
# Look for any documentation about environment setup
fd -i readme -o -i contributing | head -10
```

Repository: allthingslinux/atl-wiki
Length of output: 183

🏁 Script executed:

```shell
# Look for documentation files
find . -maxdepth 2 -type f \( -name "README*" -o -name "SETUP*" -o -name "*.md" \) | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 78

🏁 Script executed:

```shell
# Check for any env configuration or setup scripts
find . -maxdepth 2 -type f \( -name ".env*" -o -name "*env*" \) | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 102

🏁 Script executed:

```shell
# Review the full compose file to see the pattern of credential handling
cat -n ./compose.local.yaml.example | head -40
```

Repository: allthingslinux/atl-wiki
Length of output: 1359

🏁 Script executed:

```shell
# Check the .env.example file to see credential structure
cat -n ./.env.example
```

Repository: allthingslinux/atl-wiki
Length of output: 2989

🏁 Script executed:

```shell
# Check .env.local.example as well
cat -n ./.env.local.example
```

Repository: allthingslinux/atl-wiki
Length of output: 3790

🏁 Script executed:

```shell
# Check other services in compose file for image version patterns
grep -n "image:" ./compose.local.yaml.example | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 249
Pin mariadb image version and separate root vs. user credentials.
The compose file uses mariadb:latest (line 79), which can break between runs due to unpredictable updates. Additionally, both MARIADB_ROOT_PASSWORD and MARIADB_PASSWORD are set to ${MARIADB_PASSWORD} (lines 82 and 85), reusing the same credential for root and app access—unnecessarily exposing root privilege.
Separate the credentials by introducing a MARIADB_ROOT_PASSWORD variable in .env.local.example and pin the mariadb version to a specific release or tag (e.g., 11.4) rather than latest.
Proposed hardening

```diff
   mariadb:
-    image: mariadb:latest
+    image: mariadb:11.4
     container_name: local-wiki-mariadb
     environment:
-      MARIADB_ROOT_PASSWORD: ${MARIADB_PASSWORD}
+      MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD}
       MARIADB_DATABASE: ${MARIADB_NAME}
       MARIADB_USER: ${MARIADB_USER}
       MARIADB_PASSWORD: ${MARIADB_PASSWORD}
```

Committable suggestion skipped: line range outside the PR's diff.
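On the env-file side, the credential split might look like this sketch (all values are placeholders, not taken from this PR):

```env
# Root credential: administrative access only
MARIADB_ROOT_PASSWORD="change-me-root"
# Scoped application credential used by MediaWiki
MARIADB_USER="wiki"
MARIADB_PASSWORD="change-me-app"
```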
🤖 Prompt for AI Agents
In @compose.local.yaml.example around lines 78 - 99, Update the mariadb service
to pin the image to a specific version tag (e.g., change image: mariadb:latest
to a fixed tag like mariadb:11.4) and stop reusing the same secret for root and
app access by introducing and using a separate environment variable for the root
password (use MARIADB_ROOT_PASSWORD for the root credential and MARIADB_PASSWORD
for the non-root user credential), and ensure the
.env.example/.env.local.example is updated to include distinct entries for
MARIADB_ROOT_PASSWORD and MARIADB_PASSWORD so root and app credentials are not
identical.
```yaml
  minio:
    image: minio/minio:latest
    container_name: local-wiki-minio
    environment:
      MINIO_ROOT_USER: ${MINIO_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_PASSWORD}
    volumes:
      - ./images:/data
    networks:
      - wiki-network
    command: server /data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3
    expose:
      - '9000'
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Search for the compose.local.yaml.example file
find . -name "compose.local.yaml.example" -type f
```

Repository: allthingslinux/atl-wiki
Length of output: 95

🏁 Script executed:

```shell
# Also search for any compose files to understand the repo structure
find . -name "compose*.yaml*" -o -name "docker-compose*.yaml*" | head -20
```

Repository: allthingslinux/atl-wiki
Length of output: 160

🏁 Script executed:

```shell
# Check the file size first
wc -l ./compose.local.yaml.example
```

Repository: allthingslinux/atl-wiki
Length of output: 99

🏁 Script executed:

```shell
# Read lines around 100-120 to verify the snippet
sed -n '95,125p' ./compose.local.yaml.example
```

Repository: allthingslinux/atl-wiki
Length of output: 794
🌐 Web query:
MinIO minio/minio latest image curl available healthcheck
💡 Result:
Short answer: The official minio/minio (latest) image no longer includes curl, so a Docker healthcheck that calls curl inside that container will fail. Use the MinIO HTTP health endpoints or an external tool/container (minio/mc or a custom image containing curl) for health checks. [1][2][3]
Details and options:
- The image removal of curl is discussed in the MinIO repo (curl not present -> curl-based healthchecks fail). [1]
- MinIO exposes unauthenticated health endpoints you can probe: /minio/health/live and /minio/health/ready (HTTP 200 = healthy). Use those endpoints from outside the container (host, sidecar) or from a helper image. [2]
- You can also use mc admin commands (minio/mc) to check readiness (e.g., mc admin info) from a sidecar/init container or in your healthcheck container. [3]
Example approaches:
- Run a host/sidecar healthcheck: curl -f http://localhost:9000/minio/health/ready (runs outside the minio container). [2]
- Use a helper container in docker-compose (image: minio/mc) to probe with mc admin info or mc alias + mc ls. [3]
- Build a small custom image FROM minio/minio and add curl/wget if you must run health check inside the same container.
Sources:
[1] MinIO GitHub issue describing curl removal and healthcheck failure.
[2] MinIO docs — Healthcheck API (/minio/health/live, /minio/health/ready).
[3] MinIO docs — mc admin info (useful for programmatic checks).
🌐 Web query:
MinIO docker container default tools curl wget available
💡 Result:
Short answer: The official minio/minio Docker image does NOT include curl or wget by default. Use a helper image (curl/wget), the minio/mc client image, or a distro variant (e.g., Bitnami) if you need those tools. [1][2][3]
Sources:
- Report/issue: curl removed from official MinIO image. [2]
- Community Q/A: curl not available in standard minio container. [1]
- Official MinIO Docker Hub page (official image behavior/usage). [3]
🌐 Web query:
Docker Compose expose vs ports difference local development
💡 Result:
Short answer
- ports: publishes container ports to the host (host:container mapping) so you can reach the service from localhost or other machines. Use when you need host/external access. [1][2]
- expose: only marks (documents) ports as exposed to other containers on the same Docker network (no host publishing). It's metadata; it does not open a host port. Use when you only want inter-service communication. [1][2][3]
Quick examples
- ports (compose):
  web:
    ports:
      - "8080:80"  # host localhost:8080 -> container:80. [1]
- expose (compose):
  db:
    expose:
      - "5432"  # reachable by other containers on the network at container:5432, not from host. [2]
Notes / gotchas
- You can map any container port with ports even if the image doesn't EXPOSE it. EXPOSE (Dockerfile) is mainly documentation and is used by tooling (and -P / publish-all) but doesn’t publish to host by itself. [2][3]
- Published ports are typically bound to 0.0.0.0 on the host; container apps must listen on 0.0.0.0 (not 127.0.0.1) to be reachable through the published host port. [2][3]
Sources
[1] Docker docs — Publishing and exposing ports.
[2] Stack Overflow — difference between ports and expose in docker-compose.
[3] How-To-Geek — exposing vs publishing Docker ports.
MinIO: :latest + broken healthcheck + no host port access.
- minio/minio:latest can introduce breaking changes unexpectedly.
- The curl-based healthcheck will fail; the official MinIO image does not include curl (or wget).
- Using expose: instead of ports: means the service is unreachable from the host, limiting local development access.
To fix:
- Pin the image to a specific release (e.g., minio/minio:RELEASE.2025-01-20T00-00-00Z).
- Replace expose: ['9000'] with ports: ["9000:9000"] for host access.
- Either remove the healthcheck block entirely, use an external sidecar healthcheck container (e.g., minio/mc), or move the health probe to run from the host/another container instead of inside the MinIO container.
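A sketch of one way to apply all three fixes at once; the pinned RELEASE tag is only the placeholder from above (check Docker Hub for a current one), and the mc-based probe assumes the pinned image bundles the mc client, as MinIO's own compose samples do:

```yaml
minio:
  image: minio/minio:RELEASE.2025-01-20T00-00-00Z  # placeholder tag, pin to a real release
  container_name: local-wiki-minio
  environment:
    MINIO_ROOT_USER: ${MINIO_USER}
    MINIO_ROOT_PASSWORD: ${MINIO_PASSWORD}
  volumes:
    - ./images:/data
  networks:
    - wiki-network
  command: server /data
  restart: unless-stopped
  ports:
    - "9000:9000"  # published so the host can reach the S3 API during local dev
  healthcheck:
    test: ["CMD", "mc", "ready", "local"]  # assumes mc ships in the image; avoids curl
    interval: 30s
    timeout: 10s
    start_period: 30s
    retries: 3
```

If the image in use does not include mc, the external sidecar approach from the list above is the fallback.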
🤖 Prompt for AI Agents
In @compose.local.yaml.example around lines 100 - 120, The MinIO service
configuration uses an unstable image tag, a healthcheck that relies on curl (not
present in the official image), and makes the service unreachable from host by
using expose; update the minio service: pin image to a specific release (replace
image: minio/minio:latest with a fixed RELEASE... tag), remove or relocate the
healthcheck block (delete the healthcheck that uses curl or move the probe to an
external sidecar/host container such as minio/mc), and replace expose: ['9000']
with ports: ["9000:9000"] so the container is accessible from the host.
opensearch:
  image: opensearchproject/opensearch:1.3.20
  container_name: staging-wiki-opensearch
  environment:
    - discovery.type=single-node
    - cluster.name=mediwiki-cluster
    - node.name=mediwiki-node
    - bootstrap.memory_lock=true
    - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    - plugins.security.disabled=true
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  volumes:
    - opensearch-data:/usr/share/opensearch/data
  networks:
    - wiki-network
  healthcheck:
    test: ["CMD-SHELL", "curl -k -u admin:${OPENSEARCH_INITIAL_ADMIN_PASSWORD} https://localhost:9200/_cluster/health || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 30s
  expose:
    - '9200'
🧩 Analysis chain
🏁 Script executed:
find . -name "compose*.yaml*" -o -name "compose*.yml*" | head -20
🏁 Script executed:
cat -n compose.local.yaml.example | sed -n '115,155p'
🏁 Script executed:
rg "container_name.*opensearch" -A 5 -B 5
🏁 Script executed:
cat -n compose.staging.yaml.example | grep -A 8 "healthcheck:"
🏁 Script executed:
cat -n compose.production.yaml.example | grep -A 8 "healthcheck:"
🏁 Script executed:
cat -n compose.staging.yaml.example | grep "plugins.security"
🏁 Script executed:
cat -n compose.production.yaml.example | grep "plugins.security"
🌐 Web query:
OpenSearch 1.3 security disabled HTTP HTTPS healthcheck
💡 Result:
Short answer:
- If you disable the Security plugin (opensearch.yml: plugins.security.disabled: true) the plugin’s REST endpoints are removed/disabled; don’t rely on _plugins/_security/health when security is disabled. [1][2]
- For a basic node health probe use the standard OpenSearch cluster REST endpoints (example: GET /_cluster/health or GET /) — these work regardless of the Security plugin being enabled. If you have REST/TLS enabled for the HTTP layer you must call HTTPS and satisfy certificate checks. If REST TLS is not enabled you must use HTTP. [2][3]
Example probes:
- HTTP (no TLS): curl -sS http://HOST:9200/_cluster/health
- HTTPS (TLS enabled): curl -sS --cacert /path/to/ca.pem https://HOST:9200/_cluster/health
References:
- How to disable the Security plugin (plugins.security.disabled). [1]
- Security plugin Health API and cluster health endpoints. [2]
- HTTP/REST TLS configuration for OpenSearch (REST layer TLS optional; transport TLS required). [3]
OpenSearch misconfiguration across all environments: insecure healthcheck + container name collisions.
All three compose files (local, staging, production) have the same critical issues:
- Healthcheck fails with disabled security: the healthcheck uses HTTPS with authentication, but plugins.security.disabled=true disables the Security plugin entirely. Since there's no TLS configuration, OpenSearch expects plain HTTP. The current check will fail and keep the service unhealthy.
- Identical container name in all environments: all three files use container_name: staging-wiki-opensearch, causing collisions when multiple environments run on the same host.
Fix required:
- Replace the HTTPS + auth healthcheck with plain HTTP: curl -sf http://localhost:9200/_cluster/health >/dev/null || exit 1
- Differentiate container names by environment: local-wiki-opensearch, staging-wiki-opensearch, prod-wiki-opensearch
Suggested fix
  opensearch:
    image: opensearchproject/opensearch:1.3.20
-   container_name: staging-wiki-opensearch
+   container_name: local-wiki-opensearch
    environment:
      - discovery.type=single-node
      - cluster.name=mediwiki-cluster
      - node.name=mediwiki-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - plugins.security.disabled=true
    healthcheck:
-     test: ["CMD-SHELL", "curl -k -u admin:${OPENSEARCH_INITIAL_ADMIN_PASSWORD} https://localhost:9200/_cluster/health || exit 1"]
+     test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health >/dev/null || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
Apply the same fixes to compose.staging.yaml.example and compose.production.yaml.example with their respective container names.
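Because the container-name collision spans three files, a quick automated check can catch regressions before deploy. The helper below is a hypothetical sketch (a regex scan rather than a full YAML parse) that flags container_name values appearing in more than one compose file:

```python
import re
from collections import Counter

def find_duplicate_container_names(compose_texts):
    """Return container_name values that occur more than once across
    the given compose file contents (regex scan, not a YAML parse)."""
    names = []
    for text in compose_texts:
        names.extend(re.findall(r"container_name:\s*([\w.-]+)", text))
    return sorted(name for name, count in Counter(names).items() if count > 1)

# Example: local and staging both (incorrectly) reuse the staging name
local_cfg = "services:\n  opensearch:\n    container_name: staging-wiki-opensearch\n"
staging_cfg = "services:\n  opensearch:\n    container_name: staging-wiki-opensearch\n"
print(find_duplicate_container_names([local_cfg, staging_cfg]))
# → ['staging-wiki-opensearch']
```

In CI this could read compose.local.yaml.example, compose.staging.yaml.example, and compose.production.yaml.example and fail the build on any shared name.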
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
opensearch:
  image: opensearchproject/opensearch:1.3.20
  container_name: local-wiki-opensearch
  environment:
    - discovery.type=single-node
    - cluster.name=mediwiki-cluster
    - node.name=mediwiki-node
    - bootstrap.memory_lock=true
    - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    - plugins.security.disabled=true
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  volumes:
    - opensearch-data:/usr/share/opensearch/data
  networks:
    - wiki-network
  healthcheck:
    test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health >/dev/null || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 30s
  expose:
    - '9200'
nginx:
  image: nginx:stable-alpine
  container_name: staging-wiki-nginx
  depends_on:
    mediawiki:
      condition: service_healthy
  environment:
    NGINX_SERVER_NAME: ${SITENAME}
  volumes:
    - wiki-webroot:/var/www/wiki:ro
    - ./wiki/mediawiki.conf:/etc/nginx/templates/mediawiki.conf.template:ro
    - ./wiki/default.conf:/etc/nginx/conf.d/default.conf:ro
  ports:
    - '3001:80'
  networks:
    - wiki-network
  restart: unless-stopped
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost/"]
    interval: 30s
    timeout: 10s
    start_period: 30s
    retries: 3
🧩 Analysis chain
🌐 Web query:
nginx:stable-alpine docker image curl package
💡 Result:
Yes — nginx:stable-alpine is based on Alpine and does not include curl by default. Install curl with apk. Example Dockerfile:
FROM nginx:stable-alpine
RUN apk --no-cache add curl
Or interactively in a running container:
apk --no-cache add curl
References: [1][2][3][4]
Sources
- ShellHacks — Install cURL on Alpine.
- Stack Overflow — examples installing curl in Alpine-based images.
- nginx.org — Alpine prerequisites include curl.
- Alpine Linux wiki — nginx on Alpine / apk usage.
🏁 Script executed:
# Check the nginx:stable-alpine image details from Docker Hub or Alpine package info
curl -s https://api.github.com/repos/nginxinc/docker-nginx/contents/stable/alpine/Dockerfile | jq -r '.content' | base64 -d 2>/dev/null || echo "Direct access failed, will try alternative"
🏁 Script executed:
# Alternative: search for nginx Dockerfile to see base packages
rg "nginx:stable-alpine|curl|wget" --type dockerfile --max-count=10
🏁 Script executed:
# Search for any custom nginx Dockerfile in the repository
fd -t f -e dockerfile -o -e Dockerfile | head -20
🏁 Script executed:
# Check if there are any other compose files or docker-related setup
fd -t f \( -name "compose*.yaml" -o -name "compose*.yml" -o -name "Dockerfile*" \)
🏁 Script executed:
# Search for mentions of nginx image building or customization
rg "nginx:stable-alpine|FROM nginx|apk.*curl" -i
🌐 Web query:
Alpine Linux wget package available nginx:stable-alpine
💡 Result:
Short answer: wget is not guaranteed preinstalled in the official nginx:stable-alpine image — but the image is Alpine-based, so you can add it with apk.
Install command (in Dockerfile or container):
- dockerfile: RUN apk add --no-cache wget
- interactive: docker run --rm -it nginx:stable-alpine sh && apk add --no-cache wget. [1][2]
Sources:
- NGINX Dockerfiles / docs showing Alpine-based images and apk usage. [1]
- Alpine package registry entry for wget (package available in Alpine repositories). [2]
🏁 Script executed:
# Check if there are any other healthcheck approaches or fixes mentioned in the repo
rg "healthcheck|wget|curl" -i -A 2 -B 2
nginx healthcheck will fail: curl is not included in nginx:stable-alpine. This affects the staging, production, and local compose files identically.
Replace with a custom nginx image that installs curl, or switch the probe to a tool present in the base image. The full wget package is likewise not installed by default, though Alpine's busybox does provide a limited wget applet that can handle a simple probe such as wget -q --spider http://localhost/.
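If the curl-based healthcheck should stay as written, one minimal sketch is a thin custom image; the Dockerfile.nginx filename and the build stanza are assumptions, not existing project files:

```dockerfile
# Dockerfile.nginx (hypothetical filename): add curl on top of the official image
FROM nginx:stable-alpine
RUN apk add --no-cache curl
```

The compose service would then replace image: nginx:stable-alpine with a build stanza (build: { context: ., dockerfile: Dockerfile.nginx }), leaving the existing healthcheck untouched.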
🤖 Prompt for AI Agents
In @compose.staging.yaml.example around lines 33 - 56, The nginx service
healthcheck uses curl but nginx:stable-alpine does not include curl; create a
custom image (build FROM nginx:stable-alpine and install curl) or switch the
nginx image to a base that includes curl, then update the nginx service's image
reference so the existing healthcheck (test:
["CMD","curl","-f","http://localhost/"]) will succeed; ensure the custom image
is built/tagged and the compose file's nginx image value is changed accordingly.
opensearch:
  image: opensearchproject/opensearch:1.3.20
  container_name: staging-wiki-opensearch
  environment:
    - discovery.type=single-node
    - cluster.name=mediwiki-cluster
    - node.name=mediwiki-node
    - bootstrap.memory_lock=true
    - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    - plugins.security.disabled=true
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  volumes:
    - opensearch-data:/usr/share/opensearch/data
  networks:
    - wiki-network
  healthcheck:
    test: ["CMD-SHELL", "curl -k -u admin:${OPENSEARCH_INITIAL_ADMIN_PASSWORD} https://localhost:9200/_cluster/health || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 30s
  expose:
    - '9200'
🧩 Analysis chain
🌐 Web query:
OpenSearch plugins.security.disabled=true healthcheck HTTPS compatibility
💡 Result:
Short answer: Setting plugins.security.disabled: true only disables the Security plugin (authN/authZ/audit/protected-index enforcement) — it does NOT automatically turn off TLS/HTTPS. If you still have REST-layer TLS enabled (plugins.security.ssl.http.enabled: true / keystore or PEM settings), the cluster will continue to require HTTPS for REST requests (including health checks). So your healthcheck must use HTTPS (and validate/trust the cert) or call an HTTP listener that’s still available (if you explicitly configured a non‑TLS REST listener). [1][2]
Practical notes:
- Use the same endpoint a client would: GET https://:9200/_cluster/health. If TLS is enabled, the health probe must trust the node certificate (or skip verification if your probe supports that — not recommended in prod). [2]
- Disabling security exposes the security config index and removes auth; protect access by network controls if you disable it. OpenSearch Dashboards requires a secure cluster — remove the Dashboards security plugin if you disable server-side security and still want Dashboards. [1][3]
Sources:
[1] OpenSearch — Disabling / Disable Security plugin.
[2] OpenSearch — Configuring TLS / REST-layer TLS (plugins.security.ssl.http.*).
[3] OpenSearch — Security settings (plugins.security.disabled notes).
🌐 Web query:
OpenSearch security plugin disabled behavior curl healthcheck
💡 Result:
Short answer: If you set plugins.security.disabled: true and restart the cluster, the Security plugin is disabled so the cluster no longer enforces the plugin’s authentication/authorization on HTTP endpoints — a simple curl to the health API (/_cluster/health) will return the usual 200 JSON response without needing the security credentials (assuming no other network-level auth is in front of OpenSearch). The docs also warn that disabling security exposes the security configuration index and requires a full restart. [1][2]
Example:
- After adding plugins.security.disabled: true in opensearch.yml and restarting:
curl -s http://localhost:9200/_cluster/health
Notes / caveats:
- Disabling is cluster-wide and requires full restart. [1]
- The configuration index (and any stored secrets) becomes exposed — protect or delete it if needed. [1][2]
- OpenSearch Dashboards expects a secure cluster; you may need to remove its security plugin if you disable OpenSearch security. [2]
Sources:
- OpenSearch docs — Disabling and enabling the Security plugin. [1]
- OpenSearch docs — Disable security (version-specific pages / warnings). [2]
🏁 Script executed:
find . -name "compose.staging.yaml.example" -o -name "compose.staging.yaml" | head -5
🏁 Script executed:
git ls-files | grep -i compose | grep -i staging
🏁 Script executed:
cat -n compose.staging.yaml.example | head -150
🏁 Script executed:
grep -r "OPENSEARCH_INITIAL_ADMIN_PASSWORD" . --include="*.env*" --include="*.yaml" --include="*.yml" | head -20
🏁 Script executed:
grep -r "opensearch" . --include="*.md" -i | head -30
🏁 Script executed:
grep -r "plugins.security" . | head -20
🏁 Script executed:
grep -A 10 "healthcheck:" compose.local.yaml.example | grep -A 10 "opensearch" -B 20
🏁 Script executed:
grep -B 5 -A 10 "test:.*curl.*opensearch\|opensearch.*-A 15" compose.local.yaml.example
🏁 Script executed:
cat -n compose.local.yaml.example | sed -n '75,105p'
🏁 Script executed:
grep -n "opensearch:" compose.local.yaml.example
🏁 Script executed:
sed -n '120,160p' compose.local.yaml.example
🏁 Script executed:
sed -n '121,155p' compose.local.yaml.example
Fix OpenSearch healthcheck: use HTTP without credentials when security is disabled.
With plugins.security.disabled=true and no explicit TLS configuration, OpenSearch listens on HTTP by default. The healthcheck tries to connect via HTTPS with credentials, which will fail. Change the healthcheck to:
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
When security is disabled, credentials are unnecessary and TLS is not enabled by default. This same issue affects both compose.local.yaml.example and compose.staging.yaml.example.
🤖 Prompt for AI Agents
In @compose.staging.yaml.example around lines 75 - 104, The OpenSearch service
healthcheck is using HTTPS with credentials while plugins.security.disabled=true
(no TLS/auth), so update the healthcheck block (healthcheck.test) in
compose.staging.yaml.example (and the equivalent in compose.local.yaml.example)
to use a plain HTTP probe to http://localhost:9200/_cluster/health without
-u/HTTPS/credentials (so remove OPENSEARCH_INITIAL_ADMIN_PASSWORD usage and
-k/-u flags) and keep the existing curl failure handling (curl -f ... || exit 1)
to correctly detect readiness.
Summary by Sourcery
Restructure the MediaWiki deployment into a generic, env‑driven containerized wiki image with modular configs and task automation, and document how to deploy and operate it.
New Features:
Enhancements:
Build:
CI:
Documentation: