
Simplify docker network #41

Closed
BenRoe wants to merge 1 commit into rspamd:main from BenRoe:patch-2

Conversation

@BenRoe
Contributor

@BenRoe BenRoe commented Jul 4, 2025

Remove the rspamd network and the static IPs from the compose file; they are not mandatory, and static IPs in a compose file are not best practice.
Instead, use the container name in the `RSPAMD_DNS_SERVERS` environment variable.
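Concretely, the proposal amounts to a compose fragment like the following (a sketch; the `unbound` service name and the image references are assumptions, not taken from this repo):

```yaml
services:
  rspamd:
    image: rspamd/rspamd:latest   # assumption: upstream image name
    environment:
      # Container name instead of a static IP (the change proposed here)
      RSPAMD_DNS_SERVERS: unbound
    depends_on:
      - unbound

  unbound:
    # assumption: any Unbound (or other resolver) service
    build: ./unbound
```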
@fatalbanana
Member

It looks like it breaks DNS resolution at Rspamd

@BenRoe
Contributor Author

BenRoe commented Jul 4, 2025

It looks like it breaks DNS resolution at Rspamd

How did you test this? I will check it too.

@fatalbanana
Member

How did you test this? I will check it too.

Something like rspamc -F foo@example.net -i 8.8.8.8 /dev/null in rspamd container

@BenRoe
Contributor Author

BenRoe commented Jul 4, 2025

Seems to work.
[Screenshot 2025-07-04 at 14:04: rspamc output]

@vstakhov
Member

It won't work - it will use resolv.conf in this case and something strange, such as 8.8.8.8 as a resolver.

@vstakhov
Member

In fact, it is a problem in Rspamd that it does not resolve DNS resolver names using gethostbyname - initially it looked like a chicken-and-egg problem from my perspective, but it seems it is necessary for Docker environments. @dragoangel should also have some insights, IIRC.

@dragoangel

@vstakhov hi, @BenRoe this PR is wrong. Vsevolod is right: DNS in software is always configured as an IP. The way it is configured here is the same as in mailcow, Mailu, Mail-in-a-Box, or anything similar.

The only change possible here is to set the DNS resolver not in the Rspamd conf but via Docker Compose - but it will be an IP too, not a container name ;)

Even when we speak about stuff like k8s: when you configure an upstream in NodeLocal DNS or CoreDNS, you can't use internal svc names there; you are forced to use IPs.

@vstakhov
Member

@dragoangel I mean it might be better to address this issue in Rspamd, as we are expected to run in Docker environments, and lacking the ability to resolve nameserver addresses using the system resolving API (gethostbyname) is not very good.

@dragoangel

dragoangel commented Aug 29, 2025

@dragoangel I mean it might be better to address this issue in Rspamd, as we are expected to run in Docker environments, and lacking the ability to resolve nameserver addresses using the system resolving API (gethostbyname) is not very good.

@vstakhov I do not see anything bad in adding that ability, but it will not be useful for all containerized setups; mostly it will do the job for Compose in this particular case. In k8s you will most likely stick to NodeLocal DNS (and not touch Rspamd DNS at all - use the IP provided to the container by k8s), which speaks to some upstream root resolver like Unbound. And even if not, it will definitely be a service with a static IP.

Also the question is: what if gethostbyname fails? How many retries do we expect? What if the response changes over time? How often will we re-query it, or will we check it only at startup? What if none of the mentioned DNS names of resolvers work at start and/or at runtime, etc.?

Technically, DNS resolvers in every system are always something static, pointed to by IP.

@dragoangel

dragoangel commented Aug 29, 2025

The reason I am saying all that is because right now this is asked for a small, particular use case, but in half a year there will be a guy who creates an issue asking why his Rspamd fails to resolve DNS (failed startup, or lost resolution in the middle of the working process) after he pointed his DNS at a domain name without real need :) and not in a container environment at all.

@polarathene

polarathene commented Nov 15, 2025

Hello, I don't use this project directly but I do know this topic fairly well :)

Collapsed for brevity

You have a few options for adjusting DNS configuration for Docker Compose. Worth noting is that Docker (CLI without Compose) has defaulted to its own legacy bridge network, with differences from the "user-defined" networks that Docker Compose uses by default.

This also affects DNS: user-defined networks leverage an embedded/internal DNS service from Docker at 127.0.0.11:53, and that is configured in /etc/resolv.conf as a bind mount, which prevents any atomic edits that would try to replace the file with a separate file - the inode must remain the same.

I have some detailed insights here along with advice on how to handle that if you need to give priority to your own DNS service instead; otherwise Docker's internal DNS will have priority at resolving anything that'd match its own internal network's DNS entries, hostnames, network aliases, etc. (That also means PTR records, which was an issue I encountered 😅).

If you'd like to modify /etc/hostname, /etc/resolv.conf, or /etc/hosts; I provide clearer instructions here (focused on /etc/hosts but same approach), along with a gotcha I ran into with /etc/hosts and glibc/NSS DNS hostname lookup when the software expected an FQDN.

If you give your DNS server priority but still want to leverage the convenience of Docker Compose DNS for connecting services, you can do as I demonstrated with CoreDNS (I've not used Unbound) and add a fallback to forward requests to 127.0.0.11:53. In that example I have specific domains that would only resolve locally (due to .test or .internal TLD) and assign the containers hostname setting in compose.yaml, where a subdomain can be used to keep it simple.

services:
  rspamd:
    hostname: rspamd.example.internal

  redis:
    hostname: redis.example.internal

  unbound:
    hostname: dns.example.internal

Alternatively you could avoid the hostname addition if you'd rather hard-code the service names in the DNS instead of a shared domain name as the route condition.
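As a rough sketch of that fallback routing (untested; the domain name and upstream are examples - I'm using CoreDNS Corefile syntax here since that's what was described, with the `forward` plugin proxying unmatched queries):

```
# Corefile (sketch)
example.internal:53 {
    # Service/container names under this domain resolve via Docker's embedded DNS
    forward . 127.0.0.11:53
}

.:53 {
    # Everything else goes to the host's resolv.conf (or a public resolver)
    forward . /etc/resolv.conf
}
```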

You then need the moreutils package for the sponge program; then you can have your image entrypoint perform the following:

# Only relevant when using Docker with user-defined networks (Default network type for Docker Compose)
# Modify the file to swap the Docker DNS (127.0.0.11) to resolve your own preferred DNS server (172.20.42.10)
export WITH_NAMESERVER=172.20.42.10
# In an entrypoint script use ${WITH_NAMESERVER}; escape as $${WITH_NAMESERVER} only when inlined in compose.yaml
sed "s/127.0.0.11/${WITH_NAMESERVER}/" /etc/resolv.conf | sponge /etc/resolv.conf
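Why sponge and not `sed -i` or `mv`? The bind mount means the file must be rewritten through its existing inode. A quick sketch on a throwaway file demonstrates the inode-preserving rewrite (paths and the 172.20.42.10 address are examples):

```shell
# Simulate the bind-mounted /etc/resolv.conf constraint: rewrite the file
# through its existing inode (what `sponge` does). `sed -i` or `mv` would
# create a new inode, which fails on a bind-mounted file.
f=$(mktemp)
printf 'nameserver 127.0.0.11\n' > "$f"
inode_before=$(stat -c %i "$f")

tmp=$(mktemp)
sed 's/127.0.0.11/172.20.42.10/' "$f" > "$tmp"
cat "$tmp" > "$f"   # truncate + write through the same inode
rm "$tmp"

inode_after=$(stat -c %i "$f")
echo "$inode_before $inode_after"
cat "$f"
```

The inode numbers printed are identical, while the nameserver line is replaced.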

It won't work - it will use resolv.conf in this case and something strange, such as 8.8.8.8 as a resolver.

In the past the daemon DNS fallback has been fixed, and it should be flexible enough to change now; the user/admin must handle that on their system, however.

Same settings as what Docker Compose should be using by default AFAIK, other than user-defined networks having that extra 127.0.0.11:53 DNS server in front.

I believe these days, rather than 8.8.8.8, it should be delegated to the host OS; that may differ when using Docker Desktop (or something else that's not Docker, like Podman or Nerdctl, which can ingest Docker Compose configs too but do their own thing with networking). For Linux hosts there's additional logic to deal with systemd when its stub resolver is running on the host at 127.0.0.53:53, IIRC.


Instead use the container name in RSPAMD_DNS_SERVERS environment variable.

This should work AFAIK if Rspamd is fully in control of DNS queries from within the image; otherwise you'll get mixed behaviour. The DNS name can be queried at 127.0.0.11:53 and you'll get the container resolved, which is why it worked for the PR author.

If you weren't using Docker Compose (eg: docker run ... with its default bridge network), or a different container engine that doesn't have the same internal DNS support, you may run into issues. IIRC for docker run, for example, there are very limited implicit DNS names that will resolve between containers. I don't use podman or nerdctl enough to comment on what they're like, and I know that kubernetes is limited (no ability to configure/assign a hostname or network alias to a specific container last I checked; everything had to go through ingress/gateway for its own internal CoreDNS to handle).

Something like rspamc -F foo@example.net -i 8.8.8.8 /dev/null in rspamd container

This would be skipping the internal DNS obviously, so any private DNS records aren't going to be available. Even with my suggested modification of /etc/resolv.conf, you're overriding to query a public DNS resolver, so of course that'd be brittle?

In fact, it is a problem in Rspamd that it does not resolve DNS resolvers names using gethostbyname - initially it was like a chicken-egg problem from my sight, but it seems it is necessary for Docker environments

Oh, I think I misunderstood from earlier then if you've got a different fallback for what DNS service is called.

Just be mindful that I've observed issues with gethostbyname() in containers with software relying on it, such as fetchmail and swaks, where the /etc/hosts entry has the container IP with both the hostname --short and hostname --fqdn values, but the first value of the entry is selected by gethostbyname().

  • They would fail if a hostname with an FQDN wasn't assigned to the container and fetchmail/swaks was configured to perform an action that didn't set an explicit FQDN: gethostbyname() was used to get the hostname associated with the container, and it'd get the short hostname (a single label, a randomly assigned hexadecimal value).
  • While the opposite situation of having a hostname configured that overlapped with a public FQDN meant the container's IP would be resolved instead. For some software that would fail when an external IP had an rDNS lookup that was expected to resolve to the FQDN the client claimed, but was instead deemed suspicious due to a different IP in /etc/hosts for the same FQDN (my recall is a bit vague, but it was an issue I encountered with Postfix and reproduced).

Should you encounter that issue, you may want to patch /etc/hosts:

# Relevant when the container lacks an FQDN hostname and services try to use it.
# WARNING: With an FQDN set via `--hostname` rDNS on a public IP with PTR to the same FQDN
# will fail resolving the FQDN back to the public IP, instead resolving the private container IP.
#
# Prepend preferred FQDN for services to resolve as the hostname `--fqdn` / `--domain`,
# Rather than the kernel hostname of this UTS namespace:
# https://docs.docker.com/reference/cli/docker/container/run/#uts
# https://docs.docker.com/reference/compose-file/services/#uts
export WITH_HOSTNAME='hello.world.test'
sed -E "s|($(cat /proc/sys/kernel/hostname))|${WITH_HOSTNAME} \1|" /etc/hosts | sponge /etc/hosts
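To see what that sed patch actually does, here it is applied to a throwaway copy of a hosts entry (the IP, short hostname, and FQDN are all example values, not from any real container):

```shell
# Demonstrate the /etc/hosts patch on a throwaway file.
hosts=$(mktemp)
short='abc123def456'                  # stand-in for the kernel (UTS) hostname
printf '172.20.0.5\t%s\n' "$short" > "$hosts"

# Prepend the preferred FQDN so it becomes the first (selected) value:
WITH_HOSTNAME='hello.world.test'
sed -E "s|(${short})|${WITH_HOSTNAME} \1|" "$hosts" > "${hosts}.new"
cat "${hosts}.new"
# → 172.20.0.5	hello.world.test abc123def456
```

gethostbyname() picks the first name on the line, so after the patch it returns the FQDN rather than the short hexadecimal hostname.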

Technically, DNS resolvers in every system are always something static and pointed to by IP.

A user doesn't always have to configure that, though - such as when DHCP hands over an IP for DNS to query, IIRC? There are also mDNS deployments, which can kinda be set up to resolve beyond a single unique DNS label (non-standard).

The only change possible to be done here is to set dns resolver not in conf of rspamd, but via docker compose, but! it will be ip too, not a container name ;)

As described earlier, if you can make a DNS query that would use /etc/resolv.conf - which for Docker bridge networks would be 127.0.0.11 - you'd be able to resolve the service/container names to an IP, but it would be brittle, since you could also deploy a container with host-mode networking where that won't work.

My personal preference is to configure DNS at the appropriate places, rarely is that for services. I wasn't even aware that Rspamd had support for that 😅

Rspamd shouldn't need to be too concerned about it, but you should be mindful of the caveats I've shared above. If the DNS queries Rspamd needs to perform should bypass the internal DNS from Docker, then it's simpler to configure this in Rspamd's config, since it supports this - so long as nothing else in its container is performing DNS requests, where it'd be inconsistent.

Unbound can then redirect requests as appropriate to 127.0.0.11:53 for DNS queries that should resolve to specific containers, but I don't know how compatible that would be outside of Docker Compose (which I assume isn't a concern here).


@dragoangel

dragoangel commented Nov 15, 2025

otherwise Docker's internal DNS will have priority at resolving anything that'd match it's own internal networks DNS entries, hostnames, network aliases, etc (That also means PTR records, which was an issue I encountered 😅).

Yes, and it has to work that way. If you have issues due to this, fix them some other way, because this behavior is consistent across any dockerized environment - docker, containerd, compose & k8s - and there is no need to reinvent the wheel and try to work around it. Your rspamd MUST be able to resolve internal docker names if it lives inside of a docker network, because rspamd is not just a standalone system resolving public names; it is a system that can rely on other components in the same docker environment, and due to that it must resolve their names...

@polarathene I don't want to be or sound offensive, but you are creating your own monster docker-antipattern mail system by shipping everything in one container, which totally destroys the idea of containerization. You need to check best practices and other projects: none of them have the issues you describe because they are built the correct way.

@dragoangel

dragoangel commented Nov 15, 2025

Also instead of the separate Dockerfiles, since these are quite simple, you can replace the dockerfile setting in compose.yaml for dockerfile_inline

What a horrible idea, sorry 🙈, no. @vstakhov I propose close this PR, it goes to nowhere.

@BenRoe
Contributor Author

BenRoe commented Nov 15, 2025

Didn't know that the topic is that complicated.
Feel free to close it. I don't mind.

@polarathene

polarathene commented Nov 16, 2025

Collapsed for brevity

Yes, and it has to work that way.
If you have issues due to this, fix them some other way, because this behavior is consistent across any dockerized environment - docker, containerd, compose & k8s - and there is no need to reinvent the wheel and try to work around it.

I was simply sharing for the benefit of anyone here not aware of the various niche bits of knowledge on the topic, and valid ways to fix those should they be issues.

Sometimes you may delegate a container's DNS to another resolver, and you may have the expectation that you'd have the ability to supply records that would be used, not intercepted by Docker first; all I suggested was inversion.

I don't even know why you're claiming that this is consistent across such environments when it's not. I already explained that /etc/resolv.conf in the container is only configured with 127.0.0.11 when you have user-defined networks. The latest Docker v29 still uses DNS from the host when you use docker run and its default bridge network, which in this case means using DNS from /etc/systemd/resolved.conf, since as I mentioned there's logic by Docker to do this when /etc/resolv.conf on the host has 127.0.0.53 for DNS.

As a result, no, you don't have the same network consistency with containers across environments. k8s won't even let you set a hostname on a container, thus running the software I cited is susceptible to the issue I mentioned when an FQDN was expected to be returned.


Your rspamd MUST be able to resolve internal docker names if it lives inside of a docker network - because rspamd is not just a standalone system resolving public names, it is a system that can rely on other components in the same docker environment, and due to that it must resolve their names...

Where am I encouraging you to remove that ability?

I simply said having Unbound route queries to 127.0.0.11:53 in this compose example would allow either the Rspamd container or Rspamd itself directly to query Unbound without any friction of the internal DNS in the way.

Modifying /etc/hosts entries when the first value would be invalid for software that uses gethostbyname() is kind of necessary if you can't prevent the software from doing that and it expects to resolve an FQDN:

Postfix can also be configured with a security restriction that requires connecting clients to provide FQDNs for their HELO. In the case of fetchmail, that would fail when it presents a non-qualified hostname, which is an issue with k8s in my experience, as it lacks the ability to set a hostname for the container; it returns the hostname from the kernel UTS namespace (via /proc/sys/kernel/hostname).

What do you propose to do in situations like that? The solutions I've described aren't bad when you understand the problems and what you're doing. Obviously, if you don't have these problems in the first place with your software/deployment, then it's not something you should do, especially when there are better alternatives available (in the case of swaks you'd just explicitly set --helo with its CLI tool, for example, when the container doesn't support setting a hostname, such as with k8s).


Another k8s-specific issue I've had to accommodate support for is preserving the original client IP from ingress/gateway. My experience with k8s is limited, but from what I was informed, we had to support external traffic with Proxy Protocol, which the ingress controller (such as Traefik) would be configured for. Internal traffic within a pod's containers would connect directly, with no ability to route through Traefik to append the expected ProxyProtocol headers; thus services such as Postfix and Dovecot needed to offer variations of their ports with ProxyProtocol enabled/disabled, such that both internal and external traffic could connect successfully.

That wasn't an issue with Docker Compose, where I could easily control the internal DNS via a separate CoreDNS container. Another gripe I've had with k8s was lack of control over ulimits (but my upstream fixes to both Docker and containerd have resolved my issue with that, now that containerd 2.0+ is seeing broader adoption).

Then another caveat with Docker until recently was that its default-enabled userland-proxy would rewrite the client IP of public IPv6 connections to IPv4-only containers, presenting them with the IP of the docker network gateway. For any software that blindly trusts all private range IPs by default (cough rspamd cough cough), these client connections would be treated with relaxed restrictions (which in the case of rspamd I guess is settings like skip_local = true or similar).
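For reference, the userland-proxy is a daemon-wide setting; disabling it would look like this (assuming `/etc/docker/daemon.json` and a daemon restart afterwards):

```json
{
  "userland-proxy": false
}
```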

That IPv6 => IPv4 caveat was a bit of a concern for the ProxyProtocol support when it comes to Postfix and Dovecot, should anyone configure a wider subnet instead of only the specific trusted IPs of containers where ProxyProtocol headers are expected, or relax security checks on private subnets for the convenience of containers sending mail to bypass some DNS checks, etc.


@polarathene I don't want to be or sound offensive, but you are creating your own monster docker-antipattern mail system by shipping everything in one container, which totally destroys the idea of containerization.

You're nitpicking on one project I contribute to 🤷‍♂️

I have cleaned up the CoreDNS Dockerfile and assisted with various issues with its container support that users ran into. I've done the same for testssl.sh and various other OSS projects, where I've shared my expertise either via contribution or review feedback. This is my area of expertise.

DMS bundling multiple services into a single image is not an anti-pattern. That's commonly misunderstood and parroted. There's nothing wrong with multi-process images; a main process may spawn multiple child processes under the hood, and that has very little relevance when it's a single service, right?

Your concern is more about the use of a service manager as the main process managing other processes that represent distinct services. However, the advice you're actually trying to point at is to not recklessly bundle disjoint services into monolith images, which I would 100% agree with you on.

DMS represents a logical service (a mail-server). It's a high-level component comprised of smaller individual services that are integrated to run within a single container. That has its pros and cons, but as you can see from its stars, it's one of the most popular containerized mail projects out there for that convenience. Very little is gained from splitting it apart into individual images, which other competitors already offer (again with pros/cons).

We do bundle Rspamd (which sadly we can't pin the version of in our builds, because upstream insists against this, despite my offer to assist in adding that support in an easy-to-maintain manner that brings no real added burden to the project, but ok 🤷‍♂️), and while we also support running Redis within the image (and volume mounting for persistence), we also support using an external container for Redis.

If we were to offer a separate image for rspamd, we would then have to add unnecessary complexity to support the configuration changes that DMS provides for the benefit of the user. There are various things I would change in the project if I could, but I'm not the only maintainer there, and certain preferences I have contrast with the opinions of other maintainers. None of us are paid to work on the project (none of my OSS work is financially supported; it's all voluntary); my goals there are only in supporting the users and making improvements when I can afford the time.

The fact that I can't talk sense into upstream projects at times adds friction. I had no involvement in deciding to bring in rspamd, but I do my best to support it within DMS and for the users that need assistance with it, and as a result I've sometimes engaged with the project with only good intentions that tend to get dismissed, usually without valid reasoning (but I'd rather remain on good terms, so I try to respect/accept such decisions).

So please don't judge my input with bias from my involvement in DMS just because it's one of those rare images that does things differently. My experience and contributions are not solely tied to that project, nor is there anything inherently wrong about its approach of providing a single logical service.


You need to check best practices and other projects: none of them have the issues you describe because they are built the correct way.

I think I've provided plenty of insights already where DMS is irrelevant in this context and you can still hit problems (regardless of Docker or k8s).

I'm aware of best practices, but I'm also experienced enough to know better where appropriate.

  • Just like with TLS where some compliance today finds RSA 2048-bit unacceptable, raising the minimum to RSA 3072-bit or beyond (ECDSA/Ed25519 aside) and as "best practice" that information can be parroted and is trusted that NIST knows better by those that don't know this area of security that well. RSA 2048-bit is about 110-bit of symmetric strength, not too far off the equivalent energy requirements to boil all the oceans on earth to attack that keyspace successfully.
  • Likewise with passwords where you can be perfectly secure with a passphrase like detailed snail summons slim lab coat (6 words, lowercase letters with space only), provided the entropy is decent and the passphrase is selected without bias. Those less experienced on the topic would cite best practices stating that secure passwords can't possibly be just a few words lowercase.

I've seen plenty of projects with hiccups when they tack on container support. You can still mess things up regardless (take a look at Deno's official images where their Alpine image copies their binaries to the Alpine base, but as it was built with glibc, they copy that over and use LD_LIBRARY_PATH as a global ENV instead of using rpath on the binary, as a result attempting to use anything in Alpine that links to musl libc will fail). Others have been known to unintentionally reduce security (ironically with the intent of improving security) because they're adopting "best practices" or adopting common solutions vs resolving them properly, the use of setcap to support rootful containers with non-root runtime users comes to mind.

If you want to downplay my expertise and willingness to assist by clarifying the cause of conflicting earlier observations - where I share knowledge to save others the time of figuring such things out, as they're not always easy to learn about or discover - that's on you :\

@dragoangel

I simply said having Unbound route queries to 127.0.0.11:53 in this compose example would allow either the Rspamd container or Rspamd itself directly to query Unbound without any friction of the internal DNS in the way.

You can still configure rspamd to directly use the correct dns without any interception by setting dns in options; I don't understand what is stopping you?

About the other issues you describe - this is quite offtopic, and a PR is not a place for a broad chat, from my view. But to try to answer shortly: proxy proto is a common way to provide the source IP; yes, a backend app should have a dedicated port with proxy protocol and one without - this is a must-have when you run any app behind an L4 TCP proxy and need the origin IP. About IPv6 traffic reaching an IPv4-only container - unfortunately we do not have IPv6 in k8s, but skip_local = true is possibly your lowest problem, as postfix f.e. can utilize the trusted network and become an open relay. IP in the context of mail is important, and breaking the network can end with serious consequences... But from a logical standpoint it should be quite obvious that you can't accept an IPv6 packet in an IPv4-only container, so it will be routed via the "network gateway".

@polarathene

It looks like it breaks DNS resolution at Rspamd
Something like rspamc -F foo@example.net -i 8.8.8.8 /dev/null in rspamd container

UPDATE: Actually the difference with ASN seems racy. I could repeat the command and ASN would sometimes show and sometimes not. I can get both responses regardless of the three DNS configs.

What exactly is the not-broken output meant to look like for reference? Because when you do it with the existing, apparently working compose.yaml, this is the result:

# NOTE: I added jq into the container:
$ rspamadm configdump --json | jq .options.dns
{
  "timeout": 1,
  "sockets": 16,
  "retransmits": 5,
  "nameserver": "round-robin:192.0.2.254:53"
}

$ rspamc -F foo@example.net -i 8.8.8.8 /dev/null
Results for file: /dev/null (1.54 seconds)
[Metric: default]
Action: reject
Spam: true
Score: 16.00 / 15.00
Symbol: COMPLETELY_EMPTY (15.00)
Symbol: R_SPF_FAIL (1.00)[-all:c]

With the PR's proposed container name instead of IP for Rspamd DNS config (or also if you ignore this and just configure the container with the Compose dns setting instead of Rspamds dns), you get this:

$ rspamadm configdump --json | jq .options.dns
{
  "timeout": 1,
  "sockets": 16,
  "retransmits": 5
}

$ rspamc -F foo@example.net -i 8.8.8.8 /dev/null
Results for file: /dev/null (3.21 seconds)
[Metric: default]
Action: reject
Spam: true
Score: 16.00 / 15.00
Symbol: ASN (0.00)[asn:15169, ipnet:8.8.8.0/24, country:US]
Symbol: COMPLETELY_EMPTY (15.00)
Symbol: R_SPF_FAIL (1.00)[-all]

The only difference is the ASN symbol is present, which looking at the module docs doesn't seem like something to indicate that it's broken? You queried rspamc with the --ip / -i option and 8.8.8.8, it responds to let you know that the IP belongs to the 8.8.8.0/24 subnet 🤷‍♂️

Initially I misunderstood from the IP choice and assumed -i was adjusting the DNS server to Google's 8.8.8.8, especially with the follow-up comment:

It won't work - it will use resolv.conf in this case and something strange, such as 8.8.8.8 as a resolver.

/etc/resolv.conf is 127.0.0.11, and remains that way if the Compose dns setting is configured to that.

More context is needed for what the "correct" output is expected if it's wrong?

@polarathene

I simply said having Unbound route queries to 127.0.0.11:53 in this compose example would allow either the Rspamd container or Rspamd itself directly to query Unbound without any friction of the internal DNS in the way.

You can still configure rspamd to directly use the correct dns without any interception by setting dns in options; I don't understand what is stopping you?

I am aware of this. I thought I made that rather clear 🤷‍♂️

I was referring to the case where the apparent "breaks DNS resolution in Rspamd" statement was actually true: coupled with that Unbound change to support 127.0.0.11, you would set the Unbound IP at either:

  • Rspamd's dns option (Rspamd directly queries the configured DNS resolver)
  • Container-level (the /etc/resolv.conf modification that you disapprove of, but it would be necessary if anything else making DNS queries outside the influence of the Rspamd process(es) were to have similar issues).

If you do not have any DNS issues caused by the internal 127.0.0.11 DNS service from Docker, then none of that is necessary. I'm not the one claiming the PR is broken, and as above I can't see any difference from the "proof"; I'm merely trying to advise with what little context has been shared about the supposed issue.
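For the first of those options, a minimal sketch of what that Rspamd config might look like (UCL syntax in a local.d override; the IP is an example matching the earlier entrypoint snippet, not taken from this repo):

```
# /etc/rspamd/local.d/options.inc (sketch)
dns {
  nameserver = ["172.20.42.10:53"];
}
```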


The rest of my response is collapsed for brevity, expand if you care for more details:

  • A service can listen on a port that accepts both ProxyProtocol headers and without, and still remain secure (despite the HAProxy author saying otherwise, I've proven it).
  • I mentioned Postfix directly after Rspamd. I know about Open Relay risks; I maintain DMS.
  • Disable userland-proxy to prevent IPv6 to IPv4 routing. Shouldn't be an issue either way in modern Docker releases.
Personal feedback

About the other issues you describe - this is quite offtopic, and a PR is not a place for a broad chat, from my view.

Most were about network caveats, or added context when responding to your decision to bring up my association with DMS and how you disapprove of the image not blindly following "best practices". After responding to that, instead of acknowledging that what I said has some valid points, you'd rather dismiss it as off-topic (why bring it up in the first place?).

Rather than a productive conversation on the issue at hand, you've been responding in a manner that only seems interested in criticizing my input along with 👎 reacts. Meanwhile anything I say that contradicts your criticisms is simply ignored. I recognize this type of discourse, and am better off avoiding engaging any further tbh as it does neither of us any good.

I don't particularly appreciate feedback like "What a horrible idea, sorry 🙈, no." without any context as to why it's so horrible 😕 Maybe try some constructive feedback instead of just shutting me down?


Response to answers

proxy proto is a common way to provide the source IP; yes, a backend app should have a dedicated port with proxy protocol and one without - this is a must-have when you run any app behind an L4 TCP proxy and need the origin IP

No, separate ports are not mandatory - only when services are implemented in a way that they lack the ability to handle both correctly on a single port.

I've already had this argument with the HAProxy dev that authored the protocol, who became dismissive and quiet when I demonstrated proof of how it can be done securely despite his claims (he had chimed in to discourage allowing such in another project (Caddy) on the basis of a vague security concern that, when fleshed out, was proven false).

Feel free to dismiss me on that too. I have ample evidence on these topics available publicly, I am happy to be proven wrong with tangible facts but that's rare when I've invested enough time on a topic that I'm confident enough with reproductions to back it up.


About IPv6 traffic reaching an IPv4-only container - unfortunately we do not have IPv6 in k8s, but skip_local = true is possibly your lowest problem, as postfix f.e. can utilize the trusted network and become an open relay.

Uhh yes... directly after calling out Rspamd's default configs I did bring up Postfix + Dovecot in the next paragraph. Considering I maintain DMS and I'm often involved in its security, those are things I try to stay informed about. Well aware of the Open Relay risks; DMS doesn't have such wide trust by default, users would have to explicitly do that themselves.


IP in the context of mail is important, and breaking the network can end with serious consequences... But from a logical standpoint it should be quite obvious that you can't accept an IPv6 packet in an IPv4-only container, so it will be routed via the "network gateway".

If you disable the userland-proxy in Docker, that concern is avoided entirely.

IPv6 connections wouldn't be routed, just refused instead. Regardless of the userland-proxy setting, you can give the container an IPv6 network (and thus an IP), which is what the DMS docs detail for that concern, with a ULA subnet.

Since Docker v27, I think, IPv6 left experimental status and is enabled by default if the host has it available, along with an IPv6 subnet pool, so it shouldn't be an issue in modern environments.
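The ULA approach mentioned above looks roughly like this in a compose file (a sketch; the subnet value is an example):

```yaml
networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:cafe:feed::/64
```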


@dragoangel

dragoangel commented Nov 16, 2025

Care to explain why it's a horrible idea?

For obvious reasons - you need to have a way to build the image without Compose?

It was just a suggestion, reducing the number of files involved (which isn't a problem if you git clone the whole repo obviously).

I totally do not understand what you are trying to accomplish. You are fighting shadows, from my view.

@polarathene

polarathene commented Nov 16, 2025

For obvious reasons - you need to have a way to build the image without Compose?

It's literally a Docker Compose example; if that's your argument, you might as well tell them to build the image separately instead of using Compose's build in compose.yaml 🤷‍♂️

Apart from that, the custom image build is pointless - it's only there to bundle some config, something you can do just as easily with or without Compose via a bind volume mount.

I totally do not understand what you are trying to accomplish. You are fighting with shadows from my view.

That's ok, I could say the same to you as you're often not making sense when you dismiss anything I have to say, even when I provide information that invalidates your statements 🤦‍♂️

I'm still waiting on a reply about what possible use/value you're getting from forcing an implicit anonymous volume via VOLUME, which you were so confident in shutting me down over, btw. My bet is you don't have anything to justify it, but you were more than happy to dismiss my contribution intended to benefit others.

It seems all you can do is 👎 me and are unwilling to appreciate / acknowledge anything I say that contradicts your own statements, even once it's evident how misinformed you are. What does that say about you?

@BenRoe BenRoe closed this by deleting the head repository Nov 18, 2025