
Conversation

timmc commented Jan 3, 2026

Proposed changes

This clarifies the interaction of proxy_buffer_size and response headers. When the buffer size is too small for the response headers, nginx returns a 502 and logs "upstream sent too big header while reading response header from upstream".

Reference: https://github.com/nginx/nginx/blob/6a67f71a4a78edb662c190af93ac6d3d680e107a/src/http/ngx_http_upstream.c#L2547
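For context, here is a minimal illustrative location block showing the directive alongside the related buffer settings (the sizes are hypothetical examples, not recommendations, and `http://backend` is a placeholder upstream):

```nginx
# Hypothetical location block; the sizes below are illustrative only.
location / {
    proxy_pass http://backend;

    # Buffer used for reading the first part of the upstream response,
    # which contains the response headers. If the headers do not fit,
    # nginx returns 502 and logs "upstream sent too big header while
    # reading response header from upstream".
    proxy_buffer_size 8k;

    # General response buffers. proxy_busy_buffers_size must be at least
    # as large as the greater of proxy_buffer_size and one proxy_buffers
    # buffer, and smaller than the total buffer size minus one buffer.
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;
}
```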

timmc (author) commented Jan 3, 2026

I have hereby read the F5 CLA and agree to its terms

timmc (author) commented Jan 3, 2026

FYI, the PR template has a broken link to the contribution guidelines. I believe it's meant to link to https://github.com/f5/f5-cla/blob/main/CONTRIBUTING.md

pluknet (collaborator) commented Jan 27, 2026

Thanks for the patch. I don't think this warrants special clarification.
The description is already quite clear on this point; note "used for reading" and "contains a small response header".
That the response is treated as invalid follows naturally from it exceeding the expected header size.
Further, the default is sane enough that this rarely happens in practice,
so the clarification (its wording aside) would have little practical value.

(There are other nuances not covered here that might make sense to document, such as the fact that
proxy_buffer_size must also accommodate the cache header and key when reading cached responses.
On the other hand, I'd prefer to keep the description clear and generic, so as not to clutter it with such details.)

timmc (author) commented Jan 27, 2026

I do think it needs to be clarified -- in trying to debug 503s from our servers, I found a lot of discussions and blog posts about this setting, and many of them disagreed on what exactly this setting does and how it relates to the others.

I understand a desire to keep it generic, but I'm not sure what it needs to cover. Does it just control whether response headers can be read from the upstream in HTTP, or are other protocols also affected? How can we tell what the effect is on other protocols? Examples are also a great way of clarifying the intent of a generic statement. (I also suspect that the majority of users will be interested in the HTTP case specifically.)

As for size, I agree that 4k is a lot, but regardless it is a limit we ran into while doing something not totally unreasonable. :-) Our login endpoint returns several JWTs and other cookies for use by microfrontends, and it sometimes adds up to just over 4k. And again based on the number of discussions I've seen on the topic, it's not uncommon for this to happen on login endpoints.

I'm also surprised to learn about the cache key thing! I'd advocate for including these details to reduce the pain of debugging unexpected behavior from nginx. These are important things for users to understand.
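As a back-of-the-envelope check, one can estimate whether a response's headers approach the 4k default by summing the serialized length of the status line and each header field. A minimal Python sketch (a hypothetical helper, not part of nginx or this PR; the cookie size is an illustrative number):

```python
def header_bytes(status_line, headers):
    """Approximate on-the-wire size of an HTTP/1.1 response header block.

    status_line: e.g. "HTTP/1.1 200 OK" (without the trailing CRLF).
    headers: iterable of (name, value) pairs.
    """
    size = len(status_line) + 2                           # status line + CRLF
    size += sum(len(k) + len(v) + 4 for k, v in headers)  # "name: value\r\n"
    size += 2                                             # blank line ending the block
    return size

# A response whose cookies total a few kilobytes can exceed the 4k
# default of proxy_buffer_size:
big = header_bytes("HTTP/1.1 200 OK", [("Set-Cookie", "jwt=" + "x" * 4100)])
print(big > 4096)  # True with these illustrative numbers
```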

timmc (author) commented Jan 27, 2026

To put a different spin on it: There's no indication here of what the consequences are of misconfiguring this field. Does it just have performance impact (like many of the other buffer settings) or will it cause runtime errors?
