Re: verifier: variable offset stack access question

Yonghong Song

On Fri, Dec 25, 2020 at 5:41 PM Andrei Matei <andreimatei1@...> wrote:

For posterity, I think I can now answer my own question. I suspect
things were different in 2018 (because otherwise I don’t see how the
referenced exchange makes sense); here’s my understanding about the
verifier’s rules for stack accesses today:

There are two distinct aspects relevant to the use of variable stack offsets:

1) “Direct” stack access with variable offset. This is simply
forbidden; you can’t read from or write to a dynamic offset in the
stack because, in the case of reads, the verifier doesn’t know what
type of memory would be returned (is it “misc” data? Is it a spilled
register?) and, in the case of writes, it doesn’t know which stack
slot’s memory type should be updated.
Separately, when reading from the stack with a fixed offset, the
respective memory needs to have been initialized (i.e. written to).
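The two rules in (1) can be sketched as a toy model in C. To be clear, this is illustrative logic only, not the kernel's verifier code; the slot count and function names are invented:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of rule (1): the verifier tracks per-slot stack state,
 * so a write must name one fixed slot, and a read is legal only
 * from a slot that was already written. */
#define SLOTS 8
static bool slot_written[SLOTS];

/* A write at a known, fixed offset updates that slot's state. */
static bool verify_fixed_write(int off) {
    if (off < 0 || off >= SLOTS)
        return false;               /* out of bounds: rejected */
    slot_written[off] = true;
    return true;
}

/* A read is accepted only if that fixed slot was initialized. */
static bool verify_fixed_read(int off) {
    if (off < 0 || off >= SLOTS)
        return false;
    return slot_written[off];
}

/* A variable-offset access gives the verifier only a range, not a
 * single slot; under rule (1) it is rejected outright, read or write. */
static bool verify_variable_access(int min_off, int max_off) {
    (void)min_off;
    (void)max_off;
    return false;
}
```

The point of the model is that state is tracked per slot: without a concrete offset there is no slot whose type can be checked (on a read) or updated (on a write).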

2) Passing pointers to the stack to helper functions which will write
through the pointer (such as bpf_probe_read_user()). Here, if the
stack offset is variable, then all the memory that falls within the
possible bounds has to be initialized.
If the offset is fixed, then the memory doesn’t necessarily need to be
initialized (at least not if the helper’s argument is of type
ARG_PTR_TO_UNINIT_MEM). Why the restriction in the variable offset
case? Because, in that case, it cannot be known what memory the helper
will end up initializing; if the verifier pretended that all the
memory within the offset bounds would be initialized then further
reads could leak uninitialized stack memory.
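The helper-argument rule in (2) can be modeled the same way. Again the function and parameter names are invented for illustration (ARG_PTR_TO_UNINIT_MEM is the real argument type mentioned above, modeled here as a boolean):

```c
#include <assert.h>
#include <stdbool.h>

#define SLOTS 16

/* Toy model of the check described above: written[i] records whether
 * stack slot i was initialized before the helper call. The pointer
 * passed to the helper may point anywhere in [min_off, max_off]. */
static bool helper_arg_ok(const bool written[SLOTS],
                          int min_off, int max_off, int len,
                          bool arg_is_uninit_mem) {
    if (min_off < 0 || max_off + len > SLOTS)
        return false;                  /* out of stack bounds */
    if (min_off == max_off && arg_is_uninit_mem)
        return true;                   /* fixed offset + UNINIT_MEM:
                                          no prior init required */
    /* Variable offset (or a helper that reads its argument): every
     * slot the pointer could cover must already be initialized,
     * otherwise a later read could leak uninitialized stack memory. */
    for (int i = min_off; i < max_off + len; i++)
        if (!written[i])
            return false;
    return true;
}
```

Note how the fixed-offset case can skip the initialization check precisely because the verifier knows exactly which slots the helper will fill in and can mark them initialized afterwards; with a variable offset it cannot know, so it demands the whole range be initialized up front.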
I think your assessment above is mostly correct. For any read/write
of the stack in bpf programs, the stack offset must be known so the
verifier knows exactly what the program is trying to do. For helpers,
a variable length of stack is permitted, and the verifier will do
analysis to ensure the stack meets the memory (esp. initialized
memory) requirements stated in the helper's proto definition.
