Nicolas Pitre <nico@...>

On Sun, 19 Dec 2021, Abramo Bagnara wrote:

> On 19/12/21 02:43, Nicolas Pitre wrote:
> > On Fri, 17 Dec 2021, Abramo Bagnara wrote:
> > 
> > > Regarding your PR specifically: have you considered that if, instead
> > > of keeping on increasing buf, you increased a buffer offset, you
> > > would avoid the ugly NULL subtraction?
> > Well... I took this as a challenge and attempted it.
> > 
> > And it didn't work satisfactorily.
> > 
> > The problem is that the buf pointer accomplishes two roles: it serves
> > as a buffer offset as well as a memory boundary indicator used for
> > argument alignment. Going with a buffer base and an explicit offset
> > doesn't allow for proper alignment without also taking the buffer base
> > into account. If the buffer base is NULL then you need yet another
> > variable to hold the misalignment. This complicates the code,
> > increases register pressure, makes the resulting binary bigger, and
> > doesn't look any better in the end.
> > 
> > So my efforts resulted in the following PR instead, which I hope would
> > be an acceptable middle ground.
> As long as there is still undefined behavior in the code, I'd like you
> to resign yourself to the fact that this cannot be accepted.
No. I disagree with such a dogmatic stance.

But let's start with a few facts of life.

Did you know it is illegal to change a light bulb yourself if you live
in Victoria, Australia?

Did you know that the Queen Elizabeth hotel in Montreal is required by
law to provide a stall and food to its clients' horses?

Do you really think those laws are actively enforced?

Are you the kind of person to never drive faster than 70km/h on the
highway because that's the posted speed limit and therefore the law
despite surrounding traffic going at 115km/h? Obviously it is safer to
drive at 70km/h in theory, but in practice, some circumstances would
make it a safety hazard.

> The short summary is that you are stating that pointer arithmetic with a
> NULL pointer, despite being declared undefined behavior in all C
> standards and despite not being a documented GCC extension, is safe.
Yes. And let me be precise: with a typed pointer. I fully agree that:

long foo(int *x) { return x - NULL; }

is undefined and without sensible semantics. But, on the other hand:

long foo(int *x) { int *y = NULL; return x - y; }

is totally sensible.

> You give proofs of that by showing the generated assembly for pointer
> differences and showing that it has been translated as you expect.
> 
> You seem to have forgotten that in some obsolete compilers the offsetof
> macro was implemented in this way, and that this was changed ages ago
> because of that undefined behavior (GCC has __builtin_offsetof exactly
> for this reason). This is strong evidence that even the GCC authors do
> not rely on code generation working correctly for such a macro
> implemented with null pointer arithmetic.
I disagree: gcc authors are in a perfect position to know and ensure for
themselves that using a NULL pointer is fine. But they certainly could
have provided a wrapper just to give the C-standard orthodoxy some peace
of mind.

But let's have a look at the most influential and prevalent Open Source
project out there for a minute. Linux does this:

#undef offsetof
#ifdef __compiler_offsetof
#define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
#else
#define offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER)
#endif

And elsewhere:

#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)

So far so good. Yet we can find these:

#define typeof_member(T, m) typeof(((T*)0)->m)

#define container_of(ptr, type, member) ({                              \
        void *__mptr = (void *)(ptr);                                   \
        static_assert(__same_type(*(ptr), ((type *)0)->member) ||       \
                      __same_type(*(ptr), void),                        \
                      "pointer type mismatch in container_of()");       \
        ((type *)(__mptr - offsetof(type, member))); })

Isn't it ironic that the extra statement used to improve code
correctness is itself relying on an "undefined" behavior?
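For readers unfamiliar with the pattern, here is a minimal,
self-contained sketch of how that (type *)0 arithmetic is used in
practice. It is simplified from the kernel macros above (the type
checking is dropped), and the struct item example is hypothetical:

```c
#include <stddef.h>

struct list_node { struct list_node *next; };

struct item {
        int value;
        struct list_node node;
};

/* Simplified container_of: recover the enclosing struct from a pointer
 * to one of its members -- the same arithmetic as the kernel macro,
 * minus the static_assert type check. */
#define my_container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
```

Given a struct list_node * known to live inside a struct item,
my_container_of(n, struct item, node) recovers the original
struct item *.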

> But suppose you still don't trust me; then try to reason about similar
> cases. Let's consider signed addition, as another example of a possible
> source of undefined behavior.
> 
> If you write:
> 
> int add(int x, int y) { return x + y; }
> 
> and check the generated code on almost any contemporary target, you
> will see that it wraps.
> 
> Following the same reasoning you used above, we might deduce that it is
> harmless and that I can count on wrapping signed arithmetic.
> 
> But let's try something different:
> 
> int is_max_int(int x) {
>     return x + 1 < x;
> }
> 
> If signed arithmetic wraps on this compiler (as I've "proved" with the
> test above) then this implementation should do what I expect, i.e.
> return a truthy value for the maximum representable int.
> 
> But if you take a look at the generated code you will see that it
> always returns 0.
Perfect! And it makes sense, because producing a different result would
happen only 0.00000002% of the time. Making an addition beyond the type
range "undefined" in the standard allows compiler implementers to
perform optimizations that wouldn't be possible otherwise, due to that
0.00000002% of cases where the optimization would be wrong if we took
into account the arguably ambiguous wrapping nature of a two's
complement representation in that context.

Yet, being "undefined" doesn't mean that another compiler implementation
is forbidden from doing the full operation anyway and returning true
when provided with INT_MAX. Granted, that would be semantically
illogical.

And it can be tested in practice.
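For comparison, the same check can also be written without relying on
undefined behavior at all, by comparing against the limit directly
instead of counting on wraparound (a minimal sketch):

```c
#include <limits.h>

/* A well-defined way to express the intended check: no overflow can
 * occur, so every compiler must return 1 exactly for INT_MAX. */
int is_max_int(int x)
{
        return x == INT_MAX;
}
```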

Whereas, with:

long foo(int *x) { int *y = NULL; return x - y; }

all compilers capable of compiling Zephyr that I have access to produce
the right result, *all* the time. This is true whether or not the
pointer being NULL is known at compile time (when it is, the compiler
also performs constant-propagation optimizations on it). Something being
"undefined" does not make it necessarily "forbidden".

And it also can be tested in practice.
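For anyone who wants to try it, here is a self-contained harness around
that exact function. To be clear, the subtraction is still undefined per
the standard; the point is only that the compilers under discussion
translate it as expected:

```c
#include <stddef.h>

/* The typed-null subtraction under discussion: UB per the C standard,
 * but compiled to a plain pointer subtraction (byte difference divided
 * by sizeof(int)) by the compilers the author refers to. */
static long foo(int *x)
{
        int *y = NULL;
        return x - y;
}
```

With an int array arr, foo(arr + 3) - foo(arr) comes out as 3, and
foo(NULL) comes out as 0, matching the semantically logical reading.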

The fact that such an operation remains "undefined" by the standard is
analogous to requiring hotels to stock hay. Maybe it made sense on
antique CPU architectures where a NULL didn't match the value 0 but some
special magic value the hardware would trigger a fault with. But I
challenge you to find an architecture whose ABI doesn't define NULL as
zero today.
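That claim can even be probed directly: on the ABIs in question, the
all-zero bit pattern reads back as a null pointer. A minimal run-time
check (not guaranteed by the C standard itself, which is precisely the
point being made):

```c
#include <string.h>

/* Returns 1 if the all-zero bit pattern for a pointer compares equal
 * to NULL -- true on the contemporary ABIs discussed above. */
static int null_is_all_zero(void)
{
        void *p;
        memset(&p, 0, sizeof(p));
        return p == NULL;
}
```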

Saying that "it is impossible to be sure if not sanctioned by the
standard" is a strawman argument, like driving at 70km/h on the highway.

In practice it doesn't make sense not to support it on contemporary CPU
architectures, irrespective of the C standard, and compiler implementers
already did what's semantically logical with it. There is simply no
confusion possible when the pointer is typed, and it doesn't get in the
way of any other optimizations.

It is also impossible to use NULL today in ways that could behave
unexpectedly. As an example of that last point, let's consider this:

#include <stddef.h>
long foo(int *x) { return x - NULL; }

The result is:

In function ‘foo’:
2:29: error: invalid operands to binary - (have ‘int *’ and ‘void *’)

In that case the operation clearly has no proper semantics because the
NULL constant isn't typed, and therefore the concept of object pointer
difference cannot apply.

Now... after all is said...

I also understand that you have a task to accomplish and it is evaluated
in terms of standard violation occurrence reduction. I'm therefore
proposing this change on top of my latest patch (yes, I'm adding more

-#define BUF_OFFSET (size_t)(buf - buf0)
+#define BUF_OFFSET ((uintptr_t)buf - (uintptr_t)buf0)

This should fool the conformance scanning tool just as well as many of
the other conformance changes being proposed.
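For context, a minimal sketch of what such a macro computes (variable
names here are hypothetical; in the actual patch buf0 may be NULL during
a size-measuring pass):

```c
#include <stdint.h>

/* buf0 is the buffer base, buf the running position. Casting both to
 * uintptr_t before subtracting keeps the arithmetic on integers rather
 * than performing pointer arithmetic on a possibly-null pointer. */
#define BUF_OFFSET(buf, buf0) ((uintptr_t)(buf) - (uintptr_t)(buf0))
```

The result is identical to the pointer subtraction it replaces; only the
route taken to compute it changes.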

Would that make your life easier?

