Abramo Bagnara

Who in their right mind would believe that sprinkling typecasts around
makes the code any safer or more understandable? And that's not even
mentioning the _double_ typecasts being added too!!! We should instead
aim for cast _reduction_ to let the compiler warn more about type
mismatches and declutter the code, not add new casts!

Note: Sometimes double casts are necessary, but we do hide them behind
special accessors.
Specifically, double casts are needed in two situations:

1) A char argument to the ctype predicates should first be converted to
unsigned char (as the C standard requires) and then converted to
int to avoid mixing signed and unsigned. That said, nothing
prevents us from using helpers that accept a char argument, or from
embedding such double casts in, e.g.,

#define CTYPE_CHAR(c) ((int)(unsigned char)(c))

2) A narrower computation is assigned to a wider or differently
signed result, e.g.,

u64 = u32_a - u32_b;

In this case we should write

u64 = (uint64_t)(uint32_t)(u32_a - u32_b);

This is used to make it explicit that (1) the type in which the
difference should be computed is uint32_t and (2) the (possibly
wrapped) result should be converted to the destination type.

Without that the risk is writing something like, e.g.,

u64 = u32_a + u32_b;

without realizing that an unwanted wrapping may happen.
Of course, if we want an unsigned 64-bit addition, we should write

u64 = (uint64_t)u32_a + u32_b;

Again, it is possible to encapsulate the cast used to express the
"deliberately done" property in a macro.

Another possibility to avoid the double cast and remain MISRA
compliant is to use an intermediate variable:

u32_c = u32_a - u32_b;
u64 = u32_c;

but I find that is not so nice in some situations and it does not fit
well in macros.

Why do we have to make all loop index vars unsigned? It is a very common
idiom to test for "i < 0" in loops. Making some signed and some unsigned
is just adding to the inconsistency for no gain.

Why do we need to test against an unsigned zero everywhere? Does anyone
know about a compiler implementation where a signed 0 vs an unsigned
0U makes any difference?
s32 < 0U and s32 < 0 make a *lot* of difference: this is one of the
reasons why MISRA insists so much on not mixing signed and unsigned.

And what is this obsession with qualifying every literal value anyway!
When signed and unsigned expressions are combined, the result is
converted to unsigned already. So I ask you frankly which of the
following is the safest and most intuitive:

#define FOO 5
#define BAR 5U

[ ... code far away ... ]

bool foo(int x) { return (x < FOO); }
bool bar(int x) { return (x < BAR); }

Notice how foo(-1) and bar(-1) don't give the same answer. Try it if you
don't believe me.
This is exactly the reason why signed and unsigned should not be mixed.

The mistakes above would be marked as non-compliant by any MISRA
checker and would not go unnoticed (differently from the situation
where such mixing is spread in the code).


bool foo(unsigned int x) { return (x < FOO); }
bool bar(unsigned int x) { return (x < BAR); }

Here foo(UINT_MAX) and bar(UINT_MAX) both give the proper result.

So qualifying literals everywhere when it is not absolutely required is
actually evil and opens the possibility for *more* bugs not less.
We might consider deviating for signed constants whose value is
unchanged by promotion and the usual arithmetic conversions. Would this
be OK with you?

And why casting pointer difference to a size_t? The official type for a
pointer difference is ptrdiff_t and the standard offers no explicit
guarantee that sizeof(ptrdiff_t) equals sizeof(size_t). People who live
by the standard will tell you that size_t _could_ be smaller and
therefore silently truncate the result.
Actually, a standard C implementation must ensure that the maximum
(positive) value of ptrdiff_t is representable as a size_t.

What's the point of casting to a char before assigning to a char when
the cast is implicit already?
Implicit casts can (and very often do) go unnoticed. This is the very
reason MISRA asks to make them explicit for the sake of code readability.

[Off topic: IMHO we would also see great benefits from introducing
BARR-C:2018 Rule 1.6.a, which asks that *all* non-value-preserving
casts be accompanied by an explanatory comment.]

And a real kicker is this change:

- * @return 0 on success, nonzero on failure
+ * @return false on success, true on failure

Who really thinks that the above is idiomatic and an improvement in
code understandability? It is not that MISRA requires it in this
particular case, but it certainly skewed the mindset of the person who
made that change.
Here I really don't follow you: if the function was *already* returning
a Boolean, why is this change troublesome?

Said that, if the intention was *not* to return a Boolean, then the
signature should not have been changed and the returned value should
have been computed in a different way. The author of this change had
no way to deduce the original intention with a 100% chance of success.
What is needed is simply urbane feedback of the form: "this function
signature deliberately returns an integer for reason X, please amend
the MISRA compliance changes taking that into account."

Many embedded libraries and SDKs out there already maintain a MISRA
compliance exception list. I think this would be a sane thing to do and
that it wouldn't be a first.
This is definitely the intention: code quality is *always* the greater
goal.

Nonetheless I think that the right mindset is that missing adherence
to MISRA rules should be justified carefully and not the opposite
(i.e., by only following the rules that change nothing or that we are
used to).

This will help the whole Zephyr community: the part that needs
qualifiable/certifiable code and the part that just needs better/safer
code.
Abramo Bagnara

