Re: [RFC] MPU support for debugging


Piotr Mienkowski

On 14.03.2017 20:08, Boie, Andrew P wrote:
On Tue, 2017-03-14 at 17:29 +0100, Piotr Mienkowski wrote:
I would like to add one more point to the discussion. It may not be directly
related to the topic, but it should be considered when designing MPU
support.

Occasionally, mainly in the case of device drivers, MCUs that have a cache
require the use of so-called non-cacheable RAM regions, i.e. memory regions
for which caching has been turned off. Setting these up is typically the job
of the MPU/MMU, and the Zephyr MPU architecture should support it as well.
That is, as a developer I would like to be able to place a specific variable
or set of variables in a non-cacheable RAM region.
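As a sketch of what I mean (the ".nocache" section name is made up here; it
would have to be defined in the linker script and covered by an MPU region
with caching disabled at boot):

    #include <stdint.h>

    /* Hypothetical example: an Ethernet DMA descriptor ring that the
     * CPU and the DMA engine both access. If it were cached, the two
     * could see stale data, so it is placed in a non-cacheable
     * section. The ".nocache" section does not exist today.
     */
    struct dma_descriptor {
            uint32_t addr;
            uint32_t ctrl;
    };

    static struct dma_descriptor rx_ring[8]
            __attribute__((section(".nocache"), aligned(32)));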
This is a great topic to bring up. In addition to an MPU policy to protect
threads for debugging, we do need to specify a system-level policy that gets
applied at boot, even if we are not protecting individual threads.

Are you thinking that this would be something declared at the SOC level? I think
because of the size and alignment constraints of MPU regions, we may want to
configure these regions in a central area. You may be interested to look at
Vincenzo's patches, which define MPU regions for a few ARM SOCs at boot:

https://gerrit.zephyrproject.org/r/#/q/topic:zephyr_mpu
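The rough shape is a fixed table of regions that SOC code programs into the
MPU once at early boot, something like this (the structure and attribute
names here are illustrative only, not necessarily what the patches use):

    /* Illustrative sketch only -- see the patches above for the real
     * definitions. SOC-level code supplies a fixed region table that
     * is programmed into the MPU during early boot.
     */
    struct mpu_region {
            uint32_t base;  /* must meet the MPU alignment rules    */
            uint32_t size;  /* power-of-two size on ARMv7-M         */
            uint32_t attr;  /* access permissions and cache policy  */
    };

    static const struct mpu_region soc_mpu_regions[] = {
            /* REGION_FLASH_ATTR / REGION_RAM_ATTR are made-up names */
            { 0x00000000, 512 * 1024, REGION_FLASH_ATTR },
            { 0x20000000,  64 * 1024, REGION_RAM_ATTR   },
    };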
I was indeed thinking that the non-cacheable RAM region would be declared at the SoC level. It's simple and efficient. However, it is probably not compatible with the model where per-thread memory protection is a security feature.

Thanks for the link. That looks interesting indeed, though to support a non-cacheable RAM region we would also need to modify the linker script. It would probably be best to do that at the same time as, or after, we touch the linker script to add support for the other features we are discussing here.
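Roughly, I would expect a linker script fragment along these lines (all names
are made up, and the alignment requirement depends on the MPU; on ARMv7-M a
region must be aligned to its size):

    /* Sketch of a non-cacheable RAM output section. An MPU region
     * with caching disabled would be set up at boot to cover
     * _nocache_ram_start .. _nocache_ram_end.
     */
    .nocache (NOLOAD) :
    {
            . = ALIGN(1024);
            _nocache_ram_start = .;
            *(.nocache)
            *(.nocache.*)
            . = ALIGN(1024);
            _nocache_ram_end = .;
    } > SRAM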
Looking at this another way, maybe we need to consider different levels of
memory protection support, each building on top of the previous level. What
level any given board target supports will be determined by the available memory
protection hardware and its capabilities, as well as how much extra RAM we can
waste to accommodate the alignment constraints of the MPU hardware:

1) No memory protection

2) System-wide memory protection policy, set at boot by board or SOC code.

3) Per-thread stack overflow protection. We configure the MPU, on a per-thread
basis, to trigger an exception if the thread tries to write past its available
stack space. I think this should only require one extra region: a sentinel
area immediately before the thread's stack to catch writes, with the struct
k_thread stored elsewhere. I think this is something simple we can do that will
make a lot of people happy, given how painful stack overflows are to debug when
you don't know they are happening.

4) Per-thread memory protection. User threads can only write to their own stack
plus additional runtime-configurable memory regions. They use system calls to
interact with the kernel, whose memory is otherwise untouchable. This is
basically what we have been talking about so far.

5) Virtualized memory (MMU). The application and kernel run in different
virtualized memory spaces. This introduces the possibility of distinct,
isolated Zephyr processes.
That all sounds very reasonable. Each level of memory protection support builds on the effort spent and the experience gained at the previous one. The only danger with having multiple protection levels is that the scheme may become opaque to the end user.
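For level 3, my mental model is something like the following (ARMv7-M MPU and
CMSIS register names; the helper itself and the region number are made up, and
it would be called from the context switch path with the incoming thread's
stack bottom):

    /* Made-up helper: turn one MPU region into a 32-byte no-access
     * sentinel just below the thread's stack, so a write past the
     * stack bottom faults immediately instead of silently corrupting
     * whatever lives there. Assumes the CMSIS core header for the
     * SoC is already included.
     */
    #define GUARD_SIZE_BITS  4U   /* 2^(4+1) = 32 bytes, the ARMv7-M minimum */
    #define GUARD_REGION_NR  7U   /* region number chosen arbitrarily */

    static void mpu_set_stack_guard(uint32_t stack_bottom)
    {
            MPU->RNR  = GUARD_REGION_NR;
            /* Base address must be aligned to the region size. */
            MPU->RBAR = stack_bottom - 32U;
            /* AP = 0 (no access for anyone), region enabled. */
            MPU->RASR = (GUARD_SIZE_BITS << MPU_RASR_SIZE_Pos) |
                        MPU_RASR_ENABLE_Msk;
    }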

I have a question about the per-thread memory protection model. Maybe the answer is obvious. What about data passed between threads, such as data buffers passed through FIFOs? E.g. our networking stack supports a zero-copy mechanism: a pointer to a data buffer that was filled in by a data link layer driver (typically running in interrupt context) is passed to the RX thread, a user application thread, and maybe the TX thread. Are such data buffers going to live in a memory region that is accessible to all?
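To make the question concrete, the pattern I have in mind is roughly this
(heavily simplified; the real code uses net_buf and the driver's own ISR):

    #include <kernel.h>

    /* Simplified illustration of the zero-copy hand-off. The ISR and
     * the RX thread dereference the very same buffer, so per-thread
     * protection has to leave it accessible to both.
     */
    struct rx_buf {
            void *fifo_reserved;  /* first word reserved for the kernel */
            uint8_t data[128];
            uint16_t len;
    };

    K_FIFO_DEFINE(rx_fifo);

    /* Data link layer driver, running in interrupt context. */
    void eth_rx_isr(struct rx_buf *buf)
    {
            /* buf->data has just been filled in by the hardware. */
            k_fifo_put(&rx_fifo, buf);
    }

    /* RX thread: consumes the same memory, no copy involved. */
    void rx_thread(void)
    {
            for (;;) {
                    struct rx_buf *buf = k_fifo_get(&rx_fifo, K_FOREVER);
                    /* ... parse buf->data, then pass buf on ... */
            }
    }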

Regards,
Piotr
