Re: Zephyr DFU protocol

David Brown
 

On Tue, Aug 29, 2017 at 09:14:31AM +0000, Cufi, Carles wrote:

One other protocol I just realized is already out there is lwm2m.
There is starting to be some support for it in Zephyr, it works over
other transports, supports device management, and has support for
firmware update.
I just read through the highlights of the spec and indeed this
matches relatively closely the concept we are trying to push here
with a "management" protocol. After looking through it a bit here are
the problems I see:

a) Complexity: Reading through the specification[i], this looks
like a pretty complex protocol to me, which in many cases might be a
drawback for users wanting to reduce ROM and RAM size. This is
particularly important for very constrained devices that only need to
send some sensor data over BLE, for example.
I had a conversation with Sterling Hughes yesterday, and he explained
that this was pretty much the primary reason for developing the newt
manager protocol instead of just using lwm2m.

b) Suitability for other transports: The specification defines 2 main
transports: UDP and SMS. While adapting this to other transports
would likely be feasible, the protocol doesn't look designed for it.
c) Model: the protocol seems to rest on the basis of a "pull" model,
where clients are the target devices. For the reasons stated before,
this might not be suitable for simple UART, BLE or USB CDC ACM
usecases.
This is the other main reason for its infeasibility.

That said, the protocol does match the Newt Manager Protocol quite
closely when it comes to supported functionality and purpose. My vote
here would be to have support for both, because I do not think
running LWM2M over UART or BLE is a good match for tiny constrained
applications that only require simple firmware updates.
Agreed. I think that lwm2m is going to end up needing to be
implemented because there will be environments that will require that
specific protocol. But, we will want something like newtmgr for other
cases, and situations where less code is desired.

It is also possible for newtmgr to be layered differently, depending
on the situation. For serial, it can either be used directly, or in a
console-friendly manner (with escape characters and base-64 encoding).
It is possible to leave minicom or picocom running, and have newtmgr
connect to the serial port to exchange packets.
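
For reference, here is a sketch of that framing as I read it from the
mynewt shell/nlip sources; the marker values are my recollection of the
code, so treat them as an assumption rather than a spec:

/* A packet is sent as one or more text lines:
 *
 *   <0x06 0x09> base64(be16 total_len | data | be16 crc16) '\n'  first frame
 *   <0x04 0x14> base64(continued data) '\n'                      follow-ups
 *
 * Because the body is printable base64 bracketed by control bytes that
 * are rare in console traffic, the frames pass through an interactive
 * minicom/picocom session unharmed.
 */
#define NLIP_PKT_START1  0x06
#define NLIP_PKT_START2  0x09
#define NLIP_DATA_START1 0x04
#define NLIP_DATA_START2 0x14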

On BLE, it can be transported directly over GATT.

And for network interfaces, layering it over CoAP or CoAPs makes
sense.

David


Re: Zephyr DFU protocol

Richard Peters <mail@...>
 

Hi,

I am looking for a solution like this and just want to contribute my
requirements.

I would like to use Zephyr with an external bootloader like the Nordic
DFU, where I can update firmware via Bluetooth.
Unfortunately I doubt this is easy to achieve, due to the way the Nordic DFU bootloader expects the SoftDevice to be present in flash, something that is not the case when using Zephyr. The Nordic DFU procedure is also closely tied to the image format of the Nordic SDK (and SoftDevice).
However, Zephyr is indeed compatible with a bootloader, mcuboot, and we are currently discussing adding DFU support (over BLE and other transports) to Zephyr in another thread on this mailing list. You are welcome to contribute to that thread with your requirements and comments, and as soon as we've chosen a protocol we'll start working towards implementing the DFU procedure.
My devices are in a BLE mesh network with no direct internet
connectivity to the outer world.
The user can connect with a smartphone or tablet to one of the devices
in the mesh over BLE.
There is an App, which downloads the latest firmware for the devices to
the smartphone.

A firmware update will be transferred via BLE to the connected device
and then spread to all devices in the mesh that need this update.

I think there are two possible ways to achieve this:

1.) The update gets transferred to the target devices via Bluetooth.
This happens in the Zephyr application, and the image gets stored in a
filesystem (on internal or external flash memory). The bootloader
performs the update from the filesystem after a reboot.

2.) The bootloader starts and receives the firmware (on the fly) from
the next device in the mesh network (which is running Zephyr, too).

The whole process should be optimized for memory usage.

Regards,
Richard


Re: Using Zephyr with DFU bootloader from Nordic SDK

Carles Cufi
 

Hi Richard,

-----Original Message-----
From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-
bounces@lists.zephyrproject.org] On Behalf Of Richard Peters
Sent: 29 August 2017 14:04
To: zephyr-devel@lists.zephyrproject.org
Subject: [Zephyr-devel] Using Zephyr with DFU bootloader from Nordic SDK

Hi Community,

I would like to use Zephyr with an external bootloader like the Nordic
DFU, where I can update firmware via Bluetooth.

There is no documentation in Zephyr on how to achieve this.
Do I need some special config options?
Can I just build a Zephyr image and flash it via this DFU bootloader?
Unfortunately I doubt this is easy to achieve, due to the way the Nordic DFU bootloader expects the SoftDevice to be present in flash, something that is not the case when using Zephyr. The Nordic DFU procedure is also closely tied to the image format of the Nordic SDK (and SoftDevice).
However, Zephyr is indeed compatible with a bootloader, mcuboot, and we are currently discussing adding DFU support (over BLE and other transports) to Zephyr in another thread on this mailing list. You are welcome to contribute to that thread with your requirements and comments, and as soon as we've chosen a protocol we'll start working towards implementing the DFU procedure.

Regards,

Carles


Using Zephyr with DFU bootloader from Nordic SDK

Richard Peters <mail@...>
 

Hi Community,

I would like to use Zephyr with an external bootloader like the Nordic
DFU, where I can update firmware via Bluetooth.

There is no documentation in Zephyr on how to achieve this.
Do I need some special config options?
Can I just build a Zephyr image and flash it via this DFU bootloader?

Thanks for sharing your experience!
Richard


Re: bitfields

Paul Sokolovsky
 

Hello,

On Mon, 28 Aug 2017 12:42:50 -0700
Andy Ross <andrew.j.ross@intel.com> wrote:

[]

I wouldn't view this as a critical danger for a single-arch-specific
driver that will only ever be built by gcc, but for anything with
portability needs beyond that we should probably be writing proper C.
Well, in the parallel thread we are discussing converting from Kbuild to
CMake, partly because people want to use various "non-default"
compilers. So, I agree this is not "critical", but it should definitely
be captured as a user story for a complete fix, at the latest for the
1.11 release IMHO.

P.S. The latest screw-up from the C standard committee I've hit:
variable-length arrays aka VLAs, a pretty neat feature of C99, made
optional in C11. But why? Turns out, C99 doesn't specify how storage
for them should be allocated, and some vendors of "various different
compilers" managed to implement the allocation of clearly automatic
variables using malloc(). Users wailed; then, for C11, the same vendors
threw up their hands: "we can't implement VLAs in our various different
compilers properly", and the feature was made optional.


--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: bitfields

Carles Cufi
 

Hi Andrew,

-----Original Message-----
From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-
bounces@lists.zephyrproject.org] On Behalf Of Boie, Andrew P
Sent: 28 August 2017 18:52
To: Piotr Mienkowski <piotr.mienkowski@gmail.com>; zephyr-
devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] bitfields

I believe you are talking about a solution used currently by Zephyr's
I2C API.
There we have:

union dev_config {
        u32_t raw;
        struct __bits {
                u32_t use_10_bit_addr : 1;
                u32_t speed : 3;
                u32_t is_master_device : 1;
                u32_t reserved : 26;
        } bits;
};

This is however incorrect. C99 §6.7.2.1, paragraph 10 says: "The order
of allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined." I.e. - using
union dev_config as an example - the compiler is free to map
use_10_bit_addr either to the MSB or to the LSB. The two methods of
specifying bit fields are not equivalent and should not be mixed.

I think we have a fair number of drivers that define structs that use
the above technique, to have fields in the struct which correspond to
particular bits in a register.

Or see the data structures like in

arch/x86/include/mmustructs.h
include/arch/x86/segmentation.h

which have complex data structures to define segment descriptors and
page table entries.

You are saying that these are all incorrect? Should we change them?
GCC, at least, seems to always do low-order to high-order.
We have exactly the same in all of our Link Layer definitions in Bluetooth, which all use bitfields for the packet structures (and which must obviously be endianness-independent):
https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/controller/ll_sw/pdu.h#L344

Coincidentally, while looking into a firmware update protocol, I saw that the Newt Manager Protocol does indeed compile the bitfields differently based on endianness:

https://github.com/apache/mynewt-core/blob/master/mgmt/mgmt/include/mgmt/mgmt.h#L72
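
For readers without the sources at hand, here is that 8-byte header as I
read it from the file linked above; I have paraphrased the endianness
guard (Mynewt uses its own configuration macro), so take the exact
spelling as an assumption:

#include <stdint.h>

/* The 3-bit opcode is declared in opposite order depending on the
 * host's endianness so that it always lands in the low bits of the
 * first byte on the wire. The multi-byte fields are big-endian.
 */
struct nmgr_hdr {
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        uint8_t  nh_op:3;       /* read/write request or response */
        uint8_t  _res1:5;
#else
        uint8_t  _res1:5;
        uint8_t  nh_op:3;       /* read/write request or response */
#endif
        uint8_t  nh_flags;      /* reserved for future use */
        uint16_t nh_len;        /* length of the CBOR payload */
        uint16_t nh_group;      /* command group (image, fs, stats, ...) */
        uint8_t  nh_seq;        /* sequence number */
        uint8_t  nh_id;         /* command ID within the group */
};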

Regards,

Carles


Re: Zephyr DFU protocol

Carles Cufi
 

Hi David,

-----Original Message-----
From: David Brown [mailto:david.brown@linaro.org]
Sent: 28 August 2017 17:59
To: Cufi, Carles <Carles.Cufi@nordicsemi.no>
Cc: zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Zephyr DFU protocol

On Mon, Aug 28, 2017 at 02:43:27PM +0000, Cufi, Carles wrote:

As I already stated, this is not about choosing the only protocol
available for updating images in the target device, so implementing
standard USB DFU is definitely something that we want as well. That
said, I would also be in favour of having the future management
protocol run over CDC ACM, for 2 reasons:
One other protocol I just realized is already out there is lwm2m.
There is starting to be some support for it in Zephyr, it works over
other transports, supports device management, and has support for
firmware update.

The Eclipse Foundation has a couple of implementations (Wakaama in C,
and Leshan in Java).
I just read through the highlights of the spec and indeed this matches relatively closely the concept we are trying to push here with a "management" protocol. After looking through it a bit, here are the problems I see:

a) Complexity: Reading through the specification[i], this looks like a pretty complex protocol to me, which in many cases might be a drawback for users wanting to reduce ROM and RAM size. This is particularly important for very constrained devices that only need to send some sensor data over BLE, for example.
b) Suitability for other transports: The specification defines 2 main transports: UDP and SMS. While adapting this to other transports would likely be feasible, the protocol doesn't look designed for it.
c) Model: the protocol seems to rest on the basis of a "pull" model, where clients are the target devices. For the reasons stated before, this might not be suitable for simple UART, BLE or USB CDC ACM usecases.

That said, the protocol does match the Newt Manager Protocol quite closely when it comes to supported functionality and purpose. My vote here would be to have support for both, because I do not think running LWM2M over UART or BLE is a good match for tiny constrained applications that only require simple firmware updates.


The approach that was demoed at the last Linaro Connect was
pull-based, and essentially had the firmware living on an HTTP server.
It had the advantage of being fairly easy to implement with existing
code in Zephyr.
The problem with using a pull-based protocol is that it is less
portable to non-TCP/IP transports, as I see it, since it requires the
target device to initiate the transaction. Are you implying that you'd
prefer the Zephyr "management" protocol to be pull-based? We're
definitely open to discussing that at length. Also see the section about
the Newt Management Protocol over OIC/OCF, which implements "push"
over TCP/IP, which could be an alternative for the Linaro usecase.
I agree that the pull approach isn't really all that great, but it was
easy to implement. It does make management of the upgrade server a
little easier, though. I'm not sure how well a push-based protocol
scales to a large number of devices.
I honestly have no idea whether the push model would scale for large deployments, but this brings up an interesting question, which is what sort of device we are targeting here:

1) The simple device which is never connected to the internet directly and does not even have a TCP/IP stack, but rather only a GATT-based BLE connection to a mobile phone, tablet or computer. Among those there are mice, wearables, sensors and monitors, etc.
2) The slightly more complex device with an (almost) always-on TCP/IP connection to the outside world, perhaps over 15.4, Thread, BLE over IPSP or any other technology

I think the Newt Manager Protocol was designed for devices closer to the 1) model. LWM2M and similar protocols rather target 2). Those are quite different in nature, because 1) requires something to specifically connect to the device and send it an image for that purpose (say, a sports band such as a Fitbit or similar), whereas devices of the 2) kind can keep polling regularly to determine whether a firmware update is available. I do not think we can realistically cover both with a single protocol unless we "force" one of the 2 models to work in the other circumstances.


Any idea how Android or iOS handle this? I would guess that both are
pull based, since that would otherwise require the vendor to have a
server that keeps track of every device out there.
Not sure about Android, but I am pretty sure that iOS devices keep a TCP connection permanently open to an Apple server, through which all "push" notifications are sent, be it software updates or messaging notifications.


I agree that the newtmgr protocol seems to be the best fit for us. Its
serial model would even fit in fairly well with the Zephyr shell,
since it wraps the packets with a control-character + base-64 packet
+ control character, which the shell seems to have partial support
for already.

It does; I didn't mention it to avoid extending myself too much, but
I find it a very nice feature as well.
One other thing we should consider is the security of the upgrade
protocol. Mcuboot has signatures to validate images, so that would
prevent rogue upgrades, but if we have a management protocol, that
should probably also be secured via some means.

It is likely that something like lwm2m is going to be implemented for
Zephyr (code is there, and work seems to be happening on it), so we
should decide if we want to push the newt management protocol as well.
Agreed, and it's not an easy call. The two options I see after your remarks and looking a little bit more into LWM2M are:

- Use LWM2M for everything, including DTLS for security and adapt it somehow to the simple "push" model
- Use LWM2M for the "pull" model, and then Newt Manager Protocol for the simple "push" one, with security in the latter being provided by the transport itself (SMP in BLE, and the simple fact that you need to manipulate the device physically for UART).

While having one single protocol would definitely be a boon, I am not sure LWM2M will fit the bill in terms of RAM and ROM requirements, and we still need something for the UART recovery mode in the bootloader, which will probably end up being the Newt Manager Protocol since I don't think we can fit LWM2M into a bootloader.

Additional thoughts welcome.

[i] http://www.openmobilealliance.org/release/LightweightM2M/V1_0-20170208-A/OMA-TS-LightweightM2M-V1_0-20170208-A.pdf


Re: bitfields

Puzdrowski, Andrzej
 

I believe you are talking about a solution used currently by Zephyr's I2C API.
There we have:

union dev_config {
        u32_t raw;
        struct __bits {
                u32_t use_10_bit_addr : 1;
                u32_t speed : 3;
                u32_t is_master_device : 1;
                u32_t reserved : 26;
        } bits;
};

This is however incorrect. C99 §6.7.2.1, paragraph 10 says: "The order
of allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined." I.e. - using
union dev_config as an example - the compiler is free to map
use_10_bit_addr either to the MSB or to the LSB. The two methods of
specifying bit fields are not equivalent and should not be mixed.
I think we have a fair number of drivers that define structs that use the above technique, to have fields in the struct which correspond to particular bits in a register.

Yeah, this is sorta wrong. GCC[1] is AFAIK consistent and sane within a single architecture, but the packing order varies between endianness conventions and officially they document this (section 4.9 in the texinfo output I looked at) as being defined by the relevant platform ABI. So officially they punt too.

I wouldn't view this as a critical danger for a single-arch-specific driver that will only ever be built by gcc, but for anything with portability needs beyond that we should probably be writing proper C.

Andy

[1] And presumably clang, though I didn't check.

It is still possible to assert the bit-field implementation (a static assert). Then, in case of incompatibilities, we will get an early porting alarm. This could even be done once per SoC.
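
A minimal sketch of that idea, assuming a boot-time check rather than a
true static assert (a standard compile-time assertion cannot observe
bit-field placement, since that requires type-punning through the union):

#include <assert.h>
#include <stdint.h>

union dev_config {
        uint32_t raw;
        struct {
                uint32_t use_10_bit_addr : 1;
                uint32_t speed : 3;
                uint32_t is_master_device : 1;
                uint32_t reserved : 26;
        } bits;
};

/* If the compiler allocates bit-fields low-order first (the layout the
 * raw/bits aliasing assumes), setting the first field must set bit 0.
 * Run once at init; a failure is an early porting alarm.
 */
static void assert_bitfield_order(void)
{
        union dev_config cfg = { .raw = 0 };

        cfg.bits.use_10_bit_addr = 1;
        assert(cfg.raw == UINT32_C(1));
}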

Andrzej


Re: bitfields

Andy Ross
 

Boie, Andrew P wrote:
I think we have a fair number of drivers that define structs that
use the above technique, to have fields in the struct which
correspond to particular bits in a register.

[...]

You are saying that these are all incorrect? Should we change them?
GCC, at least, seems to always do low-order to high-order.
Yeah, this is sorta wrong. GCC[1] is AFAIK consistent and sane within
a single architecture, but the packing order varies between endianness
conventions and officially they document this (section 4.9 in the
texinfo output I looked at) as being defined by the relevant platform
ABI. So officially they punt too.

I wouldn't view this as a critical danger for a single-arch-specific
driver that will only ever be built by gcc, but for anything with
portability needs beyond that we should probably be writing proper C.

Andy

[1] And presumably clang, though I didn't check.


Re: bitfields

Boie, Andrew P
 

I believe you are talking about a solution used currently by Zephyr's I2C API.
There we have:

union dev_config {
        u32_t raw;
        struct __bits {
                u32_t use_10_bit_addr : 1;
                u32_t speed : 3;
                u32_t is_master_device : 1;
                u32_t reserved : 26;
        } bits;
};

This is however incorrect. C99 §6.7.2.1, paragraph 10 says: "The order
of allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined." I.e. - using
union dev_config as an example - the compiler is free to map
use_10_bit_addr either to the MSB or to the LSB. The two methods of
specifying bit fields are not equivalent and should not be mixed.
I think we have a fair number of drivers that define structs that use the above technique, to have fields in the struct which correspond to particular bits in a register.

Or see the data structures like in

arch/x86/include/mmustructs.h
include/arch/x86/segmentation.h

which have complex data structures to define segment descriptors and page table entries.

You are saying that these are all incorrect? Should we change them?
GCC, at least, seems to always do low-order to high-order.

Andrew


Re: Zephyr DFU protocol

David Brown
 

On Mon, Aug 28, 2017 at 02:43:27PM +0000, Cufi, Carles wrote:

As I already stated, this is not about choosing the only protocol
available for updating images in the target device, so implementing
standard USB DFU is definitely something that we want as well. That
said, I would also be in favour of having the future management
protocol run over CDC ACM, for 2 reasons:
One other protocol I just realized is already out there is lwm2m.
There is starting to be some support for it in Zephyr, it works over
other transports, supports device management, and has support for
firmware update.

The Eclipse Foundation has a couple of implementations (Wakaama in C,
and Leshan in Java).

The approach that was demoed at the last Linaro Connect was pull-based,
and essentially had the firmware living on an HTTP server. It had the
advantage of being fairly easy to implement with existing code in
Zephyr.
The problem with using a pull-based protocol is that it is less
portable to non-TCP/IP transports, as I see it, since it requires the
target device to initiate the transaction. Are you implying that
you'd prefer the Zephyr "management" protocol to be pull-based? We're
definitely open to discussing that at length. Also see the section about
the Newt Management Protocol over OIC/OCF, which implements "push"
over TCP/IP, which could be an alternative for the Linaro usecase.
I agree that the pull approach isn't really all that great, but it was
easy to implement. It does make management of the upgrade server a
little easier, though. I'm not sure how well a push-based protocol
scales to a large number of devices.

Any idea how Android or iOS handle this? I would guess that both are
pull based, since that would otherwise require the vendor to have a
server that keeps track of every device out there.

I agree that the newtmgr protocol seems to be the best fit for us. Its
serial model would even fit in fairly well with the Zephyr shell, since
it wraps the packets with a control-character + base-64 packet + control
character, which the shell seems to have partial support for already.
It does; I didn't mention it to avoid extending myself too much,
but I find it a very nice feature as well.
One other thing we should consider is the security of the upgrade
protocol. Mcuboot has signatures to validate images, so that would
prevent rogue upgrades, but if we have a management protocol, that
should probably also be secured via some means.

It is likely that something like lwm2m is going to be implemented for
Zephyr (code is there, and work seems to be happening on it), so we
should decide if we want to push the newt management protocol as well.

David


Re: Are device trees used in the STM32?

Andy Gross
 

I believe the answer is that the transition to using the DTS will occur when the structure generation support is solidified.

If you look at the outdir/<STM board>/include/generated/ directory you will find the generated DTS include files.  The
contents of those are what is available for use right now, which is just the #define information.
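
As an illustration, the generated header contains entries along these
lines (a hypothetical excerpt; the exact macro names are derived from
the DTS node addresses and may differ):

/* outdir/<board>/include/generated/ excerpt, illustrative only */
#define ST_STM32_USART_40004800_BASE_ADDRESS    0x40004800
#define ST_STM32_USART_40004800_CURRENT_SPEED   115200
#define ST_STM32_USART_40004800_LABEL           "UART_3"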


On 28 August 2017 at 09:53, massimiliano cialdi <massimiliano.cialdi@powersoft.it> wrote:
Surfing the source tree, I found dts/arm/st/stm32f412-pinctrl.dtsi

usart3_pins_a: usart3@0 {
        rx_tx {
                rx = <STM32_PIN_PB11 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
                tx = <STM32_PIN_PB10 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
        };
};
usart3_pins_b: usart3@1 {
        rx_tx {
                rx = <STM32_PIN_PD9 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
                tx = <STM32_PIN_PD8 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
        };
};

then dts/arm/nucleo_f412zg.dts

/ {
        model = "STMicroelectronics STM32F412ZG-NUCLEO board";
        compatible = "st,stm32f412zg-nucleo", "st,stm32f412";

        chosen {
                zephyr,console = &usart3;
                zephyr,sram = &sram0;
                zephyr,flash = &flash0;
        };
};

&usart3 {
        current-speed = <115200>;
        pinctrl-0 = <&usart3_pins_b>;
        pinctrl-names = "default";
        status = "ok";
};


and finally drivers/pinmux/stm32/pinmux_board_nucleo_f412zg.c where the pinmux settings are repeated:

/* pin assignments for NUCLEO-F412ZG board */
static const struct pin_config pinconf[] = {
#ifdef CONFIG_UART_STM32_PORT_3
        {STM32_PIN_PD8, STM32F4_PINMUX_FUNC_PD8_USART3_TX},
        {STM32_PIN_PD9, STM32F4_PINMUX_FUNC_PD9_USART3_RX},
#endif /* #ifdef CONFIG_UART_STM32_PORT_3 */
#ifdef CONFIG_PWM_STM32_2
        {STM32_PIN_PA0, STM32F4_PINMUX_FUNC_PA0_PWM2_CH1},
#endif /* CONFIG_PWM_STM32_2 */
};


so I wonder why the pinmux information is duplicated.
In particular, some defines are generated from the DTS files; I wonder whether these defines are used in the source code.
I suspect that the DTS files are currently unused.

Best regards,
Max


Are device trees used in the STM32?

Massimiliano Cialdi
 

Surfing the source tree, I found dts/arm/st/stm32f412-pinctrl.dtsi

usart3_pins_a: usart3@0 {
        rx_tx {
                rx = <STM32_PIN_PB11 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
                tx = <STM32_PIN_PB10 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
        };
};
usart3_pins_b: usart3@1 {
        rx_tx {
                rx = <STM32_PIN_PD9 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
                tx = <STM32_PIN_PD8 (STM32_PINMUX_ALT_FUNC_7 | STM32_PUSHPULL_PULLUP)>;
        };
};

then dts/arm/nucleo_f412zg.dts

/ {
        model = "STMicroelectronics STM32F412ZG-NUCLEO board";
        compatible = "st,stm32f412zg-nucleo", "st,stm32f412";

        chosen {
                zephyr,console = &usart3;
                zephyr,sram = &sram0;
                zephyr,flash = &flash0;
        };
};

&usart3 {
        current-speed = <115200>;
        pinctrl-0 = <&usart3_pins_b>;
        pinctrl-names = "default";
        status = "ok";
};


and finally drivers/pinmux/stm32/pinmux_board_nucleo_f412zg.c where the pinmux settings are repeated:

/* pin assignments for NUCLEO-F412ZG board */
static const struct pin_config pinconf[] = {
#ifdef CONFIG_UART_STM32_PORT_3
        {STM32_PIN_PD8, STM32F4_PINMUX_FUNC_PD8_USART3_TX},
        {STM32_PIN_PD9, STM32F4_PINMUX_FUNC_PD9_USART3_RX},
#endif /* #ifdef CONFIG_UART_STM32_PORT_3 */
#ifdef CONFIG_PWM_STM32_2
        {STM32_PIN_PA0, STM32F4_PINMUX_FUNC_PA0_PWM2_CH1},
#endif /* CONFIG_PWM_STM32_2 */
};


so I wonder why the pinmux information is duplicated.
In particular, some defines are generated from the DTS files; I wonder whether these defines are used in the source code.
I suspect that the DTS files are currently unused.

Best regards,
Max


Re: Zephyr DFU protocol

Carles Cufi
 

Hi David,

Thanks for the feedback.

-----Original Message-----
From: David Brown [mailto:david.brown@linaro.org]
Sent: 28 August 2017 16:22
To: Cufi, Carles <Carles.Cufi@nordicsemi.no>
Cc: zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Zephyr DFU protocol

On Mon, Aug 28, 2017 at 12:45:37PM +0000, Cufi, Carles wrote:

As you might already know, we've been working on the introduction of
DFU (Device Firmware Upgrade) to Zephyr. Several Pull Requests have
been posted dealing with the low-level flash and image access modules
required to store a received image and then boot into it, but that
leaves out one of the key items in the system: the update protocol that
allows an existing running image to obtain an updated one over a
transport mechanism.
My first suggestion. Unless we are strictly implementing the USB DFU
protocol, we really should call this something else. DFU is defined by
USB standards, and is a very specific protocol with a very specific
purpose. If what we're looking for is something general across other
transports, we should give it a different name to avoid confusion.
I have no problem changing the name from "DFU protocol" to something else. In fact, the one we recommend is called a "Management Protocol" because it does much more than just DFU. If people agree with reusing that moniker, then we could go with that. Which reminds me that I've spoken to Mynewt developers and they don't have anything against renaming "Newt Manager Protocol" to something less tied to Mynewt. Perhaps something akin to mcuboot would be in order, like "mcumgmt" or similar?


There are several fundamental requirements for such a protocol if we
want it to be future-proof, extensible and practical for embedded
devices:

- Must be packet-based and transport-agnostic
Although this makes sense for non-USB, it also precludes using existing
tools for the update when we do have USB as our transport.

My suggestion would be to support DFU for USB, and devise another
protocol for the other transports.
As I already stated, this is not about choosing the only protocol available for updating images in the target device, so implementing standard USB DFU is definitely something that we want as well. That said, I would also be in favour of having the future management protocol run over CDC ACM, for 2 reasons:

1) Being able to benefit from the additional "management" functionality on top of updating images with a single tool
2) Being able to update devices that only offer a USB connection (with no debugger IC bridging) before we actually implement USB DFU, since our efforts would be initially concentrated on the "management" protocol


- Must be extensible and flexible
- The server-side implementation (assuming a request/response model)
must be relatively simple and require few resources
- Must be compatible with the mcuboot project and model
- At the very least the following transports must be supported: BLE,
UART, IP, USB
- A client-side tool (assuming a request/response model) must either
exist already or be easily implementable
So this is solved for USB DFU. We would probably have to create tools
for other transports.
Well, let me clarify a point here: the "protocol", as I use the word, is the sequence of packets that allows you to update images (and perform other operations) on the target device. The "transport" is just a thin layer that is capable of transmitting and receiving those protocol packets over a physical medium. The tool I'm talking about for the management protocol would support multiple transports with a single protocol.
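
To illustrate the split, a hypothetical sketch of what such a transport
layer needs to provide (the names are illustrative, not an existing
Zephyr or Mynewt API):

#include <stddef.h>
#include <stdint.h>

/* One instance per physical medium (BLE GATT, UART, USB CDC ACM, IP).
 * The protocol core only ever sees whole packets.
 */
struct mgmt_transport {
        /* transmit one protocol packet over the medium */
        int (*output)(const uint8_t *pkt, size_t len);
        /* largest packet the medium can carry in one unit */
        size_t mtu;
};

/* Called by a transport once a full packet has been reassembled;
 * medium-specific framing never reaches the protocol core.
 */
void mgmt_rx_packet(struct mgmt_transport *t, const uint8_t *pkt, size_t len);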


With that in mind we proceeded to analyze a few of the existing
protocols out there (the ones we knew about), in order to consider
whether reusing an existing effort was a better approach than designing
and implementing a new protocol from scratch:

1) USB DFU specification[1]
2) Nordic Secure DFU protocol (included in the Nordic SDK)[2]
3) Newt Manager Protocol (part of Mynewt)[3]
4) Distributed DFU over CoAP used in Nordic's Thread SDK[4]

Note: I will use the word "source" to identify the device that contains
the new image, and "target" to identify the one that receives it,
flashes it and then boots into it.

The USB DFU specification does not seem to be a good fit since it maps
specifically to particular USB endpoints and classes, making it not
suitable for other transports without extensive modification.
Using a standard USB class such as CDC ACM as transport, we could
instead map the chosen protocol over a USB physical link.
This is fairly intentional. As I mention above, I would suggest
implementing DFU regardless of other protocols used.
Agreed, see my comment above.


We also see 2 very different image distribution models. In protocols 1,
2 and 3 the source (client) "pushes" an image to the target
(server) after checking that it's applicable based on version checking
and other verifications. In protocol 4 however, the source acts instead
as a server and the targets act as clients that "pull"
images from the source (server) whenever they are available. I believe
that the Linaro DFU implementation also follows the "pull"
paradigm of protocol 4.
They also serve different purposes. DFU (the real one on USB) works
similarly to a recovery mode. You put the target into DFU mode, and the
USB endpoint is a different kind of device than it usually is. The
other upgrade protocols are intended to upgrade live devices. This is
also a good reason to support USB's DFU in addition to whatever other
protocol we come up with.

We believe that the right approach for the sort of ecosystem that
Zephyr targets is the "push" approach, to minimize traffic, reduce
power consumption and also make it possible to use with all transports.
That said, it is important to note that although we are trying to
decide on a default DFU mechanism for Zephyr, all layers (including the
image management) will be independent of it, and it should therefore be
entirely possible to implement an additional protocol for our users.
Furthermore we don't exclude the possibility of extending the chosen
protocol to support a "pull" model as well, something that should be
entirely feasible as long as the protocol of choice is flexible.
The approach that was demoed at the last Linaro Connect was pull-based,
and essentially had the firmware living on an HTTP server. It had the
advantage of being fairly easy to implement with existing code in
Zephyr.
The problem with using a pull-based protocol is that it is less portable to non-TCP/IP transports, as I see it, since it requires the target device to initiate the transaction. Are you implying that you'd prefer the Zephyr "management" protocol to be pull-based? We're definitely open to discussing that at length. Also see the section about the Newt Management Protocol over OIC/OCF, which implements "push" over TCP/IP, which could be an alternative for the Linaro usecase.


After analyzing the different options available, we believe the Newt
Manager Protocol (NMP) to be the better suited option for our current
needs, for reasons outlined below:

- It is proven to work with mcuboot, the default bootloader for Zephyr
- The current mcuboot repository already contains an implementation of
NMP for serial recovery
- It uses a "push" model
- It is very simple but also easily extensible
- Uses a simple packet format combining an 8-byte header followed by
CBOR[5]-encoded data
- Supports additional functionality on top of basic DFU: stats,
filesystem access, date and time setting, etc.
- Already supports the BLE and serial transports
- A command-line tool exists to send images over both BLE and Serial
(both Go and JS/Node versions are available)
- It is open source and licensed under the APLv2
- There are commercial products using it already [6]
I agree that the newtmgr protocol seems to be the best fit for us. Its
serial model would even fit in fairly well with the Zephyr shell, since
it wraps the packets with a control-character + base-64 packet + control
character, which the shell seems to have partial support for already.
It does; I didn't mention it to avoid extending myself too much, but I find it a very nice feature as well.

Thanks again for the feedback; it seems that we are pretty much in line.

Regards,

Carles


Re: Zephyr DFU protocol

David Brown
 

On Mon, Aug 28, 2017 at 12:45:37PM +0000, Cufi, Carles wrote:

As you might already know, we've been working on the introduction of
DFU (Device Firmware Upgrade) to Zephyr. Several Pull Requests have
been posted dealing with the low-level flash and image access modules
required to store a received image and then boot into it, but that
leaves out one of the key items in the system: the update protocol
that allows an existing running image to obtain an updated one over a
transport mechanism.
My first suggestion. Unless we are strictly implementing the USB DFU
protocol, we really should call this something else. DFU is defined
by USB standards, and is a very specific protocol with a very specific
purpose. If what we're looking for is something general across other
transports, we should give it a different name to avoid confusion.

There are several fundamental requirements for such a protocol if we
want it to be future-proof, extensible and practical for embedded
devices:

- Must be packet-based and transport-agnostic
Although this makes sense for non-USB, it also precludes using
existing tools for the update when we do have USB as our transport.

My suggestion would be to support DFU for USB, and devise another
protocol for the other transports.

- Must be extensible and flexible
- The server-side implementation (assuming a request/response model) must be relatively simple and require few resources
- Must be compatible with the mcuboot project and model
- At the very least the following transports must be supported: BLE, UART, IP, USB
- A client-side tool (assuming a request/response model) must either exist already or be easily implementable
So this is solved for USB DFU. We would probably have to create tools
for other transports.

With that in mind we proceeded to analyze a few of the existing
protocols out there (the ones we knew about), in order to consider
whether reusing an existing effort was a better approach than
designing and implementing a new protocol from scratch:

1) USB DFU specification[1]
2) Nordic Secure DFU protocol (included in the Nordic SDK)[2]
3) Newt Manager Protocol (part of Mynewt)[3]
4) Distributed DFU over CoAP used in Nordic's Thread SDK[4]

Note: I will use the word "source" to identify the device that
contains the new image, and "target" to identify the one that
receives it, flashes it and then boots into it.

The USB DFU specification does not seem to be a good fit since it
maps specifically to particular USB endpoints and classes, making it
not suitable for other transports without extensive modification.
Using a standard USB class such as CDC ACM as transport, we could
instead map the chosen protocol over a USB physical link.
This is fairly intentional. As I mention above, I would suggest
implementing DFU regardless of other protocols used.

We also see 2 very different image distribution models. In protocols
1, 2 and 3 the source (client) "pushes" an image to the target
(server) after checking that it's applicable based on version
checking and other verifications. In protocol 4 however, the source
acts instead as a server and the targets act as clients that "pull"
images from the source (server) whenever they are available. I
believe that the Linaro DFU implementation also follows the "pull"
paradigm of protocol 4.
They also serve different purposes. DFU (the real one on USB) works
similarly to a recovery mode. You put the target into DFU mode, and the
USB endpoint is a different kind of device than it usually is. The
other upgrade protocols are intended to upgrade live devices. This is
also a good reason to support USB's DFU in addition to whatever other
protocol we come up with.

We believe that the right approach for the sort of ecosystem that
Zephyr targets is the "push" approach, to minimize traffic, reduce
power consumption and also make it possible to use with all
transports. That said, it is important to note that although we are
trying to decide on a default DFU mechanism for Zephyr, all layers
(including the image management) will be independent of it, and it
should therefore be entirely possible to implement an additional
protocol for our users. Furthermore we don't exclude the possibility
of extending the chosen protocol to support a "pull" model as well,
something that should be entirely feasible as long as the protocol of
choice is flexible.
The approach that was demoed at the last Linaro Connect was
pull-based, and essentially had the firmware living on an HTTP server.
It had the advantage of being fairly easy to implement with existing
code in Zephyr.

After analyzing the different options available, we believe the Newt
Manager Protocol (NMP) to be the better suited option for our current
needs, for reasons outlined below:

- It is proven to work with mcuboot, the default bootloader for Zephyr
- The current mcuboot repository already contains an implementation of NMP for serial recovery
- It uses a "push" model
- It is very simple but also easily extensible
- Uses a simple packet format combining an 8-byte header followed by CBOR[5]-encoded data
- Supports additional functionality on top of basic DFU: stats, filesystem access, date and time setting, etc.
- Already supports the BLE and serial transports
- A command-line tool exists to send images over both BLE and Serial (both Go and JS/Node versions are available)
- It is open source and licensed under the APLv2
- There are commercial products using it already [6]
I agree that the newtmgr protocol seems to be the best fit for us. Its
serial model would even fit in fairly well with the Zephyr shell,
since it wraps the packets with a control-character + base-64 packet +
control character, which the shell seems to have partial support for
already.

David


Re: Zephyr DFU protocol

Carles Cufi
 

Hi Johann,

Thanks for the feedback.

-----Original Message-----
From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-
bounces@lists.zephyrproject.org] On Behalf Of Johann Fischer
Sent: 28 August 2017 15:35
To: zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Zephyr DFU protocol

Hi,

On 28.08.2017 14:45, Cufi, Carles wrote:

The USB DFU specification does not seem to be a good fit since it maps
specifically to particular USB endpoints and classes, making it not
suitable for other transports without extensive modification. Using a
standard USB class such as CDC ACM as transport, we could instead map
the chosen protocol over a USB physical link.

That surprised me a little; can you describe in more detail what you
mean by "it maps specifically to particular USB endpoints and
classes"? I think if you have USB, then USB DFU is the most elegant
solution for updates. Or is this about using the same update tool for
UART and USB?
Yes, the whole point here is to find a protocol, and therefore a set of update command-line tools, common to all transports, so that the only difference among them is an adaptation layer. That however does *not* prevent Zephyr from also supporting USB DFU or any other DFU mechanism which is widely used and already has a well-established toolset. It is just that I would not recommend using the USB DFU protocol over any other transport as a "universal default protocol".

Regards,

Carles


Re: Zephyr DFU protocol

Johann Fischer
 

Hi,

On 28.08.2017 14:45, Cufi, Carles wrote:
The USB DFU specification does not seem to be a good fit since it maps specifically to particular USB endpoints and classes, making it not suitable for other transports without extensive modification. Using a standard USB class such as CDC ACM as transport, we could instead map the chosen protocol over a USB physical link.
That surprised me a little; can you describe in more detail what you mean by "it maps specifically to particular USB endpoints and classes"? I think if you have USB, then USB DFU is the most elegant solution for updates. Or is this about using the same update tool for UART and USB?

--
Best Regards,
Johann Fischer


Zephyr DFU protocol

Carles Cufi
 

Hi all,

As you might already know, we've been working on the introduction of DFU (Device Firmware Upgrade) to Zephyr. Several Pull Requests have been posted dealing with the low-level flash and image access modules required to store a received image and then boot into it, but that leaves out one of the key items in the system: the update protocol that allows an existing running image to obtain an updated one over a transport mechanism.

There are several fundamental requirements for such a protocol if we want it to be future-proof, extensible and practical for embedded devices:

- Must be packet-based and transport-agnostic
- Must be extensible and flexible
- The server-side implementation (assuming a request/response model) must be relatively simple and require few resources
- Must be compatible with the mcuboot project and model
- At the very least the following transports must be supported: BLE, UART, IP, USB
- A client-side tool (assuming a request/response model) must either exist already or be easily implementable

With that in mind we proceeded to analyze a few of the existing protocols out there (the ones we knew about), in order to consider whether reusing an existing effort was a better approach than designing and implementing a new protocol from scratch:

1) USB DFU specification[1]
2) Nordic Secure DFU protocol (included in the Nordic SDK)[2]
3) Newt Manager Protocol (part of Mynewt)[3]
4) Distributed DFU over CoAP used in Nordic's Thread SDK[4]

Note: I will use the word "source" to identify the device that contains the new image, and "target" to identify the one that receives it, flashes it and then boots into it.

The USB DFU specification does not seem to be a good fit since it maps specifically to particular USB endpoints and classes, making it not suitable for other transports without extensive modification. Using a standard USB class such as CDC ACM as transport, we could instead map the chosen protocol over a USB physical link.
The Nordic Secure DFU protocol is also very tightly mapped to the Nordic software architecture, including assumptions that the Bluetooth Protocol Stack is decoupled from the bootloader and application images and is permanently available through a set of system calls.

We also see 2 very different image distribution models. In protocols 1, 2 and 3 the source (client) "pushes" an image to the target (server) after checking that it's applicable based on version checking and other verifications. In protocol 4 however, the source acts instead as a server and the targets act as clients that "pull" images from the source (server) whenever they are available. I believe that the Linaro DFU implementation also follows the "pull" paradigm of protocol 4.

We believe that the right approach for the sort of ecosystem that Zephyr targets is the "push" approach, to minimize traffic, reduce power consumption and also make it possible to use with all transports. That said, it is important to note that although we are trying to decide on a default DFU mechanism for Zephyr, all layers (including the image management) will be independent of it, and it should therefore be entirely possible to implement an additional protocol for our users. Furthermore we don't exclude the possibility of extending the chosen protocol to support a "pull" model as well, something that should be entirely feasible as long as the protocol of choice is flexible.

After analyzing the different options available, we believe the Newt Manager Protocol (NMP) to be the better suited option for our current needs, for reasons outlined below:

- It is proven to work with mcuboot, the default bootloader for Zephyr
- The current mcuboot repository already contains an implementation of NMP for serial recovery
- It uses a "push" model
- It is very simple but also easily extensible
- Uses a simple packet format combining an 8-byte header followed by CBOR[5]-encoded data (see the sketch after this list)
- Supports additional functionality on top of basic DFU: stats, filesystem access, date and time setting, etc.
- Already supports the BLE and serial transports
- A command-line tool exists to send images over both BLE and Serial (both Go and JS/Node versions are available)
- It is open source and licensed under the APLv2
- There are commercial products using it already [6]
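
To make the packet format concrete, here is a sketch of building the
CBOR body of an NMP echo request using the tinycbor library; the
{"d": ...} payload layout is my reading of the newtmgr sources, so
treat it as an assumption:

#include <cbor.h>       /* tinycbor */
#include <stddef.h>
#include <stdint.h>

/* Encode {"d": "hello"}; the 8-byte header (op, flags, len, group,
 * seq, id) would be filled in and prepended before handing the packet
 * to a transport. Returns the encoded body length.
 */
static size_t encode_echo_body(uint8_t *buf, size_t buf_len)
{
        CborEncoder enc, map;

        cbor_encoder_init(&enc, buf, buf_len, 0);
        cbor_encoder_create_map(&enc, &map, 1);
        cbor_encode_text_stringz(&map, "d");
        cbor_encode_text_stringz(&map, "hello");
        cbor_encoder_close_container(&enc, &map);

        return cbor_encoder_get_buffer_size(&enc, buf);
}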

The protocol itself involves two entities: the client, which sends requests, and the server, which replies with responses.
The client side is typically a higher-specced device running a full operating system (a computer or portable device), whereas the server is the target of the DFU procedure: it receives the image, stores it and then boots into it.
Additionally, the protocol also supports an OIC (now OCF) variant where the target/server exposes a discoverable server resource through the OCF framework over IPv6 and CoAP, making it possible to use it in a "distributed push" model where a single client can discover multiple servers and push an image to them.[7] This is an interesting feature since it enables DFU over IPv6 and CoAP out of the box, even without having to switch to a "pull" model.

Unfortunately the protocol itself is not documented in a specification, and instead the source code of the different implementations must currently be used to examine and understand the protocol. In terms of currently available implementations, there are the following:

- client/source side:
  - newtmgr: Written in Go, this is the official Newt Manager Protocol client. Supports both the standard (over BLE and serial) and OIC (over IP) variants and all additional features [8]
  - node-newtmgr: Unofficial NodeJS reimplementation of newtmgr, supports the standard variant over BLE and serial [9]
  - Adafruit Mynewt Manager iOS application [10]
- server/target side:
  - Mynewt Newt Manager Protocol implementation. Supports both variants and all transports [11]

There's also the choice, not discussed so far, of implementing a brand new protocol completely tailored for Zephyr and designed from scratch. This has some advantages, such as being able to define the protocol completely, adapt it to the particularities of Zephyr, and let everybody contribute to the choices, format and standards used. That said, given that a protocol already exists that has been proven to work with an operating system similar to Zephyr, that clients are already available for both desktop and iOS, and that reusing an existing component (as we did with mcuboot) could potentially save a lot of development time, we have not pursued this option further for now.

We are eager to hear from everybody regarding this preliminary choice: whether you know of alternative protocols that we have missed, whether there are requirements that our proposal does not meet, or any opinions and questions in general.

Regards,

Nordic Team

[1] http://www.usb.org/developers/docs/devclass_docs/DFU_1.1.pdf
[2] http://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.sdk5.v14.0.0%2Flib_bootloader_dfu.html&cp=4_0_0_3_5_1
[3] http://mynewt.apache.org/latest/os/modules/devmgmt/newtmgr/
[4] http://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.threadsdk.v0.10.0%2Fthread_example_dfu.html&cp=4_2_0_2_3
[5] https://tools.ietf.org/html/rfc7049
[6] https://www.adafruit.com/product/3574
[7] http://mynewt.apache.org/latest/os/modules/devmgmt/oicmgr/
[8] https://github.com/apache/mynewt-newtmgr
[9] https://github.com/jacobrosenthal/node-newtmgr
[10] https://learn.adafruit.com/adafruit-nrf52-pro-feather/adafruit-mynewt-manager
[11] https://github.com/apache/mynewt-core/tree/master/mgmt


Re: RFC: Replacing Make/Kbuild with CMake

Carles Cufi
 

Hi Marti,

 

Sorry for the huge delay on this, it slipped through the cracks!

 

Regarding Meson, I actually don’t think this is a bad idea at all. The list you mention:

 

- is cross-platform (Windows / Mac / Linux)
- supports cross-compilation
- generates a build system (e.g. for Ninja and IDEs like Visual Studio and XCode)
- only has a hard dependency on Python 3 (but the build files are not written in Python)
- includes converter scripts to help start a transition from Make, CMake, and other build systems
- has a modern build file language ("real" data types, immutable variables, not Turing complete)

 

gives a good overview of features that are comparable to CMake, so here is my take on this after looking a little bit more into the Meson doc:

 

Plus sides of using Meson:

 

+ Only has Python 3 as a dependency (we already require it for other parts of the build)

+ The build file language is definitely very clear, concise and perhaps cleaner than CMake’s

+ The project is run in a very similar fashion to Zephyr (mailing list, IRC, open governance)

+ Extensive and clear documentation (I was really impressed by this)

+ It is modern and designed from scratch, with the benefit of lessons learned from CMake and make

 

And here are the negative sides of it:

 

- Very young project with a limited set of users, so its future is unknown

- Mainly used on Linux for GNOME and similar big projects, a very different usecase to ours

- Cross-compilation is officially supported, but I could not find large bare-metal (i.e. not targeting Linux) projects using it, so there might be unexpected hurdles there

- Integration with Visual Studio seems to require adding a script in VS itself, whereas in CMake you only need to provide the VS version when you generate. It also only seems to support VS2015

- Speed is unknown? We know that CMake+Ninja is extremely fast

- I could not find documentation for cross-platform basic file operations (copy, move, delete, MD5, etc.). Is one supposed to do this in Python?

 

Thanks again for this, and I do believe there are significant plus sides to Meson, but ultimately it will be the opinion of the majority of our current users and developers that should decide between one or the other.

 

Thanks,

 

Carles

 

From: Marti Bolivar [mailto:marti.bolivar@...]
Sent: 12 April 2017 16:11
To: Cufi, Carles <Carles.Cufi@...>
Cc: Luiz Augusto von Dentz <luiz.dentz@...>; Andersson, Joakim <Joakim.Andersson@...>; devel@...
Subject: Re: [Zephyr-devel] RFC: Replacing Make/Kbuild with CMake

 

Hi Carles,

 

On 12 April 2017 at 08:23, Cufi, Carles <Carles.Cufi@...> wrote:


[snip]

I'd like to add a bit of background here. When it comes to cross-platform (i.e. running natively on Windows) build systems for C/C++ projects, there only seem to be a few well-known and active solutions out there:

* CMake
* SCons

SCons has the advantage of being pure Python code, and given that we already require Python for other build utilities and there are talks of porting Kconfig to Python, this would reduce the number of required software packages to build Zephyr. However, it doesn't really have proper IDE generation and is slower, so we opted for prototyping with CMake instead.

The other alternative is to build our own, suited to our particular needs, a bit like Mynewt did with their newt tool. But this is in general frowned upon, since it would mean writing yet another piece of software instead of reusing a well-tested one, so for now the idea has remained on the sidelines.

 

I wasn't present at the conversations where the build system change was discussed, but was Meson (http://mesonbuild.com/) considered?

I don't have personal experience using it, but from its documentation, Meson:


- is cross-platform (Windows / Mac / Linux)
- supports cross-compilation
- generates a build system (e.g. for Ninja and IDEs like Visual Studio and XCode)
- only has a hard dependency on Python 3 (but the build files are not written in Python)
- includes converter scripts to help start a transition from Make, CMake, and other build systems
- has a modern build file language ("real" data types, immutable variables, not Turing complete)

The main downside I see relative to CMake is that Meson is less mature. However, it is actively developed and various projects (e.g. GNOME, X, Wayland/Weston) are looking seriously at a move to Meson [1], [2].

Any pointers on how the list was narrowed to SCons and CMake would be appreciated.

 

Thanks,


Re: bitfields

Piotr Mienkowski
 

Hi Jukka,

maybe a bit of a weird coding style question, but for CAN support I
need a CAN ID "struct". The CAN ID is an 11- or 29-bit ID, a flag that
says whether it is 29 or 11 bit, an RTR flag and possibly an ERROR
flag. This totals to exactly 32 bits.

In Linux canid_t is just a typedef for a u32_t, and macros/defines
are used to access the flags and mask the IDs, something like:

typedef u32_t canid_t;

#define CAN_EFF_FLAG 0x80000000U /* EFF/SFF is set in the MSB */
#define CAN_RTR_FLAG 0x40000000U /* remote transmission request */
#define CAN_ERR_FLAG 0x20000000U /* error message frame */

#define CAN_SFF_MASK 0x000007FFU /* standard frame format (SFF) */
#define CAN_EFF_MASK 0x1FFFFFFFU /* extended frame format (EFF) */

In Zephyr I have also seen some use (DMA for example) of the
"u32_t flag:1;" construct. So a canid could be something like:

struct canid {
        u32_t id:29;
        u32_t eid:1;
        u32_t rtr:1;
        u32_t err:1;
};

Is there a preference for either of these constructs to encode
bitfields?
I have no preference here; using the bit-field values is usually quite
convenient, but it really depends on how you are using these values.
You could have both ways if you put a union inside struct canid.
I believe you are talking about a solution used currently by Zephyr's
I2C API. There we have:

union dev_config {
        u32_t raw;
        struct __bits {
                u32_t use_10_bit_addr : 1;
                u32_t speed : 3;
                u32_t is_master_device : 1;
                u32_t reserved : 26;
        } bits;
};

This is however incorrect. C99 §6.7.2.1, paragraph 10 says: "The order
of allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined." I.e. - using
union dev_config as an example - the compiler is free to map
use_10_bit_addr either to the MSB or to the LSB. The two methods of
specifying bit fields are not equivalent and should not be mixed.
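
For completeness, the portable alternative is explicit shift/mask
accessors over the raw word, which yield the same layout under every
compiler; a minimal sketch (the macro names are illustrative, not the
Zephyr API):

#include <stdint.h>

#define DEV_CONFIG_10BIT_ADDR   (1u << 0)
#define DEV_CONFIG_SPEED_SHIFT  1
#define DEV_CONFIG_SPEED_MASK   (0x7u << DEV_CONFIG_SPEED_SHIFT)
#define DEV_CONFIG_MASTER       (1u << 4)

/* Read and write the 3-bit speed field without relying on
 * implementation-defined bit-field ordering.
 */
static inline uint32_t dev_config_speed_get(uint32_t raw)
{
        return (raw & DEV_CONFIG_SPEED_MASK) >> DEV_CONFIG_SPEED_SHIFT;
}

static inline uint32_t dev_config_speed_set(uint32_t raw, uint32_t speed)
{
        return (raw & ~DEV_CONFIG_SPEED_MASK) |
               ((speed << DEV_CONFIG_SPEED_SHIFT) & DEV_CONFIG_SPEED_MASK);
}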

I will submit a bug report to remove this usage from the I2C API.

- Piotr
