Re: [RFC]PWM API Update

Liu, Baohong
 

-----Original Message-----
From: Briano, Ivan
Sent: Monday, September 26, 2016 1:02 PM
To: Leung, Daniel <daniel.leung(a)intel.com>
Cc: Liu, Baohong <baohong.liu(a)intel.com>; Tomasz Bursztyka
<tomasz.bursztyka(a)linux.intel.com>; devel(a)lists.zephyrproject.org
Subject: Re: [devel] Re: Re: Re: Re: Re: Re: Re: [RFC]PWM API Update

On Mon, 26 Sep 2016 12:48:07 -0700, Daniel Leung wrote:
On Mon, Sep 26, 2016 at 05:58:04PM +0000, Liu, Baohong wrote:


-----Original Message-----
From: Tomasz Bursztyka [mailto:tomasz.bursztyka(a)linux.intel.com]
Sent: Monday, September 26, 2016 12:02 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Re: Re: Re: [RFC]PWM API Update

Hi Baohong,

static inline int pwm_pin_set(struct device *dev, uint32_t pwm,
uint32_t period_cycles, uint32_t pulse_cycles);

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
uint32_t period_usec, uint32_t pulse_usec);

Note: implementation of pwm_pin_set_usec API shall convert the
input (in usec) to cycles and then call pwm_pin_set.

I felt that get_cycles_per_sec() is not needed since there is
already a constant definition for this (e.g.
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
in boards/arduino_101_sss/arduino_101_sss_defconfig).
Actually, this might be true if the PWM device is on the SoC
and/or its controller is clocked at the same speed. But I think
this is still needed if the PWM device is external and clocked
differently (external clock source for it, etc.). Don't you think so?
That's a good point to think about. My only concern with adding such
an API (e.g. get_cycles_per_sec), which we do not really need for
now, is that it can become a maintenance overhead over time in order
for it to support different SoCs/boards. Also, without understanding
a real use case, I would suggest that for now we continue using the
existing constant definitions and revisit this later when we have
real use cases, as we can always add this API if needed.
To provide you with another data point, the SAM3X on Arduino Due
allows you to change the clock divider at runtime. This means the
clock cannot be regarded as constant after boot. Quark SE, on the
other hand, drives the PWMs/comparators at constant 32MHz (if I
remember correctly).

No, the PWM at least will run based on the system clock, which can be
changed to lower values than 32MHz during runtime.
Please share the document which has the details of varying PWM clock for Quark SE.
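
For reference, a minimal sketch of the usec-to-cycles conversion
described above. pwm_get_cycles_per_sec() stands in for the
hypothetical per-device query being debated and is not an existing
API; for an on-SoC PWM driven from the system clock, a constant such
as CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC could be used instead:

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
				   uint32_t period_usec, uint32_t pulse_usec)
{
	/* Per-second cycle count of the clock feeding this PWM. */
	uint64_t cycles_per_sec = pwm_get_cycles_per_sec(dev);

	/* Widen to 64 bits so usec * cycles_per_sec cannot overflow. */
	uint32_t period_cycles =
		(uint32_t)(((uint64_t)period_usec * cycles_per_sec) / 1000000U);
	uint32_t pulse_cycles =
		(uint32_t)(((uint64_t)pulse_usec * cycles_per_sec) / 1000000U);

	return pwm_pin_set(dev, pwm, period_cycles, pulse_cycles);
}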


Re: [RFC]PWM API Update

Iván Briano <ivan.briano at intel.com...>
 

On Mon, 26 Sep 2016 12:48:07 -0700, Daniel Leung wrote:
On Mon, Sep 26, 2016 at 05:58:04PM +0000, Liu, Baohong wrote:


-----Original Message-----
From: Tomasz Bursztyka [mailto:tomasz.bursztyka(a)linux.intel.com]
Sent: Monday, September 26, 2016 12:02 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Re: Re: Re: [RFC]PWM API Update

Hi Baohong,

static inline int pwm_pin_set(struct device *dev, uint32_t pwm,
uint32_t period_cycles, uint32_t pulse_cycles);

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
uint32_t period_usec, uint32_t pulse_usec);

Note: implementation of pwm_pin_set_usec API shall convert the input
(in usec) to cycles and then call pwm_pin_set.

I felt that get_cycles_per_sec() is not needed since there is already a
constant definition for this (e.g.
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
in boards/arduino_101_sss/arduino_101_sss_defconfig).
Actually, this might be true if the PWM device is on the SoC and/or its
controller is clocked at the same speed. But I think this is still needed if the
PWM device is external and clocked differently (external clock source for it,
etc.). Don't you think so?
That's a good point to think about. My only concern with adding such an API
(e.g. get_cycles_per_sec), which we do not really need for now, is that it can
become a maintenance overhead over time in order for it to support different
SoCs/boards. Also, without understanding a real use case, I would suggest that
for now we continue using the existing constant definitions and revisit this
later when we have real use cases, as we can always add this API if needed.
To provide you with another data point, the SAM3X on Arduino Due allows you
to change the clock divider at runtime. This means the clock cannot be regarded
as constant after boot. Quark SE, on the other hand, drives the PWMs/comparators
at constant 32MHz (if I remember correctly).
No, the PWM at least will run based on the system clock, which can be
changed to lower values than 32MHz during runtime.


Re: [RFC]PWM API Update

Daniel Leung <daniel.leung@...>
 

On Mon, Sep 26, 2016 at 05:58:04PM +0000, Liu, Baohong wrote:


-----Original Message-----
From: Tomasz Bursztyka [mailto:tomasz.bursztyka(a)linux.intel.com]
Sent: Monday, September 26, 2016 12:02 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Re: Re: Re: [RFC]PWM API Update

Hi Baohong,

static inline int pwm_pin_set(struct device *dev, uint32_t pwm,
uint32_t period_cycles, uint32_t pulse_cycles);

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
uint32_t period_usec, uint32_t pulse_usec);

Note: implementation of pwm_pin_set_usec API shall convert the input
(in usec) to cycles and then call pwm_pin_set.

I felt that get_cycles_per_sec() is not needed since there is already a
constant definition for this (e.g.
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
in boards/arduino_101_sss/arduino_101_sss_defconfig).
Actually, this might be true if the PWM device is on the SoC and/or its
controller is clocked at the same speed. But I think this is still needed if the
PWM device is external and clocked differently (external clock source for it,
etc.). Don't you think so?
That's a good point to think about. My only concern with adding such an API
(e.g. get_cycles_per_sec), which we do not really need for now, is that it can
become a maintenance overhead over time in order for it to support different
SoCs/boards. Also, without understanding a real use case, I would suggest that
for now we continue using the existing constant definitions and revisit this
later when we have real use cases, as we can always add this API if needed.
To provide you with another data point, the SAM3X on Arduino Due allows you
to change the clock divider at runtime. This means the clock cannot be regarded
as constant after boot. Quark SE, on the other hand, drives the PWMs/comparators
at constant 32MHz (if I remember correctly).


Daniel


Re: [RFC]PWM API Update

Liu, Baohong
 

-----Original Message-----
From: Tomasz Bursztyka [mailto:tomasz.bursztyka(a)linux.intel.com]
Sent: Monday, September 26, 2016 12:02 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Re: Re: Re: [RFC]PWM API Update

Hi Baohong,

static inline int pwm_pin_set(struct device *dev, uint32_t pwm,
uint32_t period_cycles, uint32_t pulse_cycles);

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
uint32_t period_usec, uint32_t pulse_usec);

Note: implementation of pwm_pin_set_usec API shall convert the input
(in usec) to cycles and then call pwm_pin_set.

I felt that get_cycles_per_sec() is not needed since there is already a
constant definition for this (e.g.
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
in boards/arduino_101_sss/arduino_101_sss_defconfig).
Actually, this might be true if the PWM device is on the SoC and/or its
controller is clocked at the same speed. But I think this is still needed if the
PWM device is external and clocked differently (external clock source for it,
etc.). Don't you think so?
That's a good point to think about. My only concern with adding such an API
(e.g. get_cycles_per_sec), which we do not really need for now, is that it can
become a maintenance overhead over time in order for it to support different
SoCs/boards. Also, without understanding a real use case, I would suggest that
for now we continue using the existing constant definitions and revisit this
later when we have real use cases, as we can always add this API if needed.


Br,

Tomasz


Re: Nanokernel stack border protection

Benjamin Walsh <benjamin.walsh@...>
 

On Mon, Sep 26, 2016 at 05:44:45PM +0000, Boie, Andrew P wrote:
On Mon, 2016-09-26 at 16:49 +0000, Boie, Andrew P wrote:
How do you propose to implement returning an error code to the user?
This isn’t the heap…
Disregard, this landed directly in my inbox, we're actually talking
about kernel LIFOs.

We could add checking to LIFOs, but this adds additional overhead. For
LIFO use-cases where we always know how deep we will be going, it's not
necessary.

I am unsure on whether that additional overhead is OK in the final
cost/benefit analysis, but I thought I would at least point it out.
It might be better to have a data structure built on top of LIFOs which
adds this tracking, and not modify the base data structure. 
Kernel LIFOs are unbounded, since they are lists. He's actually talking
about nanokernel stack objects, which are bounded.

Like I said in a previous response, there is an __ASSERT() in the
unified kernel on k_stack_push() now.
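
Roughly, the kind of bounds check being referred to looks like the
following. This is an illustrative sketch, not the actual Zephyr
source; the struct and field names are made up for the example:

struct stack_obj {
	uint32_t *base;  /* start of the backing array */
	uint32_t *next;  /* next free slot */
	uint32_t *top;   /* one past the last slot */
};

static void stack_obj_push(struct stack_obj *stack, uint32_t data)
{
	/* Overflowing the bounded object is treated as a user error and
	 * caught by the assertion; no runtime error code is returned. */
	__ASSERT(stack->next < stack->top, "stack object overflow");

	*stack->next = data;
	stack->next++;
}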


Re: Nanokernel stack border protection

Boie, Andrew P
 

On Mon, 2016-09-26 at 17:33 +0000, Boie, Andrew P wrote:
- In the assembly generated by GCC, it looks like when it references
data pushed on the stack (local variables, function arguments, etc.)
it doesn't put any segment selector in the code, so it's all in terms
of DS. I am not sure how to get the compiler to express all stack
memory references with SS, or whether there are any performance
implications of doing this.

The last point may be a dealbreaker; I am not sure. I'm digging through
the GCC manual to hopefully understand this better.
Yeah, it's probably a dealbreaker. In C you can take the address of some
stack variable and pass it to another function, so stack references
really do need to be addressable through the data segment. Oh well.

I think I have to conclude that this kind of stack bounds checking
isn't possible on x86, at least in C code.

Andrew


Re: Nanokernel stack border protection

Boie, Andrew P
 

On Mon, 2016-09-26 at 16:49 +0000, Boie, Andrew P wrote:
How do you propose to implement returning an error code to the user?
This isn’t the heap…
Disregard, this landed directly in my inbox, we're actually talking
about kernel LIFOs.

We could add checking to LIFOs, but this adds additional overhead. For
LIFO use-cases where we always know how deep we will be going, it's not
necessary.

I am unsure on whether that additional overhead is OK in the final
cost/benefit analysis, but I thought I would at least point it out.
It might be better to have a data structure built on top of LIFOs which
adds this tracking, and not modify the base data structure. 

Andrew


Re: Nanokernel stack border protection

Boie, Andrew P
 

On Mon, 2016-09-26 at 10:01 +0100, Jon Medhurst (Tixy) wrote:
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack
limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar
things.
This is cool; I think we should implement this for ARM targets. I will
file a user story.

Thinking out loud for x86:
If I understand correctly, push/pop operations work on whatever memory
segment is in SS. At the moment it's just the flat data segment.

I am not completely sure, but we might be able to use dedicated per-
thread stack segments, which would generate an exception if the bounds
are exceeded.

It's hard to find stuff on this Googling around since much of what is
written on this topic that I can find is more concerned with buffer
overflow protection and not exceeding the bounds of fixed-size stacks.

- We could define stack segments in the GDT for the set of known
threads, and set them when the threads are created. _Swap() would have
to be modified to additionally track SS when switching context.

- The struct tcs would need to be moved elsewhere as it currently lives
in the lower addresses of its thread's stack.

- We would need some black magic in how threads are created in
code...probably something like how IRQ_CONNECT() works, which looks
like a function call but populates the IDT, creates an assembly stub,
etc. We could do something similar to create the stack segments.

- In the assembly generated by GCC, it looks like when it references
data pushed on the stack (local variables, function arguments, etc.) it
doesn't put any segment selector in the code, so it's all in terms of
DS. I am not sure how to get the compiler to express all stack memory
references with SS, or whether there are any performance implications
of doing this.

The last point may be a dealbreaker; I am not sure. I'm digging through
the GCC manual to hopefully understand this better.

Andrew
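
To illustrate the last point with a small sketch: once the address of
a stack variable escapes, the callee dereferences it as an ordinary
(default DS-relative) pointer, so a dedicated, bounds-limited stack
segment would still have to map the same linear addresses as the data
segment, defeating the purpose:

static void fill(int *out)
{
	*out = 42;	/* compiled as a plain store, DS-relative by default */
}

void caller(void)
{
	int local;	/* lives in the stack frame (addressed via SS) */

	fill(&local);	/* the pointer is then used through DS in fill() */
}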


Re: Nanokernel stack border protection

Benjamin Walsh <benjamin.walsh@...>
 

On Mon, Sep 26, 2016 at 10:01:14AM +0100, Jon Medhurst (Tixy) wrote:
On Sun, 2016-09-25 at 10:08 +0000, Boie, Andrew P wrote:
On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com wrote:
Hi All,

The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.

Why not add border protection? When the array is full, it would return
an error code to the user.

Is it necessary?
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar things.
Is this a new thing on ARMv8-M? I don't remember seeing this on
ARMv7-M, and I cannot find it in the ref. manual either (DDI0403E.b).
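
(They are indeed ARMv8-M additions, e.g. Cortex-M23/M33; ARMv7-M has no
equivalent.) A minimal sketch of programming a thread's stack limit on
such a core, assuming the device's CMSIS core header provides the
__set_PSPLIM() intrinsic:

#include <stdint.h>
/* Assumes an ARMv8-M target whose CMSIS core header (pulled in via the
 * vendor device header) provides __set_PSPLIM(). */

static void set_thread_stack_limit(uint32_t *stack_lowest_addr)
{
	/* Any process-stack push below this address raises a stack-limit
	 * fault instead of silently corrupting memory below the stack. */
	__set_PSPLIM((uint32_t)stack_lowest_addr);
}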


Re: Nanokernel stack border protection

Boie, Andrew P
 

How do you propose to implement returning an error code to the user? This isn't the heap...

Andrew

From: Tidy(ChunHua) Jiang [mailto:tidyjiang(a)163.com]
Sent: Sunday, September 25, 2016 3:32 AM
To: Boie, Andrew P <andrew.p.boie(a)intel.com>
Cc: devel(a)lists.zephyrproject.org
Subject: Re:Re: [devel] Nanokernel stack border protection

Hi Andrew,

Yeah, we can't really implement such a function, but we can at least return an error code to the user. I think that's better than what we have now.

Thx & Rgds.
Tidy.
At 2016-09-25 18:08:33, "Boie, Andrew P" <andrew.p.boie(a)intel.com<mailto:andrew.p.boie(a)intel.com>> wrote:

On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com<mailto:tidyjiang(a)163.com> wrote:
Hi All,
The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.
Why not add border protection? When the array is full, it would return
an error code to the user.
Is it necessary?
How would you propose to implement such a border protection?
Andrew


Re: [RFC] Ring buffers

Benjamin Walsh <benjamin.walsh@...>
 

On Fri, Sep 23, 2016 at 06:12:41PM -0400, Boie, Andrew P wrote:
On Fri, 2016-09-23 at 14:56 -0700, Andy Ross wrote:
Benjamin Walsh wrote (on Friday, September 23, 2016 2:36PM):

I think that we should still have the code to under kernel/ though,
and rename APIs to k_ring_buf_<whatever>.
Naming isn't a big deal, but I'll reiterate my previous point: a ring
buffer is a data structure, not an OS abstraction provided by a
kernel. It's equally useful to application or subsystem code, or I
dunno, Windows C++ apps. It's not meaningfully a "Zephyr" thing.

I mean, the dlist implementation is used pervasively in the kernel,
but I don't think anyone would argue it belongs in kernel.h instead of
include/misc/dlist.h.
I agree with Andy for reasons above, plus if you *did* move it, it
would be subject to our deprecation policy (maintain both APIs for two
releases with the old one marked __deprecated).
All valid points. We won't touch them.


Daily JIRA Digest

donotreply@...
 

NEW JIRA items within last 24 hours: 0

UPDATED JIRA items within last 24 hours: 2
[ZEP-240] printk/printf usage in samples
https://jira.zephyrproject.org/browse/ZEP-240

[ZEP-454] Add driver API reentrancy support to UART shim drivers
https://jira.zephyrproject.org/browse/ZEP-454


CLOSED JIRA items within last 24 hours: 0

RESOLVED JIRA items within last 24 hours: 0


Daily Gerrit Digest

donotreply@...
 

NEW within last 24 hours:
- https://gerrit.zephyrproject.org/r/4996 : ieee802154_cc2520: Correct debug output
- https://gerrit.zephyrproject.org/r/4993 : ieee802154_cc2520: Improve error logging
- https://gerrit.zephyrproject.org/r/4986 : net: yaip: Do not source contiki headers always
- https://gerrit.zephyrproject.org/r/5000 : net: tests: Add RA message unit tests.
- https://gerrit.zephyrproject.org/r/4999 : net: yaip: Adopt new nbuf API's to RA message handlers.
- https://gerrit.zephyrproject.org/r/4995 : ieee802154_cc2520_legacy: Implement set short address
- https://gerrit.zephyrproject.org/r/4998 : frdm_k64f: Add support for RGB LEDs
- https://gerrit.zephyrproject.org/r/4997 : frdm_k64f: Add support for push button switches
- https://gerrit.zephyrproject.org/r/4994 : ieee802154_cc2520_legacy: Improve debugging for the driver
- https://gerrit.zephyrproject.org/r/4992 : toolchain: Add BUILD_ASSERT macro for compile-time checks
- https://gerrit.zephyrproject.org/r/4989 : Bluetooth: init: Add HFP to automated tests
- https://gerrit.zephyrproject.org/r/4990 : doc: Update the device power management API documentation
- https://gerrit.zephyrproject.org/r/4980 : device: Consolidate DEVICE_ and SYS_* macros
- https://gerrit.zephyrproject.org/r/4984 : unified: Don't assert if work is pending on submit
- https://gerrit.zephyrproject.org/r/4983 : unified: Add k_work_pending
- https://gerrit.zephyrproject.org/r/4979 : power_mgmt: Reduce complexity in handling of power hooks
- https://gerrit.zephyrproject.org/r/4976 : pinmux: remove unused pinmux_drv.h
- https://gerrit.zephyrproject.org/r/4974 : pinmux: quark_d2000: use pinmux driver instead of own functions
- https://gerrit.zephyrproject.org/r/4975 : pinmux: arduino 101: use pinmux driver
- https://gerrit.zephyrproject.org/r/4973 : pinmux: rename pinmux driver local header
- https://gerrit.zephyrproject.org/r/4977 : pinmux: quark_se_c1000: use pinmux driver and APIs
- https://gerrit.zephyrproject.org/r/4971 : pinmux: remove confusing pinmux_dev and implement as main driver
- https://gerrit.zephyrproject.org/r/4972 : pinmux: fix driver api and style
- https://gerrit.zephyrproject.org/r/4970 : pinmux: remove nonexistant galileo Kconfig

UPDATED within last 24 hours:
- https://gerrit.zephyrproject.org/r/4321 : Bluetooth: BR/EDR: Refactor distribution of security procedure status
- https://gerrit.zephyrproject.org/r/4952 : Bluetooth: HFP HF: Fix getting inaccessible internal
- https://gerrit.zephyrproject.org/r/2255 : rfc: unified kernel
- https://gerrit.zephyrproject.org/r/4555 : Bluetooth: HFP HF: SLC connection-Send/Parse BRSF
- https://gerrit.zephyrproject.org/r/4486 : Bluetooth: SDP: Server: Initialize and accept incoming connections
- https://gerrit.zephyrproject.org/r/4916 : net: yaip: Fix copying incorrect byte order address field
- https://gerrit.zephyrproject.org/r/4881 : nano_work: Don't assert if work is pending on submit
- https://gerrit.zephyrproject.org/r/4880 : nano_work: Add nano_work_pending
- https://gerrit.zephyrproject.org/r/4951 : Bluetooth: HFP HF: Enforce Kconfig's HFP_HF relation to RFCOMM
- https://gerrit.zephyrproject.org/r/4511 : unified/doc: Kernel primer for unified kernel
- https://gerrit.zephyrproject.org/r/4968 : tests: remove redundant PRINT definition
- https://gerrit.zephyrproject.org/r/4959 : x86: interrupts: optimize and simplify IRQ stubs

MERGED within last 24 hours:
- https://gerrit.zephyrproject.org/r/4991 : Bluetooth: Controller: Fix __packed placement
- https://gerrit.zephyrproject.org/r/4988 : Bluetooth: Controller: Remove unused macro
- https://gerrit.zephyrproject.org/r/4985 : net: yaip: cc2520: Fix setting proper IEEE 802.15.4 address
- https://gerrit.zephyrproject.org/r/4978 : remove unused whitespace in arch/arc/core/fault_s.S
- https://gerrit.zephyrproject.org/r/4943 : Bluetooth: L2CAP: Cleanup flags names for BR/EDR channels
- https://gerrit.zephyrproject.org/r/4874 : Bluetooth: RFCOMM: Handle dlc disconnection from peer
- https://gerrit.zephyrproject.org/r/4953 : Bluetooth: tester: Fix advertising data


Re: Nanokernel stack border protection

Benjamin Walsh <benjamin.walsh@...>
 

On Mon, Sep 26, 2016 at 05:34:55PM +0800, Tidy(ChunHua) Jiang wrote:
Un...
I mean the nanokernel’s stack object type, not the system's stack.
"stack" is such an overloaded term. :-)

FYI, the unified kernel version k_stack_push() has an __ASSERT() that
checks the limit.

The approach we are taking for kernel APIs is: if it's a user error, use
an __ASSERT(); otherwise, return an error code.

Pushing more than the limit on a stack object is considered a user
error.

Refer https://www.zephyrproject.org/doc/1.5.0/kernel/nanokernel/nanokernel_stacks.html

At 2016-09-26 17:14:52, "D'alton, Alexandre" <alexandre.dalton(a)intel.com> wrote:
Hi,

FYI, ARC has this implemented (see CONFIG_ARC_STACK_CHECKING),
and it is more than useful!

Regards,
Alex.

-----Original Message-----
From: Jon Medhurst (Tixy) [mailto:tixy(a)linaro.org]
Sent: Monday, September 26, 2016 11:01 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Nanokernel stack border protection

On Sun, 2016-09-25 at 10:08 +0000, Boie, Andrew P wrote:
On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com wrote:
Hi All,

The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.

Why not add border protection? When the array is full, it would
return an error code to the user.

Is it necessary?
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar things.

--
Tixy
--
Benjamin Walsh, SMTS
Wind River Rocket
www.windriver.com
Zephyr kernel maintainer
www.zephyrproject.org


Re: Timer utility function to use single timer

Benjamin Walsh <benjamin.walsh@...>
 

In Zephyr there is a chance of running out of timers at times, and
we will not get a timer handle to use for our modules.

That means whenever we require a timer for our module we can call a
timer API defined in Zephyr (i.e.
task_timer_alloc(), task_timer_start(), task_timer_stop(), etc.)
I'm a little curious about this API design too. AFAICT, it's always
legal to statically allocate a k_timer struct of your own and
initialize and use it at runtime. If you need to know you won't run
out of timers, you can be guaranteed not to lose this one.

The allocate/free API looks like it's just a convenience wrapper to
allow sharing of timer objects between usages if you know they
aren't all going to be needed simultaneously.
The main reason we still have k_timer_alloc and k_timer_free is to
support the legacy APIs task_timer_alloc and task_timer_free.

How about making the allocation/freeing of timers hidden, and only there
to support the legacy API? The paradigm for the unified kernel would
then be "use statically allocated timers".

As you mentioned, it's always legal to statically allocate a k_timer
struct of your own and initialize and use it at runtime. But suppose
we allocated 10 timer structures, and now if we make another call to
initialize 1 more timer structure it will fail. How do we handle that
issue? If more than 10 timers are required in a module, then we cannot
statically allocate more than 10 timers at a given time. Is there any
implementation or patch available to have only 1 timer structure
initialized in the beginning and simulate all the other timer requests
coming from above, satisfying them by maintaining only 1 timer
internally without actually requesting more timers from the Zephyr OS?
This would reduce the timer requests to the Zephyr OS and remove the
risk of running out of timers.

The problem, though, is that to get this facility Zephyr allocates 10
(by default) k_timer objects in a pool and shares only those.
Obviously Zephyr has no heap, so it can't share the memory as
anything but timers. And these aren't tiny objects: my quick
manual count says they're 64 bytes apiece, so that's half a KB
of RAM that we're allocating in every default-configured app just to
save a handful of bytes in apps that want to use the "sharing"
convenience API. That seems like a bad trade to me.

Would anyone object to a patch that set CONFIG_NUM_DYNAMIC_TIMERS to
zero by default and disabled the API unless it was 1 or greater? Or
maybe deprecating it for future removal?
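
A minimal sketch of the statically-allocated-timer paradigm being
suggested; the k_timer_* signatures shown here are illustrative and
may not match the unified-kernel tree exactly:

#include <zephyr.h>

static struct k_timer my_timer;	/* statically allocated: cannot "run out" */

static void my_timer_expired(struct k_timer *timer)
{
	/* periodic work goes here */
}

void start_periodic_work(void)
{
	k_timer_init(&my_timer, my_timer_expired, NULL);
	k_timer_start(&my_timer, 100, 100);	/* 100 ms delay, 100 ms period */
}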


Re: Nanokernel stack border protection

Tidy(ChunHua) Jiang <tidyjiang@...>
 

Un...
I mean the nanokernel’s stack object type, not the system's stack.
Refer https://www.zephyrproject.org/doc/1.5.0/kernel/nanokernel/nanokernel_stacks.html

At 2016-09-26 17:14:52, "D'alton, Alexandre" <alexandre.dalton(a)intel.com> wrote:
Hi,

FYI, ARC has this implemented (see CONFIG_ARC_STACK_CHECKING),
and it is more than useful!

Regards,
Alex.

-----Original Message-----
From: Jon Medhurst (Tixy) [mailto:tixy(a)linaro.org]
Sent: Monday, September 26, 2016 11:01 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Nanokernel stack border protection

On Sun, 2016-09-25 at 10:08 +0000, Boie, Andrew P wrote:
On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com wrote:
Hi All,

The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.

Why not add border protection? When the array is full, it would
return an error code to the user.

Is it necessary?
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar things.

--
Tixy


Re: Nanokernel stack border protection

D'alton, Alexandre <alexandre.dalton@...>
 

Hi,

FYI, ARC has this implemented (see CONFIG_ARC_STACK_CHECKING),
and it is more than useful!

Regards,
Alex.

-----Original Message-----
From: Jon Medhurst (Tixy) [mailto:tixy(a)linaro.org]
Sent: Monday, September 26, 2016 11:01 AM
To: devel(a)lists.zephyrproject.org
Subject: [devel] Re: Re: Nanokernel stack border protection

On Sun, 2016-09-25 at 10:08 +0000, Boie, Andrew P wrote:
On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com wrote:
Hi All,

The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.

Why not add border protection? When the array is full, it would
return an error code to the user.

Is it necessary?
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar things.

--
Tixy


Re: Nanokernel stack border protection

Jon Medhurst (Tixy) <tixy@...>
 

On Sun, 2016-09-25 at 10:08 +0000, Boie, Andrew P wrote:
On Sat, 2016-09-24 at 14:39 +0800, tidyjiang(a)163.com wrote:
Hi All,

The nanokernel uses an array as stack memory space, but there is no
border protection when pushing data to the stack. When the array is
already full, a push will cause an array overflow, leading to
unpredictable behavior.

Why not add border protection? When the array is full, it would return
an error code to the user.

Is it necessary?
How would you propose to implement such a border protection?
Use the features provided by the CPU? On ARM Cortex-M, the stack limit
registers PSPLIM and MSPLIM. Presumably other CPUs have similar things.

--
Tixy


Re: Galileo Gen 1 GPIO

Gottfried F. Zojer
 

Tomasz,

Thanks for your clarification. The code example and particularly the Quark
documents helped me to understand, as did this external site:
http://hackerboards.com/intel-aims-15-dollar-quark-d2000-dev-kit-at-iot-devices/

Best regards

Gottfried

On Tue, Sep 20, 2016 at 4:01 PM, Tomasz Bursztyka <
tomasz.bursztyka(a)linux.intel.com> wrote:

Hi Gottfried,

Support for this Cypress chip, as a low-level pinmuxing driver, is
mandatory if you want to configure and use the hardware pins, the ones
exposed as Arduino compatible.
Your use case of inserting an Arduino shield is exactly one of those that
will fail.
If you take a look here:
http://www.intel.com/content/www/us/en/embedded/products/galileo/galileo-g1-datasheet.html
you'll see the grey boxes "MUX": those refer to this chip.

The mapping:
http://download.intel.com/support/galileo/sb/galileoiomappingrev2.pdf

It is the same story for Gen 2, but the chip is different. Take a look at
boards/galileo/pinmux*
(and drivers/gpio/gpio_pcal9535a.c).

About USB: Zephyr has a low-level USB API. I know it works for the Quark SE
SoC, but not for the Quark X1000 (Galileo's).
It could be the same controller; I don't know if anyone has tried.

Tomasz

Tomasz,

Thanks for your answer. Like Fabio, I also want to use the Galileo v1
board, but I am not really certain about your answer and what type of
restrictions you are talking about. Myself, I will not use Cypress, so
maybe your answer would be different. But it would be nice to know what
GPIO restrictions there are on Galileo v1. Playing around with busybox
on Galileo was cool, but I would love to connect 2 devices to it (one
USB host, one Arduino shield). I am well aware that you work on Zephyr
and not Galileo.

Br

Gottfried
.

On Mon, Sep 19, 2016 at 7:37 AM, Tomasz Bursztyka <
tomasz.bursztyka(a)linux.intel.com> wrote:

Hi Fábio,

Unfortunately, we do not support Galileo v1 pinmuxing, so the whole
board is basically unusable at this stage.
You won't be able to get very far unless you provide the Cypress chip
driver.

Can you use another board? We support quite a few (see the boards
directory in Zephyr's tree).

Br,

Tomasz



Dear Sirs,
I am using Galileo Gen 1 (Cypress I/O expander) and I cannot change
the GPIO levels.
What GPIO driver should I select in the menuconfig tool (DesignWare,
PCAL9535, MMIO, Intel SCH)?
What driver name and pin numbers should I use in the API functions
(gpio_pin_configure, gpio_pin_write, ...)?
Thank you very much.
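
For reference, the generic calls being asked about are used roughly as
below. The binding name "GPIO_0" and pin number 5 are assumptions for
illustration only; as the replies above explain, on Galileo Gen 1 the
missing pinmux/Cypress support means the pins cannot actually be
driven this way yet:

#include <zephyr.h>
#include <device.h>
#include <gpio.h>

void gpio_example(void)
{
	/* "GPIO_0" and pin 5 are placeholders; the real binding name and
	 * pin mapping depend on the board and its pinmux configuration. */
	struct device *gpio = device_get_binding("GPIO_0");

	if (!gpio) {
		return;
	}

	gpio_pin_configure(gpio, 5, GPIO_DIR_OUT);
	gpio_pin_write(gpio, 5, 1);
}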


Re: [RFC]PWM API Update

Tomasz Bursztyka
 

Hi Baohong,

static inline int pwm_pin_set(struct device *dev, uint32_t pwm,
uint32_t period_cycles, uint32_t pulse_cycles);

static inline int pwm_pin_set_usec(struct device *dev, uint32_t pwm,
uint32_t period_usec, uint32_t pulse_usec);

Note: implementation of pwm_pin_set_usec API shall convert the
input (in usec) to cycles and then call pwm_pin_set.

I felt that get_cycles_per_sec() is not needed since there is already a
constant definition for this (e.g. CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
in boards/arduino_101_sss/arduino_101_sss_defconfig).
Actually, this might be true if the PWM device is on the SoC and/or its
controller is clocked at the same speed. But I think this is still needed
if the PWM device is external and clocked differently (external clock
source for it, etc.). Don't you think so?

Br,

Tomasz
