Jon Medhurst (Tixy) <tixy@...>
On Tue, 2017-05-09 at 13:54 +0200, Tomasz Bursztyka wrote:
Hi Jon,

The main reason is that I hadn't thought of doing so ;-)
Just had a look at the sensor APIs and they could possibly work.
Two immediate questions spring to mind.
1. How to handle screen touched/not-touched state.
2. What units would a touchscreen use for its values?
For 1, we could decide a special Z value meant screen-not-touched. Or
have a new trigger type for screen-not-touched events. With the latter,
the user would have two different trigger functions (for not-touched and
for data available), requiring the driver and user to be careful not to
get in a mess with any concurrent access by these two sources.
For 2, if we returned display pixel coordinates, then the complicated
translation from touchscreen values to display coordinates would always
have to be done, which could be unnecessary overhead for some users.
Also, the application or user wouldn't be able to get at the raw values
in order to calibrate the system for that coordinate translation.
Possibly we'd need to support both raw and translated coordinates.
If the sensor driver did the coordinate translation, the sensor API
needs to gain a possibly messy new function for setting some touchscreen
calibration method (of which there could be many). So I'd favour just
returning raw touchscreen sample values the hardware produces and make
the user use other methods to convert that into more relevant
information. Though that doesn't seem to fit with the current sensor
types, which return well-defined SI unit values.