Hi,
I am currently porting our software from Windows (using FTDI's own
driver) to Linux (using libftdi). So far it works great, but I am now
working on a specific part of the program that needs to read chunks of
data from our device.
The commands between the host (now a PC, later an embedded machine like
a RPi) and our device are all 'request' -> 'reply' based. This specific
case requests 522 bytes of data from the device, sent over a
500,000 bps link. Note that there is no flow control on the link, but the
incoming and outgoing packets have a fixed / predictable size and a
reply can only follow a request. The program flow is linear:
    'ftdi_write' the request
    while (not all 522 bytes received)
        'ftdi_read' the reply
With the latency timer at 16 ms or so, the reply is guaranteed to be
created by our device before the latency timer expires. In that case the
problem does not occur (or at least not frequently enough to spot it).
With the latency timer at 1 ms, our device is not ready with the reply
in time, which means it takes about 6 iterations of expired latency
timers and ftdi_read calls that return 0 bytes. After that the complete
chunk of 522 bytes comes in. That is OK for me, BUT every once in a
while the complete chunk is not received and I get fewer than the
expected 522 bytes. This happens more often when the computer is busy. I
also never seem to hit the case where ftdi_read gives me some but not
all of the expected bytes, so the while loop for an incomplete read
(except the 0-byte case) seems redundant.
I think this happens because the host does not poll the USB bus often
enough and therefore loses data sent by the device, but that is just a
guess because:
* The read chunk size is 4096 and therefore larger than the expected 522
bytes
* If the latency timer is higher, the problem does not seem to occur (so
at what moment can we lose bytes? Where are bytes stored while the
latency timer has not expired yet, and can we lose bytes there just as
well?)
* The internal FTDI buffer is only 128 bytes (for sending) anyway, so
how can I even receive chunks of 522 bytes in one ftdi_read call?
* The first couple of ftdi_read calls return in about 1 ms (the latency)
with 0 bytes received, but THEN I get one single call (that takes
>10 ms!) that returns 'less than expected'. If I am inside ftdi_read
during that time, how can I lose bytes that were not there the call
before? Note that the same applies when a call succeeds: a couple of
1 ms calls, then a ~12 ms call with the complete packet.
Can anyone explain this to me (and maybe hand me some insights or a
solution :))?
Some more info:
* I checked with usbdump and it indeed seems that not all bytes are
actually transferred over the USB bus
* I know that the FTDI driver in Windows seems to poll all the time (at
1 ms latency it eats 'kernel CPU', especially in our case with more than
one FTDI device connected and the port open), but the behaviour I see
here does not happen in Windows (or only VERY infrequently)
* I am on kernel '3.16.3-031603-lowlatency', could the 'lowlatency' part
be interfering?
Thanks in advance!
Regards,
Hendrik
--
libftdi - see http://www.intra2net.com/en/developer/libftdi for details.
To unsubscribe send a mail to libftdi+unsubscribe@xxxxxxxxxxxxxxxxxxxxxxx