libftdi Archives

Subject: Re: FT245R transfer rates

From: Jan Wilmans <janwilmans@xxxxxxxxx>
To: libftdi@xxxxxxxxxxxxxxxxxxxxxxx
Date: Mon, 22 Feb 2010 15:39:15 +0100
Yes, I understand, but to actually be able to receive 9 full buffers, the hardware reading from the FT245 must
read the data faster than (or at least as fast as) it arrives over USB; otherwise the buffer fills up and no more data can be received.

My assumption was, and I see now how I was wrong about that, that the hardware would never be reading that fast. And so I assumed that
no more than one buffer's worth of data could ever arrive per millisecond.

Thanks for the links.



On 22 February 2010 15:24, Xiaofan Chen <xiaofanc@xxxxxxxxx> wrote:
On Mon, Feb 22, 2010 at 10:16 PM, Xiaofan Chen <xiaofanc@xxxxxxxxx> wrote:
> On Mon, Feb 22, 2010 at 10:08 PM, Jan Wilmans <janwilmans@xxxxxxxxx> wrote:
>> But what I think you mean is: data is being read from the FT245
>> simultaneously,
>> so while packets arrive, the receive buffer is being read, and more than 4
>> packets (up to 19, if the hardware reads fast enough) can be sent in 1 ms?
>> Greetings,
> I do not know how the internals of the FT245R work. But for USB bulk transfer,
> multiple packets (up to 19, each packet 64 bytes max for a full-speed bulk
> endpoint) can be done in one transfer (1 frame, 1 ms).
> I can imagine the following.
> The receive buffer can hold two 64-byte packets; each frame (1 ms)
> it can receive up to 19 USB packets of 64 bytes (9 full
> 128-byte buffers plus one 64-byte partial buffer) in theory.
> In reality it is probably less, and there is some overhead involved.

To put it another way: the USB SIE is fast enough (up to 1216 bytes/ms),
so if the FIFO engine is fast enough, the chip can transmit and receive
at whatever rate the USB SIE is capable of.

In reality the data rate is lower, due to protocol overhead and some
internal latency of the chip. The driver plays a big part as well.

Kind regards,

libftdi - see for details.
To unsubscribe send a mail to libftdi+unsubscribe@xxxxxxxxxxxxxxxxxxxxxxx
