libftdi Archives

Subject: Synchronous FIFO hardware considerations, was: Re: kudos

From: Uwe Bonnes <bon@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
To: libftdi@xxxxxxxxxxxxxxxxxxxxxxx
Date: Tue, 3 Apr 2012 18:50:29 +0200
>>>>> "Thomas" == Thomas Heller <theller@xxxxxxxxxx> writes:

    Thomas> Sure it is something similar to /dev/null currently.  

If there is an actual physical transfer, I don't consider it /dev/null...

    Thomas> I'm using
    Thomas> a FT2232H minimodule, TXE# is connected to WR#, and RXF# is
    Thomas> connected to RD#.  Let's just assume that my FPGA that I will
    Thomas> connect next can keep up with the read/write cycles.

The FPGA should easily cope with the rate. What's hard to meet is the
timing: WR# needs a minimum setup of 11 ns before CLKOUT, so with the
16.667 ns period (60 MHz) you have only 5.667 ns left inside the FPGA. You
won't reach that with an XC6S (Spartan-6) speed grade -4 device; I had to
relax the constraint by about 0.5 ns.

    Thomas> My program uses a home-grown ctypes-based Python wrapper for
    Thomas> libftdi; it reads and writes in 1 MB blocks from/to the
    Thomas> minimodule.  The functions that I use are ftdi_read_data() and
    Thomas> ftdi_write_data(); the program measures the time these
    Thomas> calls take and reports a data rate of 19.xxx MB/s in each thread.

    Thomas> I'm looking with an oscilloscope at the RD# and WR# signals; a
    Thomas> screen shot is attached.  I can see bursts of 8 blocks (of 512
    Thomas> bytes each, probably).

With real data, every time the FTx232H is not ready you need more clock
cycles to recover than with your data shortcut, so the data rate might drop.
But we still reach > 16 MByte/s.
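On the host side, the read loop Thomas describes can be sketched with
libftdi's public API roughly as below. This is an untested sketch for
illustration only: error handling is abbreviated, the VID/PID pair
0x0403/0x6010 is the stock FT2232H (adjust for your EEPROM), and the channel
must already be configured for FIFO mode in the EEPROM before BITMODE_SYNCFF
takes effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <ftdi.h>

int main(void)
{
    static unsigned char buf[1 << 20];   /* 1 MB block, as in Thomas' test */
    struct ftdi_context *ftdi = ftdi_new();
    if (!ftdi)
        return EXIT_FAILURE;

    if (ftdi_usb_open(ftdi, 0x0403, 0x6010) < 0) {
        fprintf(stderr, "open failed: %s\n", ftdi_get_error_string(ftdi));
        ftdi_free(ftdi);
        return EXIT_FAILURE;
    }

    /* Reset the bitmode first, then switch to synchronous FIFO. */
    ftdi_set_bitmode(ftdi, 0xff, BITMODE_RESET);
    ftdi_set_bitmode(ftdi, 0xff, BITMODE_SYNCFF);

    int n = ftdi_read_data(ftdi, buf, sizeof buf);
    if (n < 0)
        fprintf(stderr, "read failed: %s\n", ftdi_get_error_string(ftdi));
    else
        printf("read %d bytes\n", n);

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}
```

A write test looks the same with ftdi_write_data() in place of
ftdi_read_data(); on the wire, TXE#/RXF# throttle the FPGA as described
above.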

    Thomas> The PC is a lenovo W520 notebook, running Win7 64-bit.

A mighty machine...

Bye

-- 
Uwe Bonnes                bon@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

--
libftdi - see http://www.intra2net.com/en/developer/libftdi for details.
To unsubscribe send a mail to libftdi+unsubscribe@xxxxxxxxxxxxxxxxxxxxxxx   
