Hi,
I was working on some code requiring bitbang mode with a defined baudrate. The
real baudrate was not what I expected, so I looked at the code a bit and found
it a mess:
When using bitbang mode, the given baudrate is "magically" multiplied by 4 but
is reported back (via ftdi_context.baudrate) in this increased form. This *4
conflicts with the "actual baudrate is 16 times the baudrate" statement found
everywhere in the datasheets.
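To illustrate what I mean (a minimal sketch, error handling omitted; the
0x0403/0x6001 VID/PID is just the usual FT232 default and an assumption on my
part, and the printed value follows from the *4 behaviour described above):

#include <stdio.h>
#include <ftdi.h>

int main(void)
{
    struct ftdi_context *ftdi = ftdi_new();

    ftdi_usb_open(ftdi, 0x0403, 0x6001);
    ftdi_set_bitmode(ftdi, 0xff, BITMODE_BITBANG);

    ftdi_set_baudrate(ftdi, 9600);    /* request 9600 baud... */

    /* ...but the context reports the multiplied value,
       i.e. 4 * 9600 = 38400 */
    printf("ftdi->baudrate = %d\n", ftdi->baudrate);

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}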
I wrote a small test program which writes out a defined number of bytes at a
given baudrate and compares the expected and actual runtimes.
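In essence it does the following (a simplified sketch, not the actual
baud_test.c; note that the "expected" line encodes the naive assumption of one
output byte per baud period in bitbang mode, which is exactly what is in
question here):

#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <ftdi.h>

#define DATASIZE (100 * 1024)
#define BAUDRATE 9600

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    static unsigned char buf[DATASIZE];
    struct ftdi_context *ftdi = ftdi_new();

    ftdi_usb_open(ftdi, 0x0403, 0x6001);
    ftdi_set_bitmode(ftdi, 0xff, BITMODE_BITBANG);  /* async bitbang */
    ftdi_set_baudrate(ftdi, BAUDRATE);

    memset(buf, 0x55, sizeof(buf));

    double start = now();
    ftdi_write_data(ftdi, buf, sizeof(buf));
    double elapsed = now() - start;

    /* naive expectation: one output byte per baud period */
    double expected = (double)DATASIZE / BAUDRATE;
    printf("expected %.2f s, actual %.2f s, factor %.2f\n",
           expected, elapsed, expected / elapsed);

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}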
To make things more complicated, the actual runtimes differ systematically
from the expected ones (tested on BM-type, 2232D and R-type chips):

- serial:        expected / actual time about 1.0, everything ok
- async bitbang: expected / actual time about 0.25
- sync bitbang:  expected / actual time varies with baudrate and txbuffer
                 size; I have observed factors from 0.18 down to 0.05
I guess the factor of 0.25 for async bitbang was the reason the *4 was added
to the code back then. But can anybody explain where this factor comes from?
I could not find anything about it in the datasheets.
I ported my test program to D2XX and it behaves identically, so it doesn't
seem to be a bug in libftdi.
When looking at the timing diagram for sync bitbang (e.g. page 5 of
http://ftdichip.com/Documents/AppNotes/AN232R-01_FT232RBitBangModes.pdf )
one might expect a constant 1/6 of the real baudrate, i.e. a factor of about
0.17; that matches the upper end of my measurements, but not the 0.05. Why
does the factor change that much with baudrate and transmitted blocksize?
Does anybody know what's really going on?
Could anybody with a scope or a good logic analyzer at hand take a look at
what's really written out?
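For that, replacing the memset() in the sketch above with an alternating
pattern should make the measurement easy:

/* 0x00/0xff alternating: every pin toggles once per byte, so the
   square wave seen on any data pin runs at (real byte rate) / 2 */
for (int i = 0; i < (int)sizeof(buf); i++)
    buf[i] = (i & 1) ? 0xff : 0x00;

Comparing that pin frequency against the requested baudrate should show
directly which multiplier the chip really applies.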
You can find my test program in current git or at
http://developer.intra2net.com/git/?p=libftdi;a=blob;f=examples/baud_test.c;hb=HEAD
Kind regards,
Gerd
--
libftdi - see http://www.intra2net.com/en/developer/libftdi for details.
To unsubscribe send a mail to libftdi+unsubscribe@xxxxxxxxxxxxxxxxxxxxxxx