At this point I have designed a small pipeline that grabs the pixels, takes the Y (luminance) component, downscales the image (640x480 -> 80x60), and sends the picture over serial at 3 Mbaud. Frame grabbing and sending run at 30 Hz. There is no soft core involved; everything is done with homemade modules (I2C, pixel grabbing, downscaling ...), and I only use some BRAM for configuration storage, one 80-pixel line buffer for downscaling, and a 128-byte FIFO for the serial communication.
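The link budget works out comfortably: 80 x 60 pixels x 30 fps = 144,000 bytes/s, which with 8N1 framing (10 bits per byte) is 1.44 Mbaud, under half the 3 Mbaud line rate. The downscaling itself can be sketched in software (plain C++ here; the function name and interface are mine, not the actual hardware module): an 8x8 box average that accumulates into a single 80-entry line buffer, mirroring what the BRAM line store does in the FPGA.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// 8x8 box-average downscale: 640x480 luminance -> 80x60.
// Mirrors the streaming hardware: pixels arrive in raster order and are
// accumulated into one 80-entry line buffer; every 8th input line the
// buffer holds full 8x8 sums, which are divided by 64 and flushed.
std::vector<uint8_t> downscale(const std::vector<uint8_t>& y) {
    constexpr int W = 640, H = 480, SW = 80;
    std::vector<uint8_t> out;
    out.reserve(SW * (H / 8));
    std::array<uint16_t, SW> acc{};  // per-column sums: 64 * 255 fits in 16 bits
    for (int row = 0; row < H; ++row) {
        for (int col = 0; col < W; ++col)
            acc[col / 8] += y[row * W + col];
        if (row % 8 == 7) {          // every 8th line: emit one output line
            for (int c = 0; c < SW; ++c) {
                out.push_back(static_cast<uint8_t>(acc[c] / 64));
                acc[c] = 0;
            }
        }
    }
    return out;
}
```

Because only one line of accumulators is kept, no full frame buffer is needed, which is why a single 80-entry BRAM line store is enough.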
I made a little Java app to display the picture and test everything. The pictures are fine, but it seems that I sometimes get transmission errors.
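One simple way to confirm (and later drop) corrupted frames would be a per-frame checksum: the FPGA appends one extra byte after each frame, an 8-bit modular sum of all pixel bytes, and the viewer recomputes it on receipt. This is a suggestion, not something in the current pipeline; a minimal sketch in C++:

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Hypothetical per-frame integrity check: an 8-bit modular sum of all
// 80x60 pixel bytes, sent as one extra byte after the frame.
uint8_t frame_checksum(const std::vector<uint8_t>& frame) {
    uint32_t sum = std::accumulate(frame.begin(), frame.end(), 0u);
    return static_cast<uint8_t>(sum & 0xFF);
}

// Receiver side: recompute and compare; a mismatch means at least one
// byte was corrupted (or framing slipped) and the frame can be dropped.
bool frame_ok(const std::vector<uint8_t>& frame, uint8_t received) {
    return frame_checksum(frame) == received;
}
```

In hardware this is cheap: one 8-bit adder accumulating alongside the serial FIFO, reset at each start of frame.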
The next step is to add an edge detector to the pipeline, try to detect a line, and build a line-following robot in the future!
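For the edge detector, a standard choice that maps well to a streaming pipeline is the Sobel operator: two 3x3 convolutions give horizontal and vertical gradients, combined here as |Gx| + |Gy| (the usual hardware shortcut that avoids a square root) and thresholded. A software sketch on the 80x60 image (function and parameter names are mine):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sobel edge detector on an 8-bit grayscale image. Uses |Gx| + |Gy|
// instead of sqrt(Gx^2 + Gy^2). Border pixels are left at 0; in
// hardware, two buffered lines provide the 3x3 window, so the output
// lags the input by one line.
std::vector<uint8_t> sobel(const std::vector<uint8_t>& img,
                           int w, int h, int threshold) {
    std::vector<uint8_t> out(w * h, 0);
    auto p = [&](int x, int y) { return static_cast<int>(img[y * w + x]); };
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int gx = -p(x-1,y-1) + p(x+1,y-1)
                     - 2*p(x-1,y) + 2*p(x+1,y)
                     - p(x-1,y+1) + p(x+1,y+1);
            int gy = -p(x-1,y-1) - 2*p(x,y-1) - p(x+1,y-1)
                     + p(x-1,y+1) + 2*p(x,y+1) + p(x+1,y+1);
            out[y * w + x] = (std::abs(gx) + std::abs(gy) > threshold) ? 255 : 0;
        }
    }
    return out;
}
```

On the FPGA the multiplications by 2 are just shifts, so the whole window fits in a handful of adders plus two BRAM line buffers.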
And the board; the reset button is plugged directly into the female header ...
The sensor connects straight into the Papilio headers; I just had to add a little stripboard to get access to the VCC and GND of the Papilio. There is still room to connect a second camera ... stereovision, anyone?
Device Utilization :
Selected Device : 3s250evq100-4
Number of Slices:            252 out of 2448   10%
Number of Slice Flip Flops:  215 out of 4896    4%
Number of 4 input LUTs:      483 out of 4896    9%
  Number used as logic:           474
  Number used as Shift registers:   9
Number of IOs:                22
Number of bonded IOBs:        22 out of   66   33%
Number of BRAMs:               3 out of   12   25%
Number of GCLKs:               4 out of   24   16%
Number of DCMs:                2 out of    4   50%
EDIT: The project can be checked out on
The VHDL and SystemC code is not very clean, but I plan to do some refactoring.
The project also contains a small SystemC-to-VHDL translator I designed to help with this project. It is not a complete tool; it only translates RTL-style SystemC.
If you want to read the whole thing, please check the original project thread here.