Everything posted by alvieboy

  1. You can try using ZPUino itself to perform the SPI programming. Let me know if you need some pointers on how to do it. Contact me at Alvaro
  2. dcachev2_zpuino_preliminar.tar.gz

    Version 2.0.0


    Preliminary ZPUino dcache (v2)
  3. Implementing an IWF (Important Word First) cache is quite complex. I did it for xThundercore (another CPU I am developing), but it ended up quite big, and to be honest I did not see any spectacular performance improvement. One technique (which is simple, but may require compiler awareness) is to assume all forward branches to be a miss, and all backward branches to be a hit. I'll send you my dcache by private message (and the write buffer). Alvie
  4. Another question (note that I have not had much time to look at your implementation): why are you snooping the data bus cyc in the instruction cache (I assume it's your change; it has a TH comment on it)? Alvie
  5. Not if you use the embedded multipliers. Those are slow (I never managed to get a 32x32 one to work above ~105MHz or so). I have a data cache I wrote for ZPUino (not published; it's two-way set-associative). Let me know if you want to take a look. Regarding bitfiles: how do you program the design afterwards, or do you have to embed the code inside the bitfile? We can try porting the ZPUino bootloader to your new platform; it should be pretty much trivial. Alvie
  6. Do you have clang+llvm working for the platform? Last time I looked it seemed like work in progress. Or do they use gcc? What's the current clock rate for the system? Can it go past 50 MHz? Alvie
  7. I have powered an ESP8266 from the PPro rails just fine, even when overclocked. Just make sure you have nice caps on the ESP supply to support the high current bursts. On another note: I have ESP8266 wings ready, in case you want them. I can sell you the PCBs for USD 1.5 each, plus shipping. Or I can share the design with you if you want to build them yourself. Alvie
  8. Excellent. Now we should document that somewhere... just unsure where. The code size difference is substantial, I believe, even if you don't actually use writes. Alvie
  9. I assume you're using the SD library. If so, try creating a file named "config.h" in your sketch folder and add the following line: "#define SD_WRITE_SUPPORT 1". That should enable write support. Sorry this is not documented anywhere. It should be, but it's unclear where I'd put such information... Alvie
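Spelled out, that sketch-folder config.h is just one line (the comment is my addition; only the #define itself comes from the post above):

```c
/* config.h -- placed in the sketch folder, next to the sketch itself */
#define SD_WRITE_SUPPORT 1  /* compile the SD library with write support */
```

As post 15 below notes, write support is off by default to save code space, so this flag trades flash footprint for the ability to write files.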
  10. Good work Alvie
  11. You're the man, Markus
  12. AFAIK the diodes are just for protection [to make sure we don't kill the VGA monitor], and the caps (which should only be a few pF) are part of a low-pass [although HF] filter. Jack should have more details. You can perfectly live without them, though. Alvie
  13. I have an implementation for it. Let me know if I should publish it (I think it's on a non-published ZPUino branch). Alvie
  14. To be more precise: the FPGA boots the initial ZPUino [the FPGA design] from SPI flash; it starts, waits for 1 second for serial/USB commands, and then loads the user code from SPI flash and executes it. Alvie
  15. Not that I know of. It should work flawlessly out of the box. Note that often "writing" is disabled, to save code space. What issues are you encountering? Is it only writing that is not working? Alvie
  16. No, I never used MicroBlaze at all. I find the architecture (most notably the function preambles/postambles for the C ABI) a bit awkward. And it's commercial. Alvie
  17. I believe we also have the generic VGA working on DUO for those resolutions. Alvaro
  18. It should be fixed by now, I think.
  19. We have such an implementation using the SDRAM. If I recall correctly, it works OK at 640x480x8bit. Ideally you should get a memory which is almost 3 times as fast as the fastest datastream you need (read or write). You will also need to arbitrate between the read and write requests, and will probably need a read FIFO to account for the jitter of the arbiter (we do exactly that). You may want to take a look here: Alvaro
  20. Indeed. However, we have two FPGA designs. The master design always loads first, and can set those registers so that when the second (user) design loads, it is already set up. Not sure. I actually think we will not be able to use the method, because the PC will always drive the DP/DM signals... still to understand and test. I think the idea was to force a USB reset, and that will force DP/DM to go low (single-ended zero). May work, may not work... Alvie
  21. Btw, here's the USB wishbone controller:
  22. Any expertise is welcome. What we have put to work so far (with a PPro and my USB wing) packs a simple USB transceiver, so all PHY-related stuff is actually inside the FPGA. This works well for full-speed (12 Mbps), and we have a quite generic USB interface for it, with support for the most useful endpoint types, but not all (isochronous is not supported). The internal design we have also uses ULPI, so it should be fairly simple (never that simple, is it?) to use the USB3300 or another ULPI/UTMI chip. But USB 2.0 puts some emphasis on larger endpoint sizes, and memory is not that big internally. My original idea was to actually have an EHCI interface to the CPU, but I am not sure it is worth the effort. Not sure when I will be able to test the USB3300; I will let you all know when I do. Alvie
  23. Might well be... or at least play a role in it. Can you try setting all those actively driven pins to FAST?
  24. On the mapping report (.mrp) you should see whether your IO pins have dedicated IO registers:

     +--------------+------+-----------+-------------+------+----------+------+---------+----------+-------+
     | IOB Name     | Type | Direction | IO Standard | Diff | Drive    | Slew | Reg (s) | Resistor | IOB   |
     |              |      |           |             | Term | Strength | Rate |         |          | Delay |
     +--------------+------+-----------+-------------+------+----------+------+---------+----------+-------+
     | CLK          | IOB  | INPUT     | LVTTL       |      |          |      |         |          |       |
     | DRAM_ADDR<0> | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | OFF     |          |       |
     | ....         |      |           |             |      |          |      |         |          |       |
     | DRAM_CLK     | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | ODDR    |          |       |
     | DRAM_CS_N    | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST |         |          |       |
     | DRAM_DQ<0>   | IOB  | BIDIR     | LVTTL       |      | 12       | FAST | IFF     |          |       |
     |              |      |           |             |      |          |      | OFF     |          |       |
     | DRAM_DQ<1>   | IOB  | BIDIR     | LVTTL       |      | 12       | FAST | IFF     |          |       |
     |              |      |           |             |      |          |      | OFF     |          |       |
     | ...          |      |           |             |      |          |      |         |          |       |
     | DRAM_DQM<0>  | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | OFF     |          |       |
     | DRAM_DQM<1>  | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | OFF     |          |       |
     | DRAM_RAS_N   | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | OFF     |          |       |
     | DRAM_WE_N    | IOB  | OUTPUT    | LVTTL       |      | 12       | FAST | OFF     |          |       |
     +--------------+------+-----------+-------------+------+----------+------+---------+----------+-------+

     As you can see here, the address lines have OFF in the Reg(s) column, meaning "Output Flip Flop". The clock line is driven by an ODDR flip flop. The data lines have both OFF and IFF ("Input Flip Flop"). All other control lines have OFF. This ensures minimal delay from clock to pad, and from pad to chip, and it's very important for timing. You seem to have increased one of the timings, though. I wonder if it is the same issue we are facing with newer SDRAM parts. Alvie
  25. It may relate to different setup/hold values, and to how the FPGA routes things. Can you check that all your DATA/ADDRESS lines for the SDRAM have IO registers? Alvie