Jaxartes

Everything posted by Jaxartes

  1. More: I'm not aware of any chart mapping bit values to voltages, though that doesn't mean there isn't one. I'd expect the relationship to be pretty nearly linear; at least, that's my guess at "ideal." So if you measure the actual binary-to-analog relationship you're getting (between nybble values in your Verilog and voltages at the VGA pins), it might tell you something. In particular: Is it the same for all three colors? Do certain bits make more or less difference than you'd expect, assuming linearity? Does any bit actually match what you'd expect for that bit, or for one of the other bits, assuming linearity and a 0.7 V full range? Also, does this FPGA have configurable output voltages (less than 3.3 V), and is there any chance one of those is in use? That seems unlikely to cause the major difference you're seeing, though. I've put a rough sketch of the kind of measurement setup I mean below.
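     As a minimal, untested sketch (the module and port names are placeholders; wire them into your own sync generator), you could drive one constant nybble on all three color channels during active video and measure the result with a multimeter. Under the linear, 0.7 V full-range assumption, a nybble value v should give roughly v/15 * 0.7 V, so for example value 8 should measure near 0.37 V.

         // Drive a fixed test value on all three 4-bit color channels so the
         // analog level can be measured and compared with the ideal value.
         module dac_probe(
             input            clk,          // pixel clock
             input            active,       // high during the visible part of the frame
             input      [3:0] test_value,   // nybble under test, e.g. from switches
             output reg [3:0] red, green, blue
         );
             always @(posedge clk) begin
                 red   <= active ? test_value : 4'd0;
                 green <= active ? test_value : 4'd0;
                 blue  <= active ? test_value : 4'd0;
             end
         endmodule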
  2. Various minor thoughts, none of them very novel, but I'll toss them out here: At http://papilio.cc/index.php?n=Papilio.ClassicComputingShield there are tables of pin mappings according to various notations. I wonder if settings for output current or drive strength, on the FPGA pins used for VGA, could be an issue? I haven't dealt with those, and it's probably completely different on Altera. Hooking a scope up to the digital lines (where you'd expect 3.3V) might help solve this problem. I don't have firsthand experience of the Computing Shield, but I'd expect that it generates pretty much the full 0 to 0.7 V range. I've used the Papilio Arcade MegaWing and didn't find the colors to be dim. If the order of bits were reversed, then I'd expect some low nybble values like 2 or 4 to show up brightly, which isn't what happened. Two experiments you could try: Pick 8 of the 12 VGA color bits and put them on the (I think) 8 LEDs on the BeMicro Max 10, then display colors (a rough sketch of this is below). Or pick just one of the 12 VGA color bits, put it on all 12 of those pins, and then try displaying all possible color values.
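     For the LED experiment, something like this (untested; the port names and the choice of bits are just placeholders for whatever your design and board actually use) would mirror a selection of the color bits onto LEDs:

         // Mirror 8 of the 12 VGA color bits onto 8 board LEDs so you can
         // see which bits are actually toggling while a picture is displayed.
         module vga_bits_to_leds(
             input  [3:0] red, green, blue,   // the 12 color bits in the design
             output [7:0] leds                // the board's 8 LEDs
         );
             // e.g. the top three bits of red and green, the top two of blue
             assign leds = {red[3:1], green[3:1], blue[3:2]};
         endmodule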
  3. Regarding video timings, the pixel rate is generally higher than the calculation of refresh*horizontal*vertical, due to periods during which no pixels are being transferred. For the most common 640x480x60Hz resolution, it's 40ns per pixel instead of 54ns. http://tinyvga.com/vga-timing has more detail. One way to stretch that time is to buffer part of the screen in the FPGA's own on-chip memory. If you read and buffer one scan line at a time, you can get it to 50ns per byte, using only 640 bytes of buffer memory. (Calculation: 640 pixels in a scan line, one byte per pixel, so a buffer to hold one scan line is 640 bytes; in the standard 640x480 screen, a scan line takes 32us; 32us/640=50ns.) A rough sketch of such a buffer follows this post. If you can reduce the amount of memory your screen image takes up, you might be able to fit it entirely onto the FPGA without any off-FPGA RAM. That gives you a number of benefits: it's easy to work with, it's fast, and (at least on the ones I've used) it's dual-ported. The downside is that capacity is limited (it depends on the particular FPGA). But you may be able to reduce how much you need, using techniques from back when computers' memory was much smaller: fewer bits per pixel; text mode / tile mapped graphics. (For example, 1024x768x8bpp would be 768kB, but https://github.com/Jaxartes/fpga_robots_game/ gets 1024x768 resolution with only 12kB RAM for video.)
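     As an illustration of the scan-line buffer idea, here's a rough, untested sketch (names and sizes are placeholders, assuming one byte per pixel and 640 pixels per line). A memory written this way should infer as on-chip block RAM:

         // One-scan-line buffer: one side fills it from main memory, the
         // other side reads it out at pixel rate.
         module line_buffer(
             input            clk,
             input            wr_en,      // asserted while filling from main memory
             input      [9:0] wr_addr,    // 0..639
             input      [7:0] wr_data,
             input      [9:0] rd_addr,    // pixel counter from the video timing logic
             output reg [7:0] rd_data
         );
             reg [7:0] mem [0:639];
             always @(posedge clk) begin
                 if (wr_en) mem[wr_addr] <= wr_data;
                 rd_data <= mem[rd_addr];
             end
         endmodule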
  4. The "FPGA reset" pin resets this FPGA itself. So maybe the Xilinx software doesn't let you use that pin. There wouldn't be any purpose in using it, since your logic wouldn't be running while the FPGA is being reset. So you'd need to find an alternative. Some that come to mind: If you've got a shield board plugged into the Papilio DUO, and that shield board has one or more buttons, you can use one of the buttons as your reset. There's a slide switch on P104. If you're not using it for anything else, you could use it as a reset. It's a little inconvenient to use a slide switch for that, because you'd have to move the switch twice (once to reset, and once to stop resetting). It's also possible to write some logic that processes the switch state, so that every movement of the slide switch generates a reset. It's not trivial but it's fairly simple. The easiest way for me to describe the logic is to write it in Verilog; here goes: module reset_finder(input clk, input slide_switch, output switch_moved); reg slide_switch_delayed = 1'd0; always @(posedge clk) slide_switch_delayed <= slide_switch; assign switch_moved = slide_switch != slide_switch_delayed; endmodule
  5. I believe that's the wrong pin [in reference to a question about the clock input and pin P55, which I think has since been edited out]. The Papilio DUO generic UCF file has the following:

         NET CLK LOC="P94" | IOSTANDARD=LVTTL;
         TIMESPEC TS_Period_1 = PERIOD "CLK" 31.25 ns HIGH 50%;

     See also http://forum.gadgetfactory.net/index.php?/files/file/235-papilio-duo-generic-ucf/. P55 seems to be one of the external pins on the board. Generally, to find out what pin maps to what, I rely on the published UCF files from this site, and if there's any confusion I refer to the schematic. Note that there is more than one set of pin numbers (FPGA and board) and more than one kind of thing termed "clock" (the SPI bus clock signal, for example). I don't see the references that identify P55 as clock or P38 as reset.
  6. The "snake" game from a few months ago inspired me to do something similar for the classic "robots" game (turned out, much more work than I'd anticipated!). It's implemented in Verilog on the Papilio Pro with the Papilio Arcade, but should be portable to other FPGA platforms with comparable basic capacity. I don't know if it would work on Papilio One; it uses a relatively small fraction of the on-chip resources but the timing margin is a kind of narrow. Source code repository at: https://github.com/Jaxartes/fpga_robots_game It displays in 1024x768 on a VGA monitor, and is controlled by keyboard. Or, it can be controlled from the serial port for those without a PS/2 keyboard hooked up. There's a one-minute demonstration video:
  7. Some miscellaneous tips; know that I'm not very advanced at this (I've been doing FPGA designs for two years, but only as a Saturday afternoon hobby).

     Using a logic output as a clock is not recommended on FPGAs. I'm not up on everything that can go wrong with it, but I believe the routing for the two kinds of signals is different, and glitches (false edges produced by the logic) might be a problem too. Two alternatives are: use one of the on-chip PLLs to convert the 32MHz clock to another speed (but 10Hz might be way out of its range); or use the higher speed clock (32MHz) and control the speed of your logic by applying a "clock enable" signal to every flip-flop. The clock enable is a synchronous signal, with 10 pulses per second and 1 clock cycle per pulse. That's what I'd recommend here (there's a sketch of it below).

     Net names have to get mapped to the pins on the FPGA. Generally the constraints file (which in ISE is a UCF file) controls this. If your constraints file is missing, or ISE is not set to use it, you might have problems (I don't know exactly which problems). You can find out what pin is used for what: it's in the GUI as "Pinout Report", or in a file named projectname_pad.txt, where the first two columns are pin number and net name.

     Look for warning messages from the build tools. They give so many warnings that it's a nuisance, and I often ignore them, but when something stops working they might tell you something. The GUI can show them to you, or they appear in various output files; some filename suffixes are: .syr, .par, .map, .bgn, .drc.

     ISE contains a simulator. If you can get that to work, it could show you what's going on inside your design. I haven't used it, so I can't give many pointers, except that you probably don't want to use the 10Hz divided-down clock in simulation; it would take too much simulated time.
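     Here's roughly what I mean by the clock-enable approach, as an untested sketch (the module name and counter width are my own choices): everything runs on the 32MHz clock, and the slow logic only acts on the cycles where the enable is high.

         // One-cycle enable pulse, 10 times per second, from a 32 MHz clock.
         // 32,000,000 / 10 = 3,200,000 clock cycles between pulses.
         module enable_10hz(
             input      clk,              // 32 MHz board clock
             output reg enable = 1'b0
         );
             reg [21:0] count = 22'd0;
             always @(posedge clk) begin
                 if (count == 22'd3199999) begin
                     count  <= 22'd0;
                     enable <= 1'b1;
                 end else begin
                     count  <= count + 22'd1;
                     enable <= 1'b0;
                 end
             end
             // Usage in the slow logic, still clocked at 32 MHz:
             //   always @(posedge clk) if (enable) begin ... 10Hz work ... end
         endmodule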
  8. .xise files are supposed to be opened by Xilinx's ISE toolkit. They're not PDF files, but for some reason your Windows system seems to think Adobe Reader should open them anyway. Maybe ISE isn't installed right or something; I wouldn't know why, since I don't use Windows much. Operating systems trying to figure out what application to use to open a given file sometimes get it wrong.
  9. It may be possible to have a logic analyzer on the FPGA and connect it only internally. Regarding what a wrong wishbone slot might do: there are a couple of ways it could turn out badly. If the register it's writing to controls the serial port, then you might lose connectivity, and it might appear like a lockup even though your code is running okay; it just can't communicate. If there is no device responding to that address, not even enough to do an ACK, then it would wait forever. I think ZPUino has something to prevent this, but I don't really know. A few more debugging ideas: Put short temporary debug prints in the sections of code you want to watch, so you can see with more precision where exactly it's getting stuck; it might not be where you think it is. I'd use single characters. Or, in place of the LEDs suggested above, use audio output as an indicator: superimpose a particular sound on top to indicate a particular state. It has its own difficulties -- it's not the simple on-off of LEDs, and you can't hear a 48MHz square wave (and probably wouldn't want to - ouch!). But if audio's what interests you, and what you have, you can use it.
  10. Some thoughts on debugging: I'm not aware of any general FPGA "debuggers", but I wouldn't necessarily know. (And I'm not a big fan of "debugger" tools anyway -- I often don't use them when I debug software.) Debugging is a really important thing when working with FPGAs; I recommend developing a wide variety of techniques, and patiently applying those techniques to the problem. One thing you can do is add output signals derived from whatever you want to monitor. So if you have a pair of unused LEDs, you could light one up when in a particular bus state, and light the other up when not in that state (a trivial sketch of this is below). When you get it running: if the "yes" LED is lit and the "no" LED is not, then it's perhaps stuck in that bus state. If the "no" LED is lit and the "yes" is not, then it's probably stuck in some other bus state. If both are lit, it's not stuck in any one bus state (but may be stuck in a cycle of them). Each time you do this, you might (or might not) get some information. Repeating it several times, monitoring for different states, might get you somewhere. (Like: what's it doing? Writing to an address. Is it the device address? No. What address is it? The wrong address.) Make sure you got the right wishbone slot. If the one in your C++ code (6) doesn't match the one in your circuit, then you might be writing to some other device's registers, which could have unpredictable and dire consequences, such as causing a lockup (or a loss of communication). A "logic analyzer" tool exists in DesignLab, though I don't know much about it.
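     The LED indicator really is as simple as it sounds. As a sketch (the condition, signal names, and module name are placeholders for whatever you actually want to watch):

         // Light one LED while in a particular bus state, the other while not.
         module bus_state_leds(
             input  wb_cyc, wb_stb, wb_ack,   // or whatever signals you watch
             output led_yes, led_no
         );
             // example condition: an access that hasn't been acknowledged yet
             wire state_of_interest = wb_cyc && wb_stb && !wb_ack;
             assign led_yes =  state_of_interest;
             assign led_no  = ~state_of_interest;
         endmodule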
  11. Re mixing Verilog and VHDL: Yes, it can be done; I think it's not very difficult. Re the Wishbone bus: It looks like both sides are using the Wishbone bus, but the signal names are different, a few signals are missing from one or the other, and it's possible there are semantic differences between signals with similar names. I'd recommend looking at the wbfmtx page at OpenCores for some documentation of its Wishbone interface, like maybe this one: http://opencores.org/websvn,filedetails?repname=wbfmtx&path=%2Fwbfmtx%2Ftrunk%2Fdoc%2Fspec.pdf (e.g. it says o_wb_stall is always set to zero). The Wishbone bus specification itself is also worth looking at; that can be found at OpenCores too. Overall: I recommend trying a less ambitious project first. Interfacing two complex cores together is tricky. Writing your own Wishbone core and interfacing to that might be a good "warm up" exercise. When I was in a similar situation, I ended up writing a small Wishbone "timer" device (a sketch of roughly what I mean is below). The advantage of a timer is that it has easy-to-understand, easy-to-investigate behavior, and it has no external interfacing other than the Wishbone bus.
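     This is not the actual device I wrote, just an untested sketch of the idea: a single read-only register that counts clock cycles, with the usual Wishbone slave signal names, acknowledging every access in one cycle.

         module wb_timer(
             input             wb_clk,
             input             wb_rst,
             input             wb_cyc,
             input             wb_stb,
             input             wb_we,
             output reg [31:0] wb_dat_o = 32'd0,
             output reg        wb_ack   = 1'b0
         );
             reg [31:0] counter = 32'd0;
             always @(posedge wb_clk) begin
                 counter <= wb_rst ? 32'd0 : counter + 32'd1;
                 wb_ack  <= wb_cyc && wb_stb && !wb_ack;   // one-cycle ack
                 if (wb_cyc && wb_stb && !wb_we)
                     wb_dat_o <= counter;                  // reads return the count
                 // writes are simply ignored
             end
         endmodule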
  12. The "papilio-prog" command line tool displays the device DNA when I use it to load a bitfile on a Papilio Pro (Spartan6). Source code is in the "papilio_prog" subdirectory of https://github.com/GadgetFactory/Papilio-Loader. Example (underlining added): user@host:~ $ sudo ~/bin/papilio-prog -vf fpga_robots_game.bit Using built-in device list JTAG chainpos: 0 Device IDCODE = 0x24001093 Desc: XC6SLX9 Created from NCD file: fpga_robots_game.ncd;UserID=0xFFFFFFFF Target device: 6slx9tqg144 Created: 2016/06/25 15:02:07 Bitstream length: 2724832 bits Uploading "fpga_robots_game.bit". DNA is 0x59aa4afe43849eff Done. Programming time 684.9 ms USB transactions: Write 176 read 8 retries 7
  13. I expect so, though I haven't tried it. The Spartan6 FPGA, at least, can read its own "Device DNA" with the DNA_PORT primitive, according to http://www.xilinx.com/support/documentation/user_guides/ug380.pdf (a rough sketch of using it is below). It sounds like maybe the Spartan3E can't, according to http://www.xilinx.com/support/documentation/user_guides/ug332.pdf. I don't know if reading it via JTAG boundary scan over the USB connection would work, but that also seems like a possibility.
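     Here's an untested sketch, written from my reading of the documentation rather than from working code, of shifting the 57-bit DNA out of the DNA_PORT primitive. The exact READ/SHIFT timing (and the maximum DNA_PORT clock frequency) should be checked against UG380 and the Spartan-6 libraries guide before relying on it.

         // Pulse "start" once; after the shifting finishes, "dna" holds the
         // value and "done" goes high.
         module dna_reader(
             input             clk,    // keep this slow enough for DNA_PORT
             input             start,
             output reg [56:0] dna  = 57'd0,
             output reg        done = 1'b0
         );
             reg       do_read  = 1'b0;
             reg       do_shift = 1'b0;
             reg [5:0] count    = 6'd0;
             wire      dna_bit;

             DNA_PORT dna_port_inst (
                 .DOUT  (dna_bit),
                 .DIN   (1'b0),
                 .READ  (do_read),
                 .SHIFT (do_shift),
                 .CLK   (clk)
             );

             always @(posedge clk) begin
                 do_read <= 1'b0;
                 if (start && !do_shift) begin
                     do_read  <= 1'b1;   // load the DNA into its shift register
                     do_shift <= 1'b1;
                     count    <= 6'd0;
                     done     <= 1'b0;
                 end else if (do_shift) begin
                     dna   <= {dna[55:0], dna_bit};   // capture one bit per cycle
                     count <= count + 6'd1;
                     if (count == 6'd56) begin
                         do_shift <= 1'b0;
                         done     <= 1'b1;
                     end
                 end
             end
         endmodule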
  14. Regarding uses for soft-CPUs: sending commands to / collecting results from your FPGA design (like for a user interface, or for debugging); being able to create a sort of custom microcontroller with selected (or newly created) peripherals; learning / experimenting with (small) computer architecture. Regarding your questions about next projects, and about interfacing with the AVR: why NOT do an SPI interface from scratch? It doesn't sound that complicated, and you could adapt it for controlling your future FPGA designs from the AVR. Make the FPGA act as SPI "slave" and the microcontroller as "master"; at first, just do something simple like lighting LEDs or reading switches (a rough sketch of the slave side is below). There are also probably plenty of SPI implementations out there you could try to adapt to your needs, although modifying someone else's code to fit a different purpose can be difficult in itself.
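     As an untested sketch of the "light some LEDs" version (names are placeholders, SPI mode 0 assumed): the SPI pins are sampled with the FPGA's own clock, which works as long as the SPI clock is much slower than clk.

         module spi_slave_leds(
             input            clk,              // e.g. the 32 MHz board clock
             input            sck, mosi, ssel,  // from the AVR; ssel active low
             output reg [7:0] leds = 8'd0
         );
             reg [2:0] sck_sync  = 3'd0;
             reg [1:0] mosi_sync = 2'd0;
             reg [1:0] ssel_sync = 2'b11;
             reg [7:0] shift     = 8'd0;
             reg [2:0] bitcnt    = 3'd0;

             always @(posedge clk) begin
                 sck_sync  <= {sck_sync[1:0], sck};     // synchronize the inputs
                 mosi_sync <= {mosi_sync[0], mosi};
                 ssel_sync <= {ssel_sync[0], ssel};
                 if (ssel_sync[1]) begin
                     bitcnt <= 3'd0;                    // deselected: restart
                 end else if (sck_sync[2:1] == 2'b01) begin
                     shift  <= {shift[6:0], mosi_sync[1]};   // SCK rising edge
                     bitcnt <= bitcnt + 3'd1;
                     if (bitcnt == 3'd7)
                         leds <= {shift[6:0], mosi_sync[1]}; // full byte received
                 end
             end
         endmodule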
  15. Here's what I think is going on, based on looking at the code in https://github.com/jamesbowman/swapforth/tree/master/j1b/verilog. I think it indicates a read from some I/O device's registers. That is, instead of reading/writing memory, the instruction reads some data provided by one of the peripheral devices. This is one of the common ways a computer design could allow access to I/O devices. It looks like J1B uses a mixture of port-mapped and memory-mapped I/O (see https://en.wikipedia.org/wiki/Memory-mapped_I/O): as I look in xilinx-top.v I see that some memory accesses go to registers such as "uart_baud" too. Analysis in more detail: In j1.v: When "insn[6:4] == 5" then func_ior is asserted to 1. When that is true (along with some other things) then the output signal "io_rd" is asserted to 1. In xilinx-top.v: The signal io_rd is delayed one clock cycle (as io_rd_) and used, along with mem_addr (delayed as mem_addr_), to receive from the UART (see the signal uart_rd).
  16. Regarding SDRAM CKE: I'll point out that the data sheet's "recommended power-up sequence for SDRAM" includes bringing CKE low and then high again, although it doesn't say how long it needs to be low. Hamster's SDRAM controller only holds it low for a single clock cycle, so it probably doesn't make any difference.
  17. Some thoughts, more from what you're trying to do than from looking at your code:

     210MHz is a rather high clock rate to get on these FPGAs. I think it can be done, but it might require some trickery (especially pipelining) to make timing. PWM is a rather simple thing: you have a counter, you compare the counter value with your setting, and the result of the comparison is your output (a bare-bones sketch is below).

     When you have more than one clock, the interaction between logic that uses one clock and logic that uses the other can be tricky. I really haven't experimented with this much; usually I use the same clock for everything. I think you're on the right track with this code -- there's a piece which handles the Wishbone bus interaction (Wishbone_to_Registers_n, on the 96MHz clock), another part which handles the PWM (on the >=210MHz clock), and a simple one-way connection between them (register_out_array). But you may have to put one or two clock cycles' worth of register delays between the components, to give XST a chance to sync them up. For paths that cross between the two domains, it otherwise tries to make the logic respond within the worst-case interval between an edge of the 210MHz clock and an edge of the 96MHz one; I think for 210MHz that's something like 300ps! Choosing your clock rate wisely might improve that: I believe for 216MHz it would be 1160ps, and for 192MHz, 5200ps. All this assumes your two clocks are derived from the same off-chip source (the 32MHz clock on the Papilio board). If you have a second clock signal coming from off-chip, the situation gets more complicated.

     Also, I don't see anywhere that "clk_96mhz" is being supplied to your code. You use it (passing it to Wishbone_to_Registers_n) but the only clock you're getting is "PWM_clk". You need to get both from somewhere... If you want to see code which handles multiple clocks in DesignLab, you might look for the video code. It uses multiple clocks for the video output, and presumably still uses the 96MHz system clock for its Wishbone bus interface. It's probably a rather complicated example, but at least it's an example.
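     Here's the counter-compare PWM as an untested sketch (names and the 8-bit width are my own choices). The two extra registering stages on "setting" are the kind of cross-clock delay I was describing; "setting" would come from the Wishbone-side register array.

         module simple_pwm(
             input            clk,       // the fast PWM clock
             input      [7:0] setting,   // duty cycle, 0..255, from the other domain
             output reg       pwm_out = 1'b0
         );
             reg [7:0] counter    = 8'd0;
             reg [7:0] setting_r1 = 8'd0, setting_r2 = 8'd0;
             always @(posedge clk) begin
                 setting_r1 <= setting;       // re-register the cross-clock value
                 setting_r2 <= setting_r1;
                 counter    <= counter + 8'd1;
                 pwm_out    <= (counter < setting_r2);   // the comparison is the output
             end
         endmodule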
  18. You set the baud rate on the host (PC) side in whatever program you're using to communicate with the FPGA. You're right, there are only two pins which carry the data between the FPGA and the FTDI chip. The transfer rate isn't communicated over them -- it's set independently on each side, and the two sides have to be set to the same value (within a few percent) in order to communicate. On the FPGA side, this is something you do in your Verilog code, usually with some kind of counter (a sketch is below). On the other side, it's controlled by the host (PC): I imagine the host sets up the baud rate on the FTDI chip over USB. Anyway, that's taken care of in the operating system and device drivers, which in turn are set up by whatever communication software you run on the PC side. The exact details of that depend on the communication software and the operating system.
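     The usual counter approach on the FPGA side looks roughly like this untested sketch: generate one "tick" per bit period and clock the UART logic off those ticks. For 115200 baud from a 32MHz clock the divisor is about 278 (32,000,000 / 115200 = 277.8), which lands within the few-percent tolerance mentioned above.

         module baud_tick(
             input      clk,             // 32 MHz board clock
             output reg tick = 1'b0      // one-cycle pulse per bit period
         );
             reg [8:0] count = 9'd0;
             always @(posedge clk) begin
                 if (count == 9'd277) begin
                     count <= 9'd0;
                     tick  <= 1'b1;
                 end else begin
                     count <= count + 9'd1;
                     tick  <= 1'b0;
                 end
             end
         endmodule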
  19. Assuming you're in DesignLab, maybe try clicking the "Load Circuit" button before the "Upload" button. The "Load Circuit" button will upload both ZPUino and whatever I/O devices are used in your sketch.
  20. When it comes to VGA output, different boards will have different numbers of digital pins used to generate the analog color lines. On Papilio, the VGA Wing uses three (one per color component), the LogicStart MegaWing uses eight (2-3 per color component), and the Arcade MegaWing uses twelve (4 per color component). So when adapting a design meant for one of them to another, you have to change the relevant parts. As for the errors you're getting, I can't be much help, as I don't usually work with VHDL but with Verilog instead. But what I suspect is that there's a difference between a single logic signal (STD_LOGIC) and an array of some number of logic signals (STD_LOGIC_VECTOR), even if the "some number" is 1. I believe that's the same thing treadstone says above. You might have to make small changes to the syntax of the VHDL code to change one to the other. To up-convert from one bit per component to three or four, you can do what treadstone suggests above (make that one bit the msb, and fill the rest in with zero), though what I'd usually do instead is copy that one bit to all three or four (a tiny illustration is below).
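     In Verilog terms (signal names are placeholders; the VHDL equivalents use concatenation and aggregates in much the same way), the two up-conversions look like this:

         // Widening a 1-bit color component "red1" to a 4-bit "red4":
         wire [3:0] red4_msb  = {red1, 3'b000};  // single bit as MSB, rest zero
         wire [3:0] red4_copy = {4{red1}};       // copy the bit to all four bits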
  21. Yeah, I can see that. I don't know exactly what's causing the error, but my guess is that 'video_ram' was meant for the Spartan-3 FPGA found in the Papilio One board, and not the Spartan-6 FPGA you're building it for. A lot of Verilog or VHDL code is portable, but some things which use vendor primitives might not be, and it looks like video_ram is one of those; see https://github.com/thelonious/vga_generator/blob/master/vga_text/ipcore_dir/video_ram.v. I see two options for fixing it: (a) use Xilinx's core generator to make one that will be suitable for Spartan-6; or (b) write one which can be inferred and will therefore be more portable. I would usually choose (b), but I'm afraid neither (a) nor (b) is really easy. (a) is explained in chapter 15 of the eBook linked here: http://forum.gadgetfactory.net/index.php?/page/articles.html/_/papilio/logicstart-megawing/intro-to-spartan-fpga-ebook-r34 For (b), a rough sketch of the kind of RAM that can be inferred is below.
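     This is an untested sketch, not a drop-in replacement: the sizes, port names, and parameters are placeholders that would have to be matched to what the rest of the design expects from video_ram. Written this way, XST should infer block RAM on either Spartan-3 or Spartan-6.

         module inferred_video_ram #(
             parameter ADDR_BITS = 12,
             parameter DATA_BITS = 8
         )(
             input                      clk,
             input                      we,
             input      [ADDR_BITS-1:0] waddr,
             input      [DATA_BITS-1:0] wdata,
             input      [ADDR_BITS-1:0] raddr,
             output reg [DATA_BITS-1:0] rdata
         );
             reg [DATA_BITS-1:0] mem [0:(1<<ADDR_BITS)-1];
             always @(posedge clk) begin
                 if (we) mem[waddr] <= wdata;   // write port
                 rdata <= mem[raddr];           // registered read port
             end
         endmodule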
  22. Regarding the error message: I can't read it in your post, it just shows up as "?" for some reason. Regarding how to "call" it: The "Code x32" means "character code 0x32". That is, when the value 50 (0x32) is found in the video_ram, the corresponding font data (what you quoted above) is displayed on the screen. So the question for you is how to get the codes for "hello world" into the video_ram. It looks like emitCharCodes.pl is part of that process. I think it's used to generate a new video_ram.coe file, and then you build the whole thing using ISE.
  23. By tools do you mean DesignLab? I think it could go either way. It gives you a lot of help, but also imposes its own way of doing things. I was able to do plenty without it (until starting my current project, which depends on DesignLab for unavoidable and non-technical reasons). Soft-CPU and SoC (system on a chip) designs, in Verilog and VHDL, are plentiful.
  24. Unfortunately, any such component is going to have to interface to the memory controller to read from RAM, and perhaps share it with whatever component is doing the writing. I agree that's a pain, but it's unavoidable. The one aspect you don't have to deal with is USB: the Papilio boards have a USB-to-serial converter already built in on a separate chip (the FTDI chip). So what you need to find VHDL code for is serial communication, period. You connect that to the correct two pins, to go to the FTDI chip; you interface the code to your architecture; and that's that. If there is, or could be, a soft-CPU somewhere in your design, that makes serial interfacing a bit easier, though it has its own complexities. If not, have a look at http://opencores.org/project,uart2bus. I haven't tried it, because I haven't needed it, but it provides a memory-oriented interface accessed over a serial line. The fact that it's designed for 16 bits of address and 8 bits of data might be annoying, but it's not insurmountable. A small CPU, using a few kilobytes of on-chip RAM, and with a serial interface and a memory interface, is the route I would take: start with a simple working system that's already been created and has a serial interface (I expect most of them do); then add the "glue" logic to connect it with your design, and write a program to talk over the serial port and perform whatever test actions you have in mind. Best would be one that doesn't need to use the SDRAM itself and can operate entirely from the on-chip BRAM. http://forum.gadgetfactory.net/index.php?/topic/2511-papilio-pro-without-sdram/, a recent thread on this forum, might be of interest in that light.
  25. While experimenting with the example "ZPUino_VGA_Adapter/Demo" in DesignLab 1.0.7 I've run into a couple of problems. I've attached a copy of the sketch, with my modifications to variously make the problems better or worse. I was running on a Papilio Pro, with the Arcade MegaWing, with 800x600 graphics selected. Code: video_sketch_2.ino.txt

     1. The microsecond time values that are transmitted on the serial port are sometimes very wrong. The wrong values are ones like 4254601156 -- a little under 2^32. The cause: TIMERTSC wraps around every 40 seconds or so, and so does the result of micros() based on it. Now, subtracting one "unsigned long" from another when the wraparound occurs wouldn't be a problem; but dividing each value by 96 (or whatever clocksPerMicrosecond is) and then subtracting results in this problem. I see that some fix was put in for millis() but none for micros(). Workaround: a microsecond-granularity "stopwatch", functions stw*(). But they wouldn't be portable to non-ZPUino platforms, and they assume everyone wants microseconds.

     2. With the workaround for #1, a new bug popped up: there's a crash happening in testFilledRects() and testFilledRoundRects(). It either hangs the sketch or restarts it. This problem turns out to have nothing to do with #1; it's just one of those "heisenbugs" which disappear and reappear when you change the code or try to debug them. The cause: the loops in those two tests will sometimes pass out-of-range (namely negative) values to gfx.fillRect() and gfx.fillRoundRect(); and when those functions try to draw to off-screen addresses they probably overwrite some important memory location. It's a matter of opinion whether the bug is in the sketch, for passing illegal values to those functions, or in those functions, for accepting illegal values. Workaround/fix: changing the loops to avoid out-of-range values. Problem #2 only appeared with those two functions, as far as I could see; it might apply to others but be harder to reproduce.