where to sync across clock domains


Guest russm


(I hope this isn't too much of a stupid question - I'm new to working with FPGAs)

I need to get data in and out of my processing core using SPI. All the implementations of this I've found (e.g. the SPI slave in OLS, a few tutorials, some EE course notes online) run the SPI slave off an FPGA-wide clock, with a bit of glue on the external signal lines to make sure that everything is synchronised to the internal clock.

My first thought was to run the SPI slave directly off SCLK and then synchronise across clock domains when the data is being passed to/from the main processing core.
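For concreteness, what I have in mind is roughly the sketch below (all the names are just placeholders and I haven't tried this on hardware yet) - a shift register clocked straight off SCLK, with a toggle that the core's clock domain would pick up through a two-flop synchroniser:

signal shift_reg : std_logic_vector(7 downto 0);
signal bit_cnt   : integer range 0 to 7 := 0;
signal rx_byte   : std_logic_vector(7 downto 0);
signal rx_toggle : std_logic := '0';  -- flips once per completed byte

process(sclk)
begin
  if rising_edge(sclk) then           -- SPI mode 0 assumed
    if cs_n = '0' then
      shift_reg <= shift_reg(6 downto 0) & mosi;
      if bit_cnt = 7 then
        rx_byte   <= shift_reg(6 downto 0) & mosi;  -- capture the finished byte
        rx_toggle <= not rx_toggle;                 -- tell the other domain
        bit_cnt   <= 0;
      else
        bit_cnt <= bit_cnt + 1;
      end if;
    end if;
  end if;
end process;

-- in the core's clock domain, rx_toggle would go through two flip-flops and an
-- edge detector to produce a one-cycle "byte ready" strobe before rx_byte is read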

Is there any particular reason that nobody seems to do it my way? The only things I can think of are that it would a) use an additional global clock buffer, and b) reduce throughput in a throughput-intensive application because of the sync pause on every data block.

Neither of these is a problem for my application, but I'm wondering if there's anything else I've missed. A general design principle against running multiple clock domains in one design? Interference? "It's just easier if everything runs off the same clock"?


Hi :) Let me see if I can shed some light on this clock thing.

Broadly, there are four scenarios when it comes to clocks:

a) You have one and only one clock which drives all synchronous elements,

b) You have only one clock, but derive a slower rate from it - either by dividing it down to an integer fraction of the original clock, or by using an enable signal instead of a dedicated clock (the latter is more usual),

c) You have two clocks where the frequency and phase difference are known - this way you can always figure out the worst case scenario and impose constraints on your design based on that,

d) You have two completely unrelated clocks, and here you need to resynchronize data from one clock domain to another.

You seem to have a d) scenario, so I'd suggest the following:

Use the external clock (note that it will be delayed) to synchronize your inputs/outputs, then use a FIFO with a different clock on each side to transfer data from one clock domain to the other. Block RAMs are good for this.
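To make that concrete, the storage behind such a FIFO is just a dual-port RAM with a different clock on each port - something like the sketch below (the wr_*/rd_* names are placeholders, numeric_std is assumed, and the read/write pointers with their gray-code synchronization are left out; in practice you would usually just drop in the vendor's dual-clock FIFO core):

type ram_t is array (0 to 255) of std_logic_vector(7 downto 0);
signal ram : ram_t;

process(wr_clk)                        -- write port: SPI clock domain
begin
  if rising_edge(wr_clk) then
    if wr_en = '1' then
      ram(to_integer(unsigned(wr_addr))) <= wr_data;
    end if;
  end if;
end process;

process(rd_clk)                        -- read port: internal clock domain
begin
  if rising_edge(rd_clk) then
    rd_data <= ram(to_integer(unsigned(rd_addr)));
  end if;
end process;

Synthesis tools will normally map this pattern onto a block RAM, which is why block RAMs work so well for crossing clock domains.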

If your internal clock is much faster than the SPI clock you're expecting, you can resynchronize everything to the internal clock. Just place a couple of flip-flops on each SPI input, clocked by your internal clock. You'll then be treating the SPI clock as a level (data) signal rather than a clock, so you won't attach it to the clock input of your synchronous elements, but rather use a scheme like this:

signal spi_clk_in_samples : std_logic_vector(1 downto 0);

process(clk)
begin
  if rising_edge(clk) then
    -- sample the external SPI clock with two flip-flops in the clk domain
    spi_clk_in_samples(0) <= spi_clock_in;
    spi_clk_in_samples(1) <= spi_clk_in_samples(0);
  end if;
end process;

Then:

signal spi_clock_rising : std_logic;

-- single-cycle pulse in the clk domain on each rising edge of the SPI clock
spi_clock_rising <= '1' when spi_clk_in_samples(0) = '1' and spi_clk_in_samples(1) = '0' else '0';

And then:

process(clk)
begin
  if rising_edge(clk) then
    if spi_clock_rising = '1' then
      -- process your data here (shift in MOSI, count bits, and so on)
    end if;
  end if;
end process;
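Putting the pieces together, the MOSI line gets the same two-flop treatment and the shift register only advances when spi_clock_rising pulses. This is just a sketch, and everything except clk and spi_clock_rising is a made-up name:

signal mosi_samples : std_logic_vector(1 downto 0);
signal rx_shift     : std_logic_vector(7 downto 0);
signal bit_count    : integer range 0 to 7 := 0;
signal byte_ready   : std_logic;

process(clk)
begin
  if rising_edge(clk) then
    -- synchronize MOSI the same way as the SPI clock
    mosi_samples(0) <= mosi_in;
    mosi_samples(1) <= mosi_samples(0);

    byte_ready <= '0';
    if spi_clock_rising = '1' then
      -- in mode 0 MOSI is stable around the SCLK edge, so the extra
      -- flip-flop delay doesn't matter when clk is much faster than SCLK
      rx_shift <= rx_shift(6 downto 0) & mosi_samples(1);
      if bit_count = 7 then
        bit_count  <= 0;
        byte_ready <= '1';  -- one clk-wide pulse, already in the clk domain
      else
        bit_count <= bit_count + 1;
      end if;
    end if;
  end if;
end process;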

One other thing: FPGAs usually don't let you use that many clocks per design, because they have what are called "clock regions". These clock regions have very fast dedicated clock paths, but only a limited number of clocks can be routed through them. So I'd go for this approach as long as your internal clock is at least about 4x faster than the SPI clock.


Guest russm

yep, my scenario is d) on your list, and SPI is significantly slower than the main FPGA clock - I'm building my proof-of-concept on a spare OLS board.

the design you mention is what I have seen elsewhere ("run the SPI slave off an FPGA-wide clock, with a bit of glue on the external signal lines to make sure that everything is synchronised to the internal clock"). I know this will work, since it's what everyone seems to do; I'm just curious about design approaches. basically I'd like to understand why things are done the way they are, instead of just cargo-culting in chunks of code.

cheers!

