Idea for a Logic Analyser with high time accuracy.


hamster


I've been thinking: what is of more use in a logic analyser?


  •  A Logic Analyser with more bandwidth
  •  A logic analyser with greater temporal resolution

 


For example, if you were to capture at 800Mb/s, but the shortest pulse is 8 bits long (i.e. the signal content is just over 100MHz), then by throwing away the high-frequency information you can compress the bit stream quite a bit.


 


A simple compression coding scheme might be:


 


0x0 -> run of 8 bits before transition

0x1 -> run of 9 bits before transition

0x2 -> run of 10 bits before transition

...

0xD -> run of 22 bits before transition

0xE -> run of 15 bits without transition

0xF -> Escape code


 


A double escape followed by data nibbles could set the initial state of the four channels; a single escape could indicate a channel switch.
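
To make the idea concrete, here is a rough C sketch of a decoder for such a nibble stream. I've assumed the run codes continue linearly from 0x0 = 8 samples, and left the escape handling out - the function name and details are my own invention:

#include <stdint.h>
#include <stddef.h>

/* Sketch: turn a stream of 4-bit run codes back into samples.
 * Assumes 'samples' is large enough for the decoded data. */
static size_t decode_nibbles(const uint8_t *codes, size_t ncodes,
                             uint8_t initial_state, uint8_t *samples)
{
    uint8_t state = initial_state;
    size_t  out   = 0;

    for (size_t i = 0; i < ncodes; i++) {
        uint8_t c = codes[i] & 0xF;

        if (c <= 0xD) {               /* run of 8+c samples, then a transition */
            for (int j = 0; j < 8 + c; j++)
                samples[out++] = state;
            state ^= 1;
        } else if (c == 0xE) {        /* 15 samples with no transition         */
            for (int j = 0; j < 15; j++)
                samples[out++] = state;
        } else {                      /* 0xF: escape - not handled in sketch   */
            break;
        }
    }
    return out;                       /* number of samples reconstructed       */
}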


 


This would encode at between 26% and 50% of the raw data rate (at most 400Mb/s for an 800MHz capture): the worst case is a 4-bit code per 8-sample run, and long runs cost 4 bits per 15 samples via the 0xE code. It still provides the same timing resolution as an 800MHz capture, when capturing signals under 100MHz.


 


Humm...


 



Here's the coding system so far....

 

00 = output six bits - do not flip the output after

01 = output seven bits, then flip

100 = output 8 bits, then flip

101 = output 9 bits, then flip

110 = output 10 bits, then flip

1110 = output 11 bits, then flip

1111 = output 12 bits, then flip.

 

For runs longer than 12 bits you just prepend "00" codes (each adding six bits without a flip) - so "0001" is a run of thirteen bits. This scheme will always compress any bit string that consists only of runs of seven or more bits down to at most 37.5 percent of its original size.
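
To sketch how an encoder for this might look in C (the bit-writer and function names here are my own invention, and I assume every run between transitions is at least seven bits, as above):

#include <stdint.h>
#include <stddef.h>

/* Minimal bit-writer: appends the low 'nbits' of 'bits', MSB first. */
struct bitbuf { uint8_t *buf; size_t bitpos; };

static void put_bits(struct bitbuf *b, uint32_t bits, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--) {
        size_t byte = b->bitpos >> 3, bit = 7 - (b->bitpos & 7);
        if (bits & (1u << i)) b->buf[byte] |=  (uint8_t)(1u << bit);
        else                  b->buf[byte] &= (uint8_t)~(1u << bit);
        b->bitpos++;
    }
}

/* Encode one run of 'len' samples (len >= 7) that ends in a transition. */
static void encode_run(struct bitbuf *b, uint32_t len)
{
    /* Runs longer than 12 are prefixed with "00" codes, six samples each. */
    while (len > 12) {
        put_bits(b, 0x0, 2);                   /* 00: six samples, no flip */
        len -= 6;
    }
    switch (len) {
    case 7:  put_bits(b, 0x1, 2); break;       /* 01   */
    case 8:  put_bits(b, 0x4, 3); break;       /* 100  */
    case 9:  put_bits(b, 0x5, 3); break;       /* 101  */
    case 10: put_bits(b, 0x6, 3); break;       /* 110  */
    case 11: put_bits(b, 0xE, 4); break;       /* 1110 */
    case 12: put_bits(b, 0xF, 4); break;       /* 1111 */
    }
}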

 

So I'm thinking of using one SRAM page for each channel: the first bit of the page is the initial state, and then (page size in bits) * 2.5 samples worth of data. So if a memory page has 2048 bits it would hold 5120 samples, but is guaranteed to use only 1921 bits in the worst case - the spare 127 bits could be used for metadata perhaps....
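
As a quick sanity check on those numbers, assuming the worst case really is 3 code bits per 8-sample run:

#include <stdint.h>

/* Worst case for the code above is 3 code bits per 8-sample run (37.5%),
 * plus one bit for the channel's initial state. */
static uint32_t worst_case_page_bits(uint32_t nsamples)
{
    return 1 + (nsamples * 3 + 7) / 8;
}

/* worst_case_page_bits(5120) == 1921, which fits a 2048-bit page with
 * 127 bits to spare. */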


I'm not sure I understand it all, and I don't know whether all logic analysers work the same way.

 

With this kind of technique you cannot guarantee recording a known number of samples at a specific sample rate, so you can't guarantee a minimum record time, because it depends on the data being observed - which can be any sequence of binary symbols.

 

Compression that reduces the number of symbols is interesting for getting the data to a PC in less time. The faster we can send data to the host system, the sooner we can reuse the local memory without losing data. You can also take a 'snapshot' with an accurate, known number of samples at the maximum sampling rate.

Such a system is probably also more scalable: the more money you have, the more local memory you can put on the system, and the more samples you can fit in the 'snapshot', increasing the minimum record period.

 

If I had to build a logic analyser (I don't know how other projects work), I would:

Sample data and put it in local memory.

Send data from the local memory to a host system to be displayed.

The sample rate would be configurable, and the number of channels to observe would be configurable: 1, 2, 4, 8, 16, 32 ... to start with.

Ideally the upload link would have enough bandwidth to send all the data from local memory before any data is lost. Compression could be used here: it statistically increases the effective bandwidth, and you can also expect to save power (off-chip communication costs more power than on-chip communication) by transmitting less data.

 

My 2c thought.


So say you are working on an 80MHz project; using the OLS you would want to capture at 100MS/s. You are then left with uncertainty over whether your pulses are one cycle or two cycles long, so you have to capture at 200MS/s, using 200Mb/s per channel.

 

But how about this coding scheme? It is the least efficient of the bunch:

 

00 = A run of three bits, do not flip at the end 

01 = A run of three bits, flip at the end

10 = A run of four bits, flip at the end

11 = A run of five bits, flip at the end.

 

This guarantees the encoded stream is at most 66.6% of the raw sample count (the worst case is 00 00 00, or 01 01 01, or 00 01 00 01 etc., where every run is only three bits long).

 

Assemble these into frames, with four extra bits of header:

 

Bit 0 - initial state

Bits 2:1 - number of samples to drop from the initial code

Bit 3 - whether a runt pulse was seen in this frame

 

You can then fit 'n' samples in (2n/3+4) bits of memory.

 

0000 01 01 10 01 11 ...  decodes to 000 111 0000 111 00000 ... and no runt pulses were seen.
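
Here is a rough C sketch of a frame decoder for that scheme, which reproduces the example above; the bit ordering within the frame and the helper names are my own assumptions:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static const int run_len[4]  = { 3, 3, 4, 5 };   /* 00, 01, 10, 11 */
static const int run_flip[4] = { 0, 1, 1, 1 };

/* 'bits' holds the frame as one bit per byte, header first.  Decodes
 * 'nbits' bits of frame into 'samples'; returns the sample count. */
static size_t decode_frame(const uint8_t *bits, size_t nbits, uint8_t *samples)
{
    uint8_t state = bits[0];                    /* bit 0: initial state       */
    int     drop  = (bits[2] << 1) | bits[1];   /* bits 2:1: samples to drop  */
    /* bits[3] would be the runt-pulse flag - just informational here. */

    size_t out = 0;
    for (size_t i = 4; i + 1 < nbits; i += 2) {
        int code = (bits[i] << 1) | bits[i + 1];
        for (int j = 0; j < run_len[code]; j++) {
            if (drop > 0) { drop--; continue; } /* trim the initial code      */
            samples[out++] = state;
        }
        if (run_flip[code])
            state ^= 1;
    }
    return out;
}

int main(void)
{
    /* "0000 01 01 10 01 11" from the worked example above */
    const uint8_t frame[] = { 0,0,0,0, 0,1, 0,1, 1,0, 0,1, 1,1 };
    uint8_t samples[64];
    size_t n = decode_frame(frame, sizeof frame, samples);
    for (size_t i = 0; i < n; i++) putchar('0' + samples[i]);
    putchar('\n');                 /* prints 000111000011100000 */
}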

 

With a 2048-bit page size, you can *always* store at least 3,066 samples.

 

So back to the example: when working with an 80MHz design, would you rather:

 

a) Sample at 100 MHz and use 100 Mb of memory per second, and be able to see pulses > 10ns, with 10ns of uncertainty around where transitions occur

 

b) Sample at 200 MHz and use 200 Mb of memory per second, and be able to see pulses > 5ns, with 5ns of uncertainty around where transitions occur

 

c) Sample at 300 MHz and use 200.4 Mb of memory per second, and only be able to accurately capture pulses greater than 10ns, with 3.3ns of uncertainty around transitions, but also be told if pulses between 3.3ns and 10ns were present?
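
For anyone wondering where the 200.4Mb/s figure comes from, here is a quick back-of-envelope check (assuming one 2048-bit page is consumed per worst-case 3066 samples, and ignoring any other overhead):

#include <stdio.h>

int main(void)
{
    /* A 2048-bit page holds a 4-bit header plus 2 code bits per 3 samples
     * in the worst case, so it always fits at least this many samples:   */
    int samples_per_page = (2048 - 4) * 3 / 2;              /* 3066        */

    /* At 300 MS/s that works out to the memory rate quoted in option (c): */
    double pages_per_sec = 300e6 / samples_per_page;        /* ~97,848/s   */
    double mbits_per_sec = pages_per_sec * 2048.0 / 1e6;    /* ~200.4 Mb/s */

    printf("%d samples/page, %.1f Mb/s\n", samples_per_page, mbits_per_sec);
}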

 

(Other coding schemes allow greater compression for greater minimum run lengths).

 

Mike

