Analog Discovery 2 analog in, sample averaging bug?


Guest

Question

Hi,

I have been doing some tests to assess the AD2 analog-in performance. To this end I hooked up a signal generator to the AD2 and captured raw signals (using the FDwfAnalogInSample16 API function).

If I set my signal generator to DC, just a bit above the upper end of the analog-in channel's range, and sample at 100 kHz using the Decimate filtering mode, I consistently get back the sample value 32764. This matches my expectation: it is simply the maximum value that the 14-bit ADC can output (16383), multiplied by 4 to go from the 14-bit to the 16-bit range (65532), then shifted down by 32768 to make the range symmetric around 0, yielding, indeed, the value 32764.
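For anyone who wants to reproduce this, the capture boils down to something like the sketch below (Python over ctypes, following the layout of Digilent's WaveForms SDK sample scripts; the channel, range, and buffer size are placeholder values, and the raw 16-bit samples are pulled with the SDK's FDwfAnalogInStatusData16 call rather than the single-sample function I mentioned above):

```python
# Minimal reproduction sketch: clip channel 0 at a DC level above its range,
# sample at 100 kHz with the Decimate filter, and read back the raw 16-bit codes.
import sys
from ctypes import cdll, byref, c_int, c_double, c_ubyte, c_short

dwf = cdll.dwf if sys.platform.startswith("win") else cdll.LoadLibrary("libdwf.so")

hdwf = c_int()
dwf.FDwfDeviceOpen(c_int(-1), byref(hdwf))                   # first available device

dwf.FDwfAnalogInChannelEnableSet(hdwf, c_int(0), c_int(1))
dwf.FDwfAnalogInChannelRangeSet(hdwf, c_int(0), c_double(5.0))
dwf.FDwfAnalogInFrequencySet(hdwf, c_double(100e3))          # 100 kHz sample rate
dwf.FDwfAnalogInChannelFilterSet(hdwf, c_int(0), c_int(0))   # 0 = filterDecimate, 1 = filterAverage
dwf.FDwfAnalogInBufferSizeSet(hdwf, c_int(4096))
dwf.FDwfAnalogInConfigure(hdwf, c_int(0), c_int(1))          # start a single acquisition

sts = c_ubyte()
while True:
    dwf.FDwfAnalogInStatus(hdwf, c_int(1), byref(sts))
    if sts.value == 2:                                       # DwfStateDone
        break

samples = (c_short * 4096)()
dwf.FDwfAnalogInStatusData16(hdwf, c_int(0), samples, c_int(0), c_int(4096))
dwf.FDwfDeviceCloseAll()

# Expected clipped value: 16383 (14-bit max) * 4 - 32768 = 32764.
print(sorted(set(samples[:])))                               # Decimate mode: [32764]
```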

However, when I set the filtering mode to Average, I consistently get back the value 32760, which is not the average of a bunch of samples that are each individually equal to 32764.

I also found that the sample averaging process unfortunately does not use the lower 2 bits: they are always zero. I had hoped that the sample averager inside the FPGA would use more than 14 bits of precision for its adder, so that it could effectively use the full 16 bits that are available when transferring samples to the PC. But alas, that isn't so.
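Concretely, the check I'm describing amounts to something like this (continuing from the sketch above, but with the filter argument switched from 0 to 1, i.e. filterAverage):

```python
# `samples` captured as in the sketch above, but with filterAverage instead of filterDecimate.
print(sorted(set(samples[:])))           # [32760] rather than the expected [32764]
print(any(s & 0b11 for s in samples))    # False: the lower 2 bits are always zero
```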

I would appreciate it if someone on Digilent's side could look into the code that does the averaging to understand how it could ever yield the value 32760 when averaging sample values that are all 32764. It may well be that there is a bug in the sample averaging code that introduces a bias or something like that, and that would certainly not be desirable.

Cheers, Sidney


13 answers to this question


Well, it makes me distrust the calculation that is being performed.

I think it is a reasonable expectation that averaging a DC signal makes the result more accurate, rather than less accurate.


Hi @reddish

I thought this 0.012% error was a limitation of the HDL implementation, but it turned out to be an imperfection in 15-year-old C code: the averaging parameters were calculated for 16 bits instead of 14.
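For reference, that figure matches the discrepancy reported above:

```python
# Decimate mode returns 32764 for the clipped input, Average mode returns 32760.
error = (32764 - 32760) / 32764
print(f"{error:.4%}")    # 0.0122%, i.e. the ~0.012% error mentioned above
```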

It will be fixed in the next software version.

Thank you for the observation.


Nice, squashed an old bug.

It would be cool if the averaging produced a 16-bit result rather than the 14-bit result it seems to produce now; since samples appear to be transported as 16-bit numbers anyway, that could be a significant performance win at lower sample rates. But I'm not sure whether that would be hard to implement.


Okay, clear, I understand the tradeoff; thanks for the explanation. Sounds like you guys are really pushing the FPGAs to their limits, nice... :)


Adding to my previous question, I want to confirm: does the averaging happen inside the FPGA or on the PC?

Since I found that Record mode just transfers a bunch of internal buffers to the PC for processing, rather than doing something more clever (see this post), I don't trust that my mental model of how the device works is very reliable anymore.

Edited by reddish
