Tamires

Question

Hi,

I'm using Analog Discovery 2 to perform data acquisition through the two channels of the Oscilloscope, using the AnalogIn_Record.py code available in the SDK as a base.

However, when the sampling frequency is greater than 600 kHz, samples are always lost and corrupted. For sampling periods longer than 120 seconds the losses increase further.

 

Writing to a file takes almost as long as the acquisition itself, which makes the program impractical for my application. To reduce the effect of this delay, I use a thread that writes to the file in parallel with the acquisition. This mitigates the write delay satisfactorily, but it does not solve the corruption and loss problem.
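In case it helps, the writer thread I describe works roughly like this (a minimal sketch; the file name and chunk size are illustrative, and synthetic bytes stand in for the data the SDK returns from FDwfAnalogInStatusData):

```python
import queue
import threading

def writer(q, path):
    """Drain chunks from the queue and append them to a binary file."""
    with open(path, "ab") as f:
        while True:
            chunk = q.get()
            if chunk is None:      # sentinel: acquisition finished
                break
            f.write(chunk)

q = queue.Queue()
t = threading.Thread(target=writer, args=(q, "capture.bin"))
t.start()

# In the acquisition loop, hand each chunk to the writer instead of
# blocking on disk I/O (synthetic data here, not an actual SDK read).
for _ in range(10):
    q.put(bytes(4096))

q.put(None)   # signal end of acquisition
t.join()
```

This way the acquisition loop only pays the cost of a queue put, and the disk writes happen concurrently.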

 

The example program AnalogIn_Record.py offers no solution to this problem. It only suggests reducing the sampling frequency, which is not satisfactory; after all, the Analog Discovery 2 supports higher frequencies than the ones I want to use.

Is there an implementation better suited to mitigating this error?

 

Thank you in advance for your help.


3 answers to this question


Hi @Tamires

The record/play mode uses data streaming, so the rate is limited by the available USB bandwidth and computer performance. Normally it should work at up to 1-2 MHz.
Normal (single) acquisitions can run at the maximum sample rate (100 MHz), limited by the available device buffer size of 8-16 Ki samples on the AD2.
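As a quick back-of-the-envelope check (assuming the 16 Ki-sample configuration), the capture window of a single acquisition at full rate is:

```python
buffer_size = 16 * 1024        # 16 Ki samples (AD2, scope-heavy configuration)
sample_rate = 100e6            # 100 MHz maximum sample rate

# Duration covered by one device-buffer-limited acquisition
capture_time_us = buffer_size / sample_rate * 1e6
print(f"{capture_time_us:.2f} us")   # → 163.84 us
```

Anything longer than that at high rates has to go through record (streaming) mode.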


Hi @attila.


Thanks for the answer.


You say that the record/playback rate can reach 1-2 MHz, but running the AnalogIn_Record.py and AnalogIn_Record1.py examples from the WaveForms SDK did not give satisfactory results. The longer the acquisition time, and consequently the number of samples I want to collect, the more samples are lost and corrupted at frequencies above 500 kHz.


When executing AnalogIn_Record1.py exactly as provided, without any changes (hzAcq = c_double(1000000), nSamples = 300000000), the program never leaves the loop (while cSamples < nSamples:).
Besides USB bandwidth, what other bottlenecks and limitations cause samples to be lost and corrupted in acquisitions above 500 kHz?
Another question: how are corrupted samples handled internally? In several cases I get corrupted samples, yet the generated .csv still contains the expected number of samples.

Edited by Tamires

Hi @Tamires

The sustainable streaming rate depends on the available USB, CPU, and storage bandwidth.

To prevent, or reduce the chance of, device buffer overflow:
- use the 2nd device configuration to get more scope buffer
- connect the device directly to the computer; avoid hubs with multiple connected devices that share the USB bandwidth
- close unnecessary applications and services that could block the system
- store at lower resolution (64, 32, or 16 bit) and avoid text formats, to reduce storage size
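To illustrate the last point, the same 16-bit samples stored as raw binary take a fraction of the space of a CSV-style text file (a small sketch with synthetic values):

```python
import struct

samples = list(range(-1000, 1000))   # synthetic 16-bit sample values

# Raw little-endian int16: exactly 2 bytes per sample
binary = struct.pack(f"<{len(samples)}h", *samples)

# CSV-style text: one decimal string per line, typically 3-6 bytes per sample
text = "\n".join(str(s) for s in samples).encode()

print(len(binary), len(text))
```

Less data written per second means the storage bandwidth is less likely to become the bottleneck during a long record.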

The 'corrupted' samples indicate that a device buffer overflow may have occurred, when two consecutive chunks exceed the device buffer size. This is uncertain, but it could be a buffer overwrite. The record length can be set to unlimited (-1) and the acquisition stopped from the script, as implemented in the record py example loop.
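The record examples surface these counts through FDwfAnalogInStatusRecord, which reports available, lost, and corrupted sample counts on each poll. The accounting can be sketched with a stand-in status function (the polls below are synthetic, not from a device):

```python
def record(status_fn, n_wanted):
    """Accumulate record-mode samples until n_wanted have been accounted for.

    status_fn() stands in for FDwfAnalogInStatusRecord and returns
    (available, lost, corrupted) counts for the current poll; corrupted
    samples are part of the available data, only flagged as unreliable.
    """
    collected = lost_total = corrupted_total = 0
    while collected < n_wanted:
        available, lost, corrupted = status_fn()
        collected += available + lost   # lost samples still advance the record position
        lost_total += lost
        corrupted_total += corrupted
    return collected, lost_total, corrupted_total

# Synthetic polls: two clean chunks and one overflow with lost/corrupted samples.
polls = iter([(4096, 0, 0), (4096, 128, 64), (4096, 0, 0)])
print(record(lambda: next(polls), 12000))   # → (12416, 128, 64)
```

This is why a .csv can contain the expected number of rows even when samples were corrupted: the counts still add up, but some of the stored values are unreliable.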

Here I've used the Rec option in the WaveForms application with an AD2 and captured 300M samples at 1 MHz on 2 channels, storing the samples as doubles, without any issue.
This is similar to the py record examples, just easier to use.

