
MCC128 on RPi5: Fluctuation in number of samples returned by a_in_scan_start() and a_in_scan_read()


AdamSorrel

Question

I am running a continuous voltage acquisition on the MCC128 on a Raspberry Pi 5 (with a PREEMPT_RT kernel). I am running the DAQ card at a rate of 10,000 samples/second and reading 1000 samples every 0.1 s using Python's sched module. In theory, I should be getting 1000 samples every 100 ms; in practice, however, this number fluctuates quite significantly.

I am not so much concerned about the actual number of samples, but it is critical to be able to time-stamp each value with reasonable precision.

I was first concerned that my scheduling on the RPi 5 was not precise enough, but I am logging both the time the request was scheduled (second row) and the time right after the data was retrieved (third row), and these values are reported as very precise (±100 ns for the request and ±0.1 ms for the read). As you can see from the first row of the attached plot, the number of samples is typically around 980, but it fluctuates significantly, from 900 all the way up to 1100 samples per cycle. The mean value over the period shown is 999.45, so almost exactly the expected 1000. The fluctuation is worrisome because 100 samples is equivalent to 10 ms of uncertainty, which would be an unacceptable error in my case.

Question: I don't understand the hardware well enough to assess the source of this sample-count fluctuation. Can I rely on the timestamp being correct and "back-calculate" the timestamps for however many samples I get? Or is there some offset that changes and will throw my data off, and if so, is the offset at the trigger (the beginning of the data) or at the end of the data?

I am reading all samples available in the buffer in each loop using:

hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)
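
For context, the acquisition is configured roughly as follows (a minimal sketch only; the HAT address, channel, range, and buffer size here are assumptions, and the actual setup is in the attached daqTest2.py):

from daqhats import mcc128, OptionFlags, AnalogInputMode, AnalogInputRange

READ_ALL_AVAILABLE = -1  # same convention as the daqhats examples

hat = mcc128(0)  # HAT at address 0 (assumption)
hat.a_in_mode_write(AnalogInputMode.SE)
hat.a_in_range_write(AnalogInputRange.BIP_10V)

# Continuous scan on channel 0 at 10 kS/s; in CONTINUOUS mode,
# samples_per_channel only sizes the internal scan buffer.
hat.a_in_scan_start(channel_mask=0x01,
                    samples_per_channel=10000,
                    sample_rate_per_channel=10000.0,
                    options=OptionFlags.CONTINUOUS)

# Called from the sched loop every 0.1 s:
result = hat.a_in_scan_read(samples_per_channel=READ_ALL_AVAILABLE, timeout=5.0)
samples = result.data  # one channel, so len(samples) is the sample count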

The last row is just the voltage output. This is largely irrelevant, since the DAQ card is not connected to anything, but I have added it just to see the output. I am attaching the raw data used to generate the figure as well as the code I was running to get the data from the DAQ card (see daqTest2.py).

Note: This question was first posted on GitHub.

I have found a previous question on the forum which might be related (see here); however, it does not answer my questions.

 

[Attached plot: number of samples per cycle (first row), scheduled request time (second row), data-retrieval time (third row), and voltage output (last row)]

Attachments: outputDaq3raw.txt, daqTest2.py


3 answers to this question


Requesting data every 0.1 seconds does not guarantee that 1000 readings are returned. As mentioned in forum post 28379, requesting a specific amount of data is not recommended because the call blocks; use READ_ALL_AVAILABLE instead. Either way, you can rely on the samples being spaced precisely 100 µs apart, continuously, without gaps. So, every 0.1 seconds, request READ_ALL_AVAILABLE; 100 µs times the samples_per_channel count read is the elapsed time for each read event.
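
A minimal sketch of that pattern (assuming a single-channel continuous scan has already been started at 10 kS/s; the time.sleep loop stands in for your sched scheduling):

import time

READ_ALL_AVAILABLE = -1        # return whatever is buffered, without blocking
SAMPLE_PERIOD = 1.0 / 10000.0  # 100 us between samples at 10 kS/s

def read_loop(hat):
    # 'hat' is an mcc128 with a CONTINUOUS scan already started
    while True:
        result = hat.a_in_scan_read(samples_per_channel=READ_ALL_AVAILABLE,
                                    timeout=5.0)
        n = len(result.data)         # single channel: len(data) == samples read
        elapsed = n * SAMPLE_PERIOD  # time span covered by this read
        print(f'{n} samples -> {elapsed * 1e3:.1f} ms of data')
        time.sleep(0.1)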



Thank you for your comment. Just a quick summary:

  • As you can see in the code I have posted, I am reading all available data (READ_ALL_AVAILABLE is equivalent to the value -1).
  • I am not concerned about the amount of data I am getting, as I already stated in the question. I am concerned about the fluctuation of that amount.

As you mention, I can rely on the samples being spaced precisely 100 µs apart (in my case) with no gaps. That is great news. I also have the time at which a_in_scan_read() was triggered. Since the readout itself takes some amount of time, the next acquisition does not start at a well-defined time point. Instead, it is the last value of each dataset, collected just before the readout was requested, that is precisely timed by the readout request. So if I want a reasonably precise timestamp for my data, I need to back-calculate from the last value of the dataset using the sampling rate. Can you confirm that I am getting this right, please?
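
In code, the back-calculation I have in mind looks something like this (a sketch under my assumption that the last sample of a read is the one closest to the moment the read was requested; t_read and samples are placeholders):

SAMPLE_PERIOD = 1.0e-4  # 100 us at 10 kS/s

def back_calculate_timestamps(t_read, samples):
    """Anchor the last sample at the read-request time t_read and
    step backwards one sample period for each earlier sample."""
    n = len(samples)
    return [t_read - (n - 1 - i) * SAMPLE_PERIOD for i in range(n)]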

I still do not understand how I am regularly getting more than 1000 samples if, as you say, the samples are consistently 100 µs apart. The trigger time does not seem to fluctuate in a way that would explain this.

Edited by AdamSorrel


The MCC 128's processor buffers the ADC data at a precise rate generated by a hardware clock (into the hardware FIFO). The daqhats library starts a scan thread that efficiently transfers data from the hardware FIFO into the scan thread's buffer. This thread shares the SPI bus with any other DAQ HATs in use, so it has to be efficient with the bus. It periodically reads the processor status to know when to transfer the data. Because Linux is not a real-time OS, the daqhats library (and your program) gets Pi processor time at the mercy of the Linux scheduler, which may help explain the inconsistency in the number of scans read.

For a timestamp, add 100 µs for each sample to the previously calculated time. If a_in_scan_read returns 1050 samples, they represent 105 ms.
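
A minimal sketch of that bookkeeping (t0, the time of the scan's first sample, and reads, the successive read results, are placeholders):

SAMPLE_PERIOD = 1.0e-4  # 100 us per sample at 10 kS/s

def stamp(t0, reads):
    """t0: time of the scan's first sample; reads: successive
    a_in_scan_read(...).data blocks. Yields (timestamp, value) pairs."""
    t_next = t0
    for data in reads:
        for value in data:
            yield t_next, value
            t_next += SAMPLE_PERIOD  # a 1050-sample read advances 105 ms total

The timeline advances with the hardware sample count, not with the wall-clock time of each read, so a 1050-sample read simply covers 105 ms of the scan.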
