Question
AdamSorrel
I am running a continuous voltage acquisition on an MCC128 on a Raspberry Pi 5 (with a PREEMPT_RT kernel). The DAQ card runs at 10,000 samples/second, and I read 1000 samples using the Python sched module every 0.1 s. In theory I should be getting 1000 samples every 100 ms, but in practice this number fluctuates quite significantly.
I am not so much concerned about the actual number of samples, but it is critical to be able to time-stamp each value with reasonable precision.
I was initially concerned that the scheduling on the RPi 5 was not precise enough, but I am logging both the time at which the request was scheduled (second row) and the time right after the data was retrieved (third row), and both are reported as very precise (±100 ns for the request, ±0.1 ms for the read). As the first row of the attached plot shows, the number of samples per cycle is typically around 980, but it fluctuates significantly, from 900 all the way to 1100. The mean over the interval shown is 999.45, so almost exactly the expected 1000. The fluctuation is worrisome, because 100 samples correspond to 10 ms of timing uncertainty, which would be an unacceptable error in my case.
Question: I don't understand the hardware well enough to assess the source of this fluctuation in sample count. Can I rely on the timestamps being correct and "back-calculate" a timestamp for each sample from whatever number of samples I get? Or is there some changing offset that will throw my data off, and if so, is the offset at the trigger (the beginning of the data block) or at its end?
I am reading all samples available in the buffer in each loop using:
hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)
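For what it's worth, one way to back-calculate timestamps is from the cumulative sample count rather than from the wall-clock time of each read: if the 10 kHz scan clock can be trusted, sample k of the whole scan occurred at t0 + k/fs no matter how many samples each individual read returned. A minimal sketch of that idea (pure Python; the helper name and the variables t0/fs are my own, not part of the daqhats API):

```python
def back_calculate_timestamps(t0, fs, n_read_so_far, n_new):
    """Assign a timestamp to each of n_new freshly read samples.

    Assumes the DAQ hardware clock is accurate, so sample k of the
    whole scan occurred at t0 + k / fs regardless of how many samples
    each individual a_in_scan_read() call happened to return.
    """
    start = n_read_so_far
    return [t0 + (start + k) / fs for k in range(n_new)]

# Example: three reads of fluctuating size still yield evenly spaced stamps.
t0, fs = 0.0, 10_000.0
total = 0
stamps = []
for n in (980, 1050, 970):  # jittery per-read counts, as in the plot
    stamps.extend(back_calculate_timestamps(t0, fs, total, n))
    total += n
# total == 3000 samples, last stamp at (3000 - 1) / 10000 s
```

Note this only holds as long as no samples are lost: if the read status reports a hardware or buffer overrun, the cumulative count no longer tracks the hardware clock, so those flags should be checked on every read.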
The last row is just the voltage output. It is largely irrelevant, since the DAQ card is not connected to anything; I added it only to see the output. I am attaching the raw data used to generate the figure as well as the code I ran to read the DAQ card (see daqTest2.py).
Note: This question was first posted on GitHub.
I found a previous question on the forum that might be related (see here), but it does not answer my questions.
Attachments: outputDaq3raw.txt, daqTest2.py