I'm working on getting my MCC 118 to output scaled data using the mccdaq daqhats Python library, but it's not working as I would expect.
If I call calibration_coefficient_write() with a slope of 1 and an offset of 0, I expect OptionFlags.DEFAULT and OptionFlags.NOSCALEDATA to output raw ADC codes when calling either a_in_read() or a_in_scan_start().
The documentation (https://mccdaq.github.io/daqhats/python.html#daqhats.mcc118.calibration_coefficient_write) says the coefficients are applied in the library as:
calibrated_ADC_code = (raw_ADC_code * slope) + offset
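To make the formula concrete, here is a small sketch of my own (not library code); the slope and offset are the factory coefficients my board reports for channel 0 in the output further down, and 2048 is just a mid-scale raw code picked for the example:

def apply_calibration(raw_code, slope, offset):
    # calibrated_ADC_code = (raw_ADC_code * slope) + offset, per the docs
    return raw_code * slope + offset

print(apply_calibration(2048, 1.0438019176008568, -82.56079326441613))
# prints roughly 2055.1, so a calibrated code is generally not the raw code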
This looks like a bug to me, unless I'm doing something wrong.
Here is a test program that demonstrates the bug:
#!/usr/bin/python3
"""
Demonstration of a bug with scaling in a_in_read() and a_in_scan_start().
OptionFlags.NOSCALEDATA does not do what I expect.
ai_hat.calibration_coefficient_write() affects the result.
https://mccdaq.github.io/daqhats/python.html#daqhats.mcc118.calibration_coefficient_write
"""
from daqhats import mcc118, OptionFlags

address_118 = 0
ai_hat = mcc118(address_118)

def simple_read():
    # Single reads of channel 0, scaled (DEFAULT) and unscaled (NOSCALEDATA)
    print(ai_hat.a_in_read(0, OptionFlags.DEFAULT))
    print(ai_hat.a_in_read(0, OptionFlags.NOSCALEDATA))

def scan_finite(nsamples):
    # Finite scan of channels 0-3 at 10 kS/s, first scaled, then unscaled
    ai_hat.a_in_scan_start(0b1111, nsamples, 10000, OptionFlags.DEFAULT)
    print(ai_hat.a_in_scan_read(nsamples, 1.0))
    ai_hat.a_in_scan_cleanup()
    ai_hat.a_in_scan_start(0b1111, nsamples, 10000, OptionFlags.NOSCALEDATA)
    print(ai_hat.a_in_scan_read(nsamples, 1.0))
    ai_hat.a_in_scan_cleanup()

def set_coefficients():
    # Show channel 0's coefficients, overwrite channels 0-3, show channel 0 again
    print(ai_hat.calibration_coefficient_read(0))
    ai_hat.calibration_coefficient_write(0, 1.0, 0.0)
    ai_hat.calibration_coefficient_write(1, 1.0, 0.0)
    ai_hat.calibration_coefficient_write(2, 0.0, 0.0)
    ai_hat.calibration_coefficient_write(3, 0.0, 10000.0)
    print(ai_hat.calibration_coefficient_read(0))

simple_read()
scan_finite(1)
set_coefficients()
simple_read()
scan_finite(1)
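For reference, I saved this as bugtest.py and ran it directly (./bugtest.py, as shown in my output further down) with a single MCC 118 at address 0.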
Stripped down to the simplest case: write a slope of 1 and an offset of 0 for channel 0, then read it back with both flags:

ai_hat.calibration_coefficient_write(0, 1.0, 0.0)
print(ai_hat.a_in_read(0, OptionFlags.DEFAULT))
print(ai_hat.a_in_read(0, OptionFlags.NOSCALEDATA))
This outputs:
-0.029296875
2042.0
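For what it's worth, those two numbers are at least consistent with each other if I assume the 12-bit converter maps codes 0-4095 onto the ±10 V range: (2042 - 2048) * 20 / 4096 = -0.029296875. So with slope=1 and offset=0 the NOSCALEDATA value is exactly the code implied by the DEFAULT voltage.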
Here is the output of the full test program:
$ ./bugtest.py
-0.000786520708334848
2046.795118641332
MCC118ScanRead(running=False, hardware_overrun=False, buffer_overrun=False, triggered=True, timeout=False, data=[-0.010979898809909017, -0.0017999143186528954, 0.030290890733175146, 0.48317093277360357])
MCC118ScanRead(running=False, hardware_overrun=False, buffer_overrun=False, triggered=True, timeout=False, data=[2044.70751480613, 2048.674508288587, 2056.298605373789, 2147.9981528073927])
MCC118CalInfo(slope=1.0438019176008568, offset=-82.56079326441613)
MCC118CalInfo(slope=1.0, offset=0.0)
-0.0390625
2041.0
MCC118ScanRead(running=False, hardware_overrun=False, buffer_overrun=False, triggered=True, timeout=False, data=[-0.0439453125, -0.0390625, -10.0, 38.828125])
MCC118ScanRead(running=False, hardware_overrun=False, buffer_overrun=False, triggered=True, timeout=False, data=[2038.0, 2040.0, 0.0, 10000.0])
Only when slope=1 and offset=0 does NOSCALEDATA output the raw ADC code. That's not right.
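The channel 3 numbers make this easy to see. Here is my own back-of-the-envelope check, assuming the 12-bit converter maps codes 0-4095 onto ±10 V (which is what the rest of my numbers suggest):

# Channel 3 was written with slope=0.0 and offset=10000.0, so per the
# documented formula every reading should turn into 10000 regardless of
# the raw code:
calibrated = 0 * 0.0 + 10000.0            # = 10000.0
volts = calibrated * 20.0 / 4096 - 10.0   # my assumed code-to-volts mapping
print(calibrated)                         # 10000.0 -> matches the NOSCALEDATA scan
print(volts)                              # 38.828125 -> matches the DEFAULT scan

So the coefficients are clearly being applied even when I pass OptionFlags.NOSCALEDATA.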