
MCC 172 - Continuous conversion of the data into a Numpy array


brit

Question

Hi, I am trying out an MCC 172 on a Raspberry Pi 4 with a 64-bit OS. The programming language is Python, for compatibility with other applications. It works quite well so far and records all the data.

Now I am writing an application to display the data graphically, even if only temporarily. Similar tools already exist, so I want the data to be available as a NumPy array.
The plan is to turn the graph on and off as needed. The data could be continuously available in the background, but it doesn't have to be; it is probably better to leave it off and only enable it when needed.

However, I'm not getting anywhere, because I don't understand the flow through the various hat.a_... calls.
To understand it, I created a minimal example, the core of which looks like this. The data_dict is a dictionary; the data is written to a deque.

while True:
    hat.a_in_scan_start(3, 100, OptionFlags.CONTINUOUS)
    read_result = hat.a_in_scan_read_numpy(100, RETURN_IMMEDIATELY)
    read_data = read_result.data.reshape((len(channels), -1), order='F')

    for channel in channels:
        data_dict['data'][channel].extend(read_data[channels.index(channel)])

        print(data_dict['data'][channel])
        # print(data_dict['data'][channel][0][0])

    hat.a_in_scan_stop()
    hat.a_in_scan_cleanup()
    sleep(0.5)

However, identical values are always written to the dict, even after a stop and a cleanup. I don't understand this at all.
Unfortunately I can't use my own NumPy code here, because it doesn't get along with the library.

Where is the error?


8 answers to this question


Can anyone answer the question? Here is the long test example as well. It shouldn't stay like this later...

I just want to reuse a lot of the code that we have developed for other components. The base unit is also no more powerful than a Raspberry Pi.

https://cms-wind.de/news/unsere-cms-applikation/

from mcc_172_libs.daqhat import (
    mcc172, HatIDs, SourceType, OptionFlags)
from mcc_172_libs.daqhats_utils import select_hat_device, chan_list_to_mask

from collections import deque
from time import sleep




def init_deque():
    samples = deque(maxlen=3)
    for i in range(2):
        samples.append(i)
    data = deque(maxlen=10)
    # print(samples)
    for _ in range(2):
        data.append(deque(maxlen=10))
    # print(data)
    data_dict = {
        'data': data,
        'samples': samples,
        'samples_count': 0
    }
    print(data_dict)
    return data_dict


def init_hat():
    channels = [0, 1]
    iepe_enable = 1
    sensitivity_val = 500  # mV/g
    address = select_hat_device(HatIDs.MCC_172)
    hat = mcc172(address)
    hat.a_in_clock_config_write(SourceType.LOCAL, 51200)

    channel_mask = 0x0
    # print(channel_mask)

    # set the desired display length here if needed
    samples_to_buffer = int(5 * 1000)

    for channel in channels:
        channel_mask |= 1 << channel
        hat.iepe_config_write(channel, iepe_enable)
        hat.a_in_sensitivity_write(channel, sensitivity_val)

    # hat.a_in_scan_start(channel_mask, samples_to_buffer, OptionFlags.CONTINUOUS)
    return hat


def start_data(hat, data_dict):
    channels = [0, 1]
    ALL_AVAILABLE = -1
    RETURN_IMMEDIATELY = 0

    # data_dict = data_dict

    sample_count = 0
    while True:
        hat.a_in_scan_start(3, 100, OptionFlags.CONTINUOUS)
        read_result = hat.a_in_scan_read_numpy(100, RETURN_IMMEDIATELY)
        read_data = read_result.data.reshape((len(channels), -1), order='F')

        for channel in channels:
            data_dict['data'][channel].extend(read_data[channels.index(channel)])

            print(data_dict['data'][channel])
            # print(data_dict['data'][channel][0][0])

        hat.a_in_scan_stop()
        hat.a_in_scan_cleanup()
        sleep(0.5)

    # return data_dict


test_dict = init_deque()
test_hat = init_hat()
test_data = start_data(test_hat, test_dict)




 


Hello,

Please have a look at our Python programming examples to gain a better understanding of how the interface works. The fft_scan.py example uses the function hat.a_in_scan_read_numpy in the read_and_display_data function. The data is reshaped so that it is separated by channel instead of interleaved samples.
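For reference, the reshape used in that example can be illustrated without hardware; the buffer values below are made up, and the two-channel layout is an assumption:

```python
import numpy as np

# a_in_scan_read_numpy returns samples interleaved by channel:
# [ch0_s0, ch1_s0, ch0_s1, ch1_s1, ...]
channels = [0, 1]                       # assumed two-channel scan
interleaved = np.array([0., 10., 1., 11., 2., 12.])

# order='F' (column-major) puts each channel's samples in its own row
per_channel = interleaved.reshape((len(channels), -1), order='F')
print(per_channel[0])                   # channel 0: [0. 1. 2.]
print(per_channel[1])                   # channel 1: [10. 11. 12.]
```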

Best regards,

John


Hello John,
Thank you very much. I did look at the examples first; from them I picked out the hat.a_... calls and also looked at most of the options in the scripts. The closest to my case was webserver.py with json.dumps; there is a rolling window there.

I'll deal with the samples later; they are not so important for now, and they would be structured differently in JSON format. First, only the data itself is generated.

I built the while loop in the example, with start, stop and restart, to see why the data doesn't refresh after the first record, and I tried several OptionFlags. The MCC 172 should now start, stop and restart every 0.5 s.

It keeps writing out the first record, even after running through the full loop. So I think I have not understood this: it is as if the data were still in a cache after the first record and always retrieved from there. The deque is defined very small and should therefore have been refreshed completely.

The example should run if you copy it; only the path to the libraries is different.

I didn't look into the content of the json.dumps; webserver.py works, it just responds very slowly.

Best regards
Brit

 

data.png


Hi John,

Through trial and error, I found that the option RETURN_IMMEDIATELY causes the error. It looks as if hat.a_in_scan_read_numpy only works with a real timeout=... value. Is that the case, or are there other options?

After all, the application doesn't exist yet, and there are many possible approaches. I'm not sure yet whether I will leave the hat on permanently or only turn it on when needed, so it is important to know and understand which options exist.

Best regards

brit


It's unclear to me what the use case is that requires timeout = 0 (RETURN_IMMEDIATELY). The Python reference says:

timeout (float) – The amount of time in seconds to wait for the samples to be read. Specify a negative number to wait indefinitely, or 0 to return immediately with the samples that are already in the scan buffer. If the timeout is met and the specified number of samples have not been read, then the function will return with the amount that has been read and the timeout status set.
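The call pattern that quote implies (start the scan once, then read blockingly inside the loop, and only stop/clean up at the end) can be sketched without hardware. The StubHat below is a made-up stand-in whose method names mirror the daqhats API but whose data is fabricated; it only illustrates the call order, not the real driver:

```python
import numpy as np
from collections import namedtuple

# Hypothetical stand-in for the mcc172 object (no hardware attached here)
ReadResult = namedtuple('ReadResult', ['data', 'timeout'])

class StubHat:
    def __init__(self):
        self._block = 0

    def a_in_scan_start(self, channel_mask, samples_per_channel, options):
        pass

    def a_in_scan_read_numpy(self, samples_per_channel, timeout):
        # timeout < 0 blocks until the samples are available; each call
        # therefore returns the NEXT block of the running scan.
        self._block += 1
        data = np.full(2 * samples_per_channel, float(self._block))
        return ReadResult(data=data, timeout=False)

    def a_in_scan_stop(self):
        pass

    def a_in_scan_cleanup(self):
        pass

hat = StubHat()
hat.a_in_scan_start(0b11, 100, options=None)   # start ONCE, before the loop

blocks = []
for _ in range(3):
    result = hat.a_in_scan_read_numpy(100, timeout=-1)  # blocking read
    blocks.append(result.data.reshape((2, -1), order='F'))

hat.a_in_scan_stop()                           # stop/cleanup ONCE, afterwards
hat.a_in_scan_cleanup()
```

With timeout = 0 immediately after a fresh start, by contrast, the buffer may be empty or still hold stale samples, which matches the symptom described above.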

Best regards,
John


Probably that's because I'm still working on understanding the basic principle. I expected that the hat would also record new data after a stop, cleanup and restart.

I tried all three variants and set the option to >0, =0 and <0.

You write that this is in the Python reference. Where can I find it? So far I have looked in the examples and in the function descriptions in the libraries. Is there no manual?

Best regards

brit

 


Hi John,
I have now solved the NumPy problem. This is what the test data looks like at 51.2 kHz.

Now I have to think about how to program the data stream within the operating system; I would be interested in your opinion.
Background: the display is not used very often, primarily when a technician is on site and when setting up the device. Measurements are taken at longer, automatic time intervals.
Do you think it is better to have a permanent data stream running in the background to access when needed, or to start and stop the hat each time? The hat seems very performant, but what about the interaction with the Raspberry Pi?

Best regards

channel0.png

channel0_zoom.png


Hello,

I'm sorry for the late response. The simplest way to approach your application would be to capture a series of acquisitions: capture a block of data, post-process it, and return to capture another block. However, many customers want a continuous data stream so that nothing is missed. It's basically up to you to decide the best approach.
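The block-by-block approach feeds naturally into the rolling per-channel buffers from the earlier posts. Here is a minimal, hedged sketch with fabricated data in place of the hat reads (capture_block is a made-up placeholder, and the window size is an assumption):

```python
import numpy as np
from collections import deque

CHANNELS = [0, 1]
WINDOW = 1000      # assumed rolling display window, samples per channel

# One bounded deque per channel: old samples fall out automatically,
# so memory stays flat no matter how long the stream runs.
buffers = {ch: deque(maxlen=WINDOW) for ch in CHANNELS}

def capture_block(n_samples):
    """Placeholder for one finite acquisition; returns fake interleaved data."""
    rng = np.random.default_rng()
    return rng.standard_normal(len(CHANNELS) * n_samples)

for _ in range(5):                      # five back-to-back acquisitions
    block = capture_block(300)
    per_channel = block.reshape((len(CHANNELS), -1), order='F')
    for i, ch in enumerate(CHANNELS):
        buffers[ch].extend(per_channel[i])
        # ...post-process / plot buffers[ch] here before the next block...
```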

Best regards,
John
