Question
TurgidWater45
Hello,
I'm a bit confused about how WaveForms manages its buffer. I'm using the WaveForms SDK (via LabVIEW) to both generate a pulse and act as a scope with the Analog Discovery Kit 2. I am generating a 50 ms, 5 V pulse with a frequency of 230 kHz, which is connected to a hardware device. I then want to use the scope function to record the output from this device. I can get the scope to read, but I don't really understand why it works the way it does.
As it stands, the only way that I can consistently get a waveform is as follows (AnalogIn):
AutoTimeout: 0.5
Sample Frequency: 160 kHz
AcquisitionMode: Record
RecordLength: 2 seconds
I also have a 2.5 second wait on my analog out.
If any of these settings change, my waveform will not show up. If, say, I want a 200 ms pulse instead, I have to change the sample frequency to 40 kHz.
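As an aside (my own arithmetic, not something stated in the post): the two working pulse-length/sample-rate pairs yield the same number of samples per pulse, which happens to fit just under the Analog Discovery 2's 8,192-sample scope buffer. That may be why only those particular combinations work:

```python
# Samples captured during the pulse for each working combination.
# Observation (mine, hedged): both come out to 8,000, just under the
# AD2's 8,192-sample hardware scope buffer.
def samples_in_pulse(sample_rate_hz, pulse_seconds):
    return int(sample_rate_hz * pulse_seconds)

print(samples_in_pulse(160_000, 0.050))  # 50 ms pulse at 160 kHz -> 8000
print(samples_in_pulse(40_000, 0.200))   # 200 ms pulse at 40 kHz -> 8000
```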
Attached is a screenshot of what the waveform looks like. There are two problems with it. First, the waveform is pushed far to the right; I want the waveform to take up the entire buffer. Second, if I'm sweeping frequencies, the waveform shows up at slightly different spots from one frequency to the next. My ideal situation is that the ADK2 outputs a pulse and begins recording the moment the pulse starts, filling the buffer after 50 ms. The buffer is then emptied into an Excel document and awaits the next acquisition.
I've searched and read through the SDK reference manual, but I can't seem to get a grasp on how to fix this. I suspect it's a lack of understanding of how the buffer is managed; any help would be appreciated.
Thanks
EDIT: Follow-up/related question: How would these settings translate into SDK functions?
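For the follow-up, here is a hedged sketch of how the AnalogIn settings above might map onto WaveForms SDK calls, using the Python ctypes bindings (the same C functions the LabVIEW VIs wrap). Function names and constant values are taken from the WaveForms SDK reference; the trigger-on-AWG idea is my assumption about how to align the record with the pulse, not something from the original post, and the acquisition loop is omitted.

```python
# Sketch only: maps the posted AnalogIn settings onto WaveForms SDK calls.
# Assumes the Python ctypes bindings; requires the Digilent dwf library
# and hardware to actually run, so nothing here is executed at import time.
import ctypes

ACQMODE_RECORD = 3       # acqmodeRecord in dwfconstants
TRIGSRC_ANALOG_OUT1 = 7  # trigsrcAnalogOut1: trigger the scope off AWG channel 1

def expected_samples(sample_rate_hz, record_seconds):
    # Total samples a Record acquisition delivers: rate x record length.
    return int(sample_rate_hz * record_seconds)

def configure_record(dwf, hdwf, rate_hz=160e3, length_s=2.0):
    """Apply the AnalogIn settings from the post, plus a trigger on the AWG
    so recording starts when the pulse starts (my assumed fix for the
    waveform drifting around the buffer)."""
    dwf.FDwfAnalogInChannelEnableSet(hdwf, ctypes.c_int(0), ctypes.c_int(1))
    dwf.FDwfAnalogInFrequencySet(hdwf, ctypes.c_double(rate_hz))             # Sample Frequency
    dwf.FDwfAnalogInAcquisitionModeSet(hdwf, ctypes.c_int(ACQMODE_RECORD))   # AcquisitionMode: Record
    dwf.FDwfAnalogInRecordLengthSet(hdwf, ctypes.c_double(length_s))         # RecordLength
    dwf.FDwfAnalogInTriggerSourceSet(hdwf, ctypes.c_ubyte(TRIGSRC_ANALOG_OUT1))
    # Arm the scope; it now waits for the AWG to start before recording.
    dwf.FDwfAnalogInConfigure(hdwf, ctypes.c_int(1), ctypes.c_int(1))

# Usage with hardware attached (commented out so the sketch stays self-contained):
# dwf = ctypes.cdll.LoadLibrary("dwf.dll")   # libdwf.so on Linux
# hdwf = ctypes.c_int()
# dwf.FDwfDeviceOpen(ctypes.c_int(-1), ctypes.byref(hdwf))
# configure_record(dwf, hdwf)
# dwf.FDwfAnalogOutConfigure(hdwf, ctypes.c_int(0), ctypes.c_int(1))  # fire the pulse
```

With the scope triggered from AnalogOut, the 2.5 s software wait on the generator should no longer be what determines where the pulse lands in the record.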