Marcus Watson

  1. Hi, I am attempting to verify the timing of brief audio and visual stimulus presentations using an Analog Discovery 3. When I use the WaveForms GUI, I can reliably see traces corresponding to both stimuli, but when I try to integrate control of the AD3 into my control scripts using the SDK, I cannot replicate what I can easily see in the GUI. Specifically, the SDK-produced trace appears to be the equivalent of an amplitude modulation of the traces I see in the GUI. I have the feeling this is a simple electrical engineering question I've never had to think about before, so apologies if I'm missing something obvious.

     All data is being recorded over the analog-in channels, with a microphone and a photodiode connected via BNC. Both the GUI and SDK channels are set up to use averaging as the filter method; the only pertinent difference I can see between the two is that I've reduced the sample rate to 10,000 Hz in the SDK script (to allow longer windows of data to be recorded).

     My first screenshot comes from the GUI, and clearly shows the visual (orange) stimulus approximately 200 ms prior to the auditory (blue) stimulus. The second file (Trial274) is the data collected from a similar trial, but via the SDK. As you can see, the auditory stimulus is clearly visible (blue again), but its shape is very different from that of the first example: as I said, this trace appears to be an amplitude modulation of the first trace, such that the oscillatory component of the first trace is replaced by height in the second. As there isn't much of an oscillatory component in the orange trace, I cannot detect the visual stimulus reliably in my recorded data. What am I missing? Any help would be much appreciated.
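     To make the "oscillatory component replaced by height" relationship concrete, here is a small pure-Python sketch. Nothing here is AD3-specific, and it does not claim to be what the SDK's averaging filter actually does; the tone frequency and window length are made up for illustration. It contrasts plain block averaging, which cancels an oscillating burst toward zero, with rectify-then-average, which yields an envelope-like trace whose height tracks the burst's amplitude, i.e. the relationship described between the two traces:

     ```python
     import math

     FS = 10_000        # sample rate used in the SDK script (Hz)
     TONE_HZ = 1_000    # hypothetical auditory stimulus frequency
     BLOCK = 20         # averaging window: 2 ms at 10 kHz (illustrative)

     def tone_burst(n, fs=FS, f=TONE_HZ):
         """A short sine burst, like the blue microphone trace in the GUI plot."""
         return [math.sin(2 * math.pi * f * i / fs) for i in range(n)]

     def block_mean(x, block=BLOCK):
         """Plain averaging: symmetric oscillations cancel toward zero."""
         return [sum(x[i:i + block]) / block for i in range(0, len(x), block)]

     def block_envelope(x, block=BLOCK):
         """Rectify first, then average: output height tracks amplitude instead."""
         return [sum(abs(v) for v in x[i:i + block]) / block
                 for i in range(0, len(x), block)]
     ```

     For a full-scale burst, `block_mean` stays near zero (the sine cancels within each window), while `block_envelope` sits near 2/π ≈ 0.64, a flat-topped pulse whose height encodes the amplitude.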
  2. Hi all, Apologies for the basic level of this question, but I'm genuinely confused about something in the script examples provided with the WaveForms SDK. Why do all of the analog-in examples use while loops in which data is repeatedly sampled with FDwfAnalogInStatus?

     What I would really like to do is the following: set the acquisition mode to record (and set other parameters accordingly), start the acquisition with FDwfAnalogInConfigure, NOT enter any kind of Digilent-related while loop, and then, at the end of a set time (close to the RecordLength parameter), collect the data in the buffer with FDwfAnalogInStatus.

     The reason is that I'm trying to control my Discovery 3 from within code that controls a neuroscience experiment, which has its own complex set of loops and conditionals. Critically, those loops are tied to the frame rate of the monitor we use to present stimuli. I can't see any clean way of incorporating the kinds of loops that appear in every SDK example inside these other control loops, which is why I was hoping to use a simple "start acquisition" / "do other stuff" / "stop acquisition" model. Thoughts? Marcus
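     A sketch of what that "arm, do other stuff, collect once" model could look like, assuming the ctypes `dwf` bindings shipped with the WaveForms SDK. The SDK function names are real, but the two wrapper functions are hypothetical, and whether a single read at the end recovers the whole window depends on the device FIFO size versus the record length; this only shows the call pattern, with no Digilent while loop between arm and collect:

     ```python
     import ctypes

     acqmodeRecord = ctypes.c_int(3)  # record mode, per the SDK's dwfconstants.py

     def start_recording(dwf, hdwf, hz=10_000.0, seconds=2.0, channel=0):
         """Arm a record-mode acquisition and return immediately (no wait loop)."""
         dwf.FDwfAnalogInAcquisitionModeSet(hdwf, acqmodeRecord)
         dwf.FDwfAnalogInFrequencySet(hdwf, ctypes.c_double(hz))
         dwf.FDwfAnalogInRecordLengthSet(hdwf, ctypes.c_double(seconds))
         dwf.FDwfAnalogInChannelEnableSet(hdwf, ctypes.c_int(channel), ctypes.c_int(1))
         # reconfigure=0, start=1: begin the acquisition and return to the caller
         dwf.FDwfAnalogInConfigure(hdwf, ctypes.c_int(0), ctypes.c_int(1))

     def collect_once(dwf, hdwf, n_samples, channel=0):
         """One status call after the window has elapsed; returns raw samples."""
         sts = ctypes.c_ubyte(0)
         dwf.FDwfAnalogInStatus(hdwf, ctypes.c_int(1), ctypes.byref(sts))
         buf = (ctypes.c_double * n_samples)()
         dwf.FDwfAnalogInStatusData(hdwf, ctypes.c_int(channel), buf,
                                    ctypes.c_int(n_samples))
         return list(buf)
     ```

     Between `start_recording` and `collect_once`, the experiment's own frame-rate-driven loops run untouched; the AD3 is only touched at the two endpoints.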