Everything posted by Keith Penney

  1. Hi Fausto, Well that's disappointing, but thank you for the clarification. For future device/driver development, this is what the hardware should do to avoid such a usage limitation; it is what is implemented in every device I've encountered with an internal FIFO.
     1. Hardware should have a "FIFO Watermark" (fifoMark) register which can be set between 1 and fifoSize (where fifoSize is the number of samples that fit in the FIFO; 2048 in the DT9836). Internally, when the FIFO's fill level equals the watermark (nFifoFill == fifoMark), an interrupt should be generated which triggers a bulk transfer of the entire FIFO contents to the USB host. The existing DT9836 behavior is as if fifoMark were hard-coded to 2048.
     2. When the user requests nTotal samples to be acquired, fifoMark should be set to min(nTotal, fifoSize).
     3. After every FIFO data transfer event, the number of samples remaining to be acquired is calculated as nRemaining = nTotal - nTransferred (where nTransferred is the number of samples transferred to the USB host since the acquisition started), and fifoMark is set to min(nRemaining, fifoSize).
     Note that steps 2 and 3 are redundant (at the start of an acquisition, nTransferred = 0, so nRemaining = nTotal), but I listed them separately for clarity. This all happens between the hardware and the software drivers and so is completely transparent to the user (no changes to the Open Layers API). The only result would be users like me being very happy that our gated external clock application works perfectly. -Keith Penney, Sandia National Laboratories
  2. Hi Fausto, Thanks for the response. In our application, the A/D clock is gated: if we request N samples, the clock source produces exactly N edges at the A/D clock input, because the samples are not regularly spaced in time. Are you saying the only way to get the data out of the FIFO is to provide additional clock edges to the A/D clock input until the FIFO is filled? -Keith Penney, Sandia National Laboratories
  3. Hi Fausto, Thanks for the response. Here are some details regarding the core data acquisition flow of the application.

The data is always acquired in chunks of 7200*nChan samples (where nChan is the number of ADC input channels in use, set by the user). We use this as the buffer size to allocate to the DT9836 via the Open Layers drivers. The number of chunks can be anything from 1 up to the RAM limit of the machine, but a maximum of 5 buffers are allocated to the DT9836.

The DT9836 is configured for an external trigger and an external A/D clock which are synchronized to each other (the trigger edge occurs 1-3 us before the first A/D clock edge, depending on the operating speed of the experiment equipment). The A/D clock must be gated (synchronously) because the user can acquire the chunks mentioned above either contiguously (no gap in time between the last sample of chunk N and the first sample of chunk N+1) or with an arbitrary amount of time between chunks. Since we can make no assumptions about how much time passes between chunks, we configure the drivers to receive nChunks*7200*nChan samples and divvy the samples up properly after receiving the data.

In response to the OnBufferDone signal, we pop the latest buffer from the "Done" queue (typically the only buffer on that queue) and copy its contents to a separately allocated memory area. If the number of sample chunks to be acquired is >5, we push this buffer back onto the "Ready" queue.

If the total number of samples in the experiment (nTot = nChunks*7200*nChan) is not an integer multiple of 2048 (the FIFO size), the last nRemaining = nTot % 2048 samples (where '%' is the remainder after integer division) are stuck in the FIFO. The only options I know of for getting the data out of the FIFO are terrible hacks, e.g. generating more A/D clock pulses until the FIFO contents are shifted to the Open Layers drivers and an OnBufferDone signal is generated. Even an "Abrupt Stop" (the olDaAbort() function in the Open Layers SDK API) simply returns the incomplete buffer, telling us how many samples are missing.

We have verified that the correct number of A/D clock edges arrive at the DT9836, and the "missing" samples always number <= 2047, so we are convinced the data is just stuck in the FIFO waiting to be shifted out. I don't know what the low-level design of the DT9836 looks like, but every device I've used or made with an internal FIFO has a user-configurable "FIFO watermark" value that can be set to trigger an interrupt when the FIFO fill level hits a particular value. In our experience, the behavior of the DT9836 is consistent with a constant "FIFO watermark" of 2048 (i.e., an interrupt does not trigger the device to shift the data out over USB bulk transfers until the number of samples in the FIFO reaches 2048).

I'm hoping there is a way for the user/drivers to modify this "FIFO watermark" value, or equivalent functionality. I have found nothing of the sort in the Open Layers API, but this is the type of thing that would be implemented inside the black-box drivers rather than exposed through the API anyhow. Any help you could provide is greatly appreciated. Thanks! -Keith Penney, Sandia National Laboratories
  4. Hi MCC/DAQ support, I am using a DT9836 in a custom C++ app via the Open Layers Data Acquisition SDK. We supply the DT9836 with an external, gated A/D clock source, and I'm having trouble with the hardware FIFO, which is 2048 samples deep. The number of clock edges is not always an integer multiple of 2048 (n_clock_edges % 2048 != 0), so my last buffer is not getting filled properly. After all the clock edges arrive, the hardware subsystem seems to wait for additional clock edges to shift more (garbage) data into the FIFO, trigger a FIFO-full condition, and shift the data into the software buffer. For many application-specific reasons, I cannot allow the clock source to continue beyond the desired number of samples, so I need a way to get the data out of the partially filled FIFO without sending more clock edges. I don't see a way to do that with the DT Open Layers SDK. Is it possible? One idea I had was to switch the clock source to internal to fill the FIFO and force the data out, but this is inelegant to say the least. Is there a better way? Thanks in advance for your assistance. -Keith Penney, Sandia National Laboratories