Hi Fausto,
Thanks for the response. Here are some details regarding the core data acquisition flow of the application.
The data is always acquired in chunks of 7200*nChan samples, where nChan is the number of ADC input channels in use (set by the user). We use this as the buffer size when allocating buffers to the DT9836 via the Open Layers drivers.
The number of chunks can be anything from 1 up to the RAM limit of the machine, but at most 5 buffers are allocated to the DT9836 at any one time.
The DT9836 is configured for an external trigger and an external A/D clock that are synchronized to each other (the trigger edge occurs 1-3 us before the first A/D clock edge, depending on the operating speed of the experiment equipment). The A/D clock must be gated (synchronously) because the user can acquire the chunks mentioned above either contiguously (no gap in time between the last sample of chunk N and the first sample of chunk N+1) or with an arbitrary amount of time between chunks. We can therefore make no assumptions about how much time elapses between chunks, so we simply configure the drivers to receive nChunks*7200*nChan samples and divvy the samples up into chunks after receiving the data.
In response to the OnBufferDone signal, we pop the latest buffer from the "Done" queue (typically the only buffer on the done queue) and copy its contents to a separately allocated memory area. If the number of sample chunks to be acquired is >5, we push this buffer back onto the "Ready" queue.
If the total number of samples in the experiment (nTot = nChunks*7200*nChan) is not an integer multiple of 2048 (the FIFO size), the last nRemaining = nTot % 2048 samples (where '%' is the remainder after integer division) are stuck in the FIFO. The only options I know of for getting that data out of the FIFO are terrible hacks, e.g. generating extra A/D clock pulses until the FIFO contents are shifted to the Open Layers drivers and an OnBufferDone signal is generated. Even an "Abrupt Stop" (the olDaAbort() function in the Open Layers SDK) simply returns the incomplete buffer, telling us how many samples are missing. We have verified that the correct number of A/D clock edges arrives at the DT9836, and the "missing" samples always number <= 2047, so we are convinced the data is just stuck in the FIFO waiting to be shifted out.
I don't know what the low-level design of the DT9836 looks like, but every device I've used or made with an internal FIFO also has a user-configurable "FIFO watermark" that can be set to trigger an interrupt when the FIFO fill level reaches a particular value. In our experience, the behavior of the DT9836 is consistent with a constant "FIFO watermark" of 2048 (i.e. an interrupt does not trigger the device to shift data out over USB bulk transfers until the number of samples in the FIFO reaches 2048). I'm hoping there is some way for the user or the drivers to modify this "FIFO watermark" value, or an equivalent functionality. I have found nothing of the sort in the Open Layers API, but this is the type of thing that would be implemented inside the black-box drivers rather than exposed through the API anyhow.
Any help you could provide is greatly appreciated. Thanks!
-Keith Penney, Sandia National Laboratories