
The time interval between two Write() or two Read() function calls


lrxd2046

Question

Hi,

I'm trying to test an SPI flash memory using the Digital Discovery. I'm wondering how I can change the time interval between two Write or two Read function calls. The sample code is attached below.


Thanks for any answers!


5 answers to this question

10 hours ago, attila said:

Hi @lxbox825

You can use the software wait function.
wait(0.010) // ~10ms

The newer software version brings finer (bit-level) delay options.


Hi Attila,

I tested it and found that even when I put the wait before different commands (Start, Read, Write, Stop), there is still a long delay, typically ~5-6 ms. I attached my code and the recorded waveforms.

I'm wondering if there is any method we can use to reduce the delay? For example, after a Stop command, I want the next Start() to happen within 100 µs. Is this possible using the script, or do I have to go with a Python script?


Hi @lxbox825

The wait is a software delay function and does not guarantee accuracy. Resolution and precision depend on the OS and system load; on Windows it could be around 10 ms.
With the SDK under Linux, usleep could give better control, but USB transfer latencies will limit the latency to 125 µs up to a few ms.
The ADP3X50 with embedded Linux should have much lower latency.

The newer software version brings precise hardware delays and hardware-controlled CS. These delays may be too fine for your needs: they are expressed in clock periods, and using long delays may split up the transfers.
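Since the hardware delays are expressed in clock periods, converting a target gap into a delay count is simple arithmetic. A small sketch (the clock frequency here is only an example value, not something read from the device):

```python
# Convert a desired inter-command gap into SPI clock periods.
# Example values only: the actual frequency comes from your protocol settings.
def delay_in_clock_periods(gap_seconds, spi_clock_hz):
    """Number of clock periods approximating the requested gap."""
    return round(gap_seconds * spi_clock_hz)

# At a 1 MHz SPI clock, a 100 us gap corresponds to 100 clock periods.
print(delay_in_clock_periods(100e-6, 1_000_000))  # -> 100
```

Note that a long gap expressed this way grows with the clock rate, which is why long hardware delays can force the transfer to be split.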

On the AD2 you could use the 4th or 7th device configuration to get larger digital device buffers. The maximum number of bits that fits in one transfer is the (Logic/Pattern) device buffer size, or half of the Patterns buffer when any delay is used.

You could use a dummy command to introduce one USB transfer delay (125 µs to 1-2 ms), like doubling the Start or Stop:
Start(); Write(...); Start(); Read/Write(...); Stop();
Start(); Start(); Write(...); Start(); Read/Write(...); Stop(); Stop();

You could combine the command and the address in one transfer, like Write(32, (0x3<<24) | addr).
The newer software version also brings CmdWrite/Read functions to perform the command and the read/write in one transfer, but for larger data calls (like 2 kBytes) this will be split into smaller transfers.
// CmdRead(bits per command, cmd, dummy bits, bits per word, number of words to read)
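The command-plus-address packing above is plain bit arithmetic. For example, the 0x03 read command with a 24-bit address packs into one 32-bit word like this (Python, just to illustrate the layout used by Write(32, (0x3<<24) | addr)):

```python
READ_CMD = 0x03  # standard SPI NOR flash read opcode (1 byte)

def pack_read(addr):
    """Pack the 8-bit command and a 24-bit address into one 32-bit word,
    matching Write(32, (0x3<<24) | addr) from the answer above."""
    assert addr < (1 << 24), "address must fit in 24 bits"
    return (READ_CMD << 24) | addr

print(hex(pack_read(0x123456)))  # -> 0x3123456
```

The command lands in the top byte and the address in the lower three bytes, so the flash sees the same 32 clocked bits as a separate 8-bit command followed by a 24-bit address.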
 


Hi Attila,

Thanks for the answer. Actually, I didn't want that delay; the reason I put a wait between each command was to see whether the wait could define the delay between the commands.

I'm trying to use the Digital Discovery portable logic analyzer to read a flash memory. As the waveform shows, the delays between commands take up 90% of the data transfer time. I also tried calling the API directly from Python to do the SPI communication, and the delay was even longer than with the WaveForms software. As you said, if the delay comes from USB or system latency, there isn't much I can do to reduce it. I'll try executing my scripts on a Linux system and see the results.

Thanks for your help. 


Hi @lrxd2046

The latest version, with a 16Ki Patterns buffer, supports transferring 4096 SPI bytes, or a 32-bit command + 4094 bytes.
This is halved when any of the delays is nonzero or the IO direction needs to be changed, like for SISO (Read0) or command + dual/quad-read.

As mentioned earlier, in the newer software there is no need to call Start/Stop unless you want multiple reads/writes during one CS.
The CmdRead/Write functions were added to support command + dummy + read/write operations in one transfer.
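Putting the commented CmdRead signature from the earlier answer to use, a single-transfer flash read might look like the following WaveForms protocol-script sketch. The parameter values (32-bit command of 0x03 packed with a zero address, no dummy bits, 256 eight-bit words) are assumptions chosen to illustrate the argument order, not tested values; check them against your software version:

```
// CmdRead(bits per command, cmd, dummy bits, bits per word, number of words to read)
// 8-bit 0x03 read command packed with a 24-bit address into one 32-bit command,
// no dummy bits, then 256 eight-bit words clocked in within the same transfer.
var addr = 0x000000;
var data = CmdRead(32, (0x03 << 24) | addr, 0, 8, 256);
```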
