
ADP3450 Pro Ethernet communications


AbbyM

Question

Hello, 

I have been looking into using Ethernet with my ADP3450 and Raspberry Pi 4, and am curious which transport protocols the ADPro uses in its communications. When I ran the iperf utility on the Pi, only a UDP test returned any results, and it reported a rate of ~0.125 MB/s. I did not see any activity over TCP.

Thanks,

Abby


Recommended Posts


Hi @AbbyM

How did you run the iperf utility?

The ADP3450 uses the proprietary Digilent Adept communication protocol; you can find a bit more info here.

As far as I know, the over-the-wire protocol is not publicly documented. What I see when running Wireshark is that it employs a TCP link for communication.



No, the ADPro doesn't run an iperf server; in fact, iperf isn't even installed by default.

To get this to work (I recommend you only try this if you're at least somewhat comfortable as a Debian sysadmin):

  • Make sure you run the ADPro in Linux mode.
  • Log in to the ADPro (default username/password: digilent/digilent).
  • Convince yourself that the ADPro can reach the outside world, i.e., that it has a gateway and DNS server configured:
digilent@ADPro:~$ ping deb.debian.org
PING debian.map.fastlydns.net (199.232.150.132) 56(84) bytes of data.
64 bytes from 199.232.150.132 (199.232.150.132): icmp_seq=1 ttl=55 time=2.82 ms
64 bytes from 199.232.150.132 (199.232.150.132): icmp_seq=2 ttl=55 time=2.66 ms

If not, I recommend changing those from the Waveforms application, as Debian uses systemd nowadays which (to me at least) makes it entirely unclear where the actual configuration files are on any given day (I suspect it depends on the phase of the moon).

  • Verify that /etc/apt/sources.list points to Debian buster. Mine looks like:
deb https://deb.debian.org/debian buster main

If you need to edit, nano and vim are available. 

  • Become root ("sudo -i")
  • Update index of debian packages ("apt update")
  • Take the opportunity to refresh your ADPro's Debian installation, if you want/dare ("apt upgrade")
  • Install the iperf package ("apt install iperf")
  • Run iperf on the ADPro at your leisure, either as a client or as a server.
  • If you are interested in doing bandwidth experiments, I'd suggest initiating traffic (i.e., running as the client) from the side that will push out data in your application.
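Once installed, a typical invocation might look like the following sketch (the IP address is a placeholder for your ADPro's actual address; these are standard iperf 2 flags):

```shell
# On the ADPro (server side). Add -u for a UDP server.
iperf -s

# On the Raspberry Pi (client side); 192.168.1.50 is a placeholder
# for the ADPro's actual IP address.
iperf -c 192.168.1.50 -t 10

# UDP test: -u on both ends, and the client should request a target
# bandwidth. iperf 2 defaults to ~1 Mbit/s for UDP, which would
# explain the ~0.125 MB/s figure from the original question.
iperf -c 192.168.1.50 -u -b 900M
```

Note that the UDP default bandwidth cap is likely why a plain UDP test looks so slow.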

 

Edited by reddish


Ok, thank you! I installed iperf in Linux mode and ran both TCP and UDP tests, with the ADPro as the server. It looks like ~85 MB/s for TCP and ~1 MB/s for UDP.

What I still don't know is which protocol is used to transfer data during a data collection: TCP or UDP? It seems TCP is what I need to achieve the sample rates I want (>= 6 MSPS on 3 channels).
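For reference, the raw data rate such a capture implies is easy to estimate (assuming 2 bytes per sample, i.e. 16-bit ADC data, which is my assumption here):

```python
# Back-of-the-envelope data rate for the capture target above
# (assuming 16-bit, i.e. 2-byte, samples).
sample_rate = 6_000_000      # samples/s per channel (>= 6 MSPS)
channels = 3
bytes_per_sample = 2
rate = sample_rate * channels * bytes_per_sample
print(rate / 1e6, "MB/s")    # 36.0 MB/s, well within gigabit Ethernet
```

So the target rate itself is far below what the link can carry.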

[screenshot: iperf results]

Edited by AbbyM


Hi @AbbyM

Data rates for UDP should be at least as high as for TCP. Please note that the mode the server is started in (TCP or UDP) needs to match the mode the client is started in.

With TCP, I saw something like 97 MB/sec to a fast computer which is about what I'd expect.

The DWF library sits on top of Adept, which is Digilent's proprietary library for data transfer. From what I can see (using Wireshark) it uses TCP for its data transfers.

Unfortunately, Adept-based data transfers from the device seem to have been implemented rather sub-optimally for some scenarios (see this post), and that's putting it mildly. So you won't get anywhere near the performance (in terms of samples per second) that the hardware is, in principle, capable of, which, at least to me, was pretty disappointing to find out.

Edited by reddish


OK, I installed Wireshark on the Raspberry Pi and confirmed that it only uses TCP during data recording. I wonder why it is so slow, then: I had to drop the sample rate to 100-200 kHz just to avoid losing samples during recording, and attempting to sample faster resulted in an early stop, because it would only acquire a fraction of its goal before the set time was up. This was all using the Waveforms GUI. Using Linux mode versus standard mode did not change much, and disabling Wi-Fi completely only helped a bit.



> I wonder why it is so slow then

It is probable that performance is bottlenecked not by the network, but by something else; the most likely candidate is the Raspberry Pi's CPU. Another possibility is that the large amount of data being moved (even at low sample rates) saturates the Raspberry Pi's memory bus. And there are more possibilities beyond that.

As figured out in the thread I linked to, the way Record mode was implemented appears to be rather horrific in terms of performance. I don't know the details but it appears that the complete internal data buffers of the ADPro are constantly being dumped to the PC/Raspberry, irrespective of settings. The receiver is then tasked with getting the relevant data from those raw buffers. Apparently, the Raspberry can't do that fast enough.

Edited by reddish


Ok, noted. 

I noticed that on my Win10 PC, with a direct Ethernet connection, Waveforms also loses samples, even at 1-2 MHz, so maybe I'm running into a larger issue than just the Raspberry Pi. What is even more confusing to me is that measuring the traffic in Wireshark shows high data rates (~50-125 MB/s) being passed over Ethernet. How does it lose samples at rates as low as 1 MHz?

1 MSPS of 3CH of 16-bit signed data:


 

5 MSPS of 3CH of 16-bit signed data:


 

12.5 MSPS of 3CH of 16-bit signed data:


 

25 MSPS of 3CH of 16-bit signed data:


 

100 MSPS of 3CH of 16-bit signed data:




Hi @AbbyM

> What is even more confusing to me is that measuring traffic on Wireshark shows high data rates ~50-125 MBps being passed over Ethernet.  How does it then lose samples even as low as 1 MHz?   

The experiment I described in the other thread essentially demonstrates that there is no relation between the bandwidth used by the ADPro and the data rate you actually need. Even if you record a single channel at 1 Hz, there will be traffic on the order of tens of megabytes per second.

It sucks, but it is what it is. I suspect fixing this would mean big changes to the basic design of the software and firmware, so my guess is that this will not be addressed for the current generation of devices at least.

I can only hope that a next generation of the devices will fix this. I for one won't be buying new Digilent devices if this doesn't get addressed, because frankly I think it is pretty ridiculous.



Haha, I understand. Over USB it seems to work as expected, but now that I'm trying it over Ethernet, it boggles my mind what is going on under the hood.

On my Win10 laptop the reported rate is still 71 MB/s, which checks out against the Wireshark measurements, but not against the resulting data.

Hopefully someone else can respond on this thread.  I really would like it to work for this project I'm using it for. It would complete the goal! 

Thanks

[screenshot: speed test results]



Hi @AbbyM

Ethernet has higher throughput than USB: it can transfer captured data faster. But it looks like it is not as good for recording, where a lot of small chunks of data need to be transferred; this seems to cause hiccups in the process. Probably the data transfer inside the device from the FPGA (with its 128 Ki-sample buffer) to the DDR RAM is blocked by the Ethernet transfer (DDR RAM to Ethernet): the Ethernet DMA transfers interfere with and block the FPGA DMA transfers, causing FPGA buffer overflows. See the earlier post...

Wouldn't 128 Mi samples suffice for your task?
You could capture 128 Mi samples at up to 125 MHz on one channel, 64 Mi @ 62.5 MHz on 2 channels, or 32 Mi @ 31.25 MHz on 4 channels.
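Assuming the buffer divides evenly across three channels (my extrapolation from the figures above, not a documented spec), the capture window at the rate discussed in this thread works out roughly to:

```python
# Capture duration from a fixed device buffer. The 128 Mi-sample total
# comes from the post above; the even split across 3 channels is an
# assumption on my part.
TOTAL_SAMPLES = 128 * 2**20       # 128 Mi samples of buffer
channels = 3
rate = 6_000_000                  # 6 MSPS per channel
per_channel = TOTAL_SAMPLES // channels
seconds = per_channel / rate
print(f"{per_channel / 2**20:.1f} Mi samples/ch, ~{seconds:.1f} s of capture")
```

That is several seconds of continuous capture without any streaming at all.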

 



Hi @attila

OK, I think 128 Mi samples should be plenty for 6 MHz on 3 channels. Is there a different way I should be recording? I tried using the Waveforms GUI as well as the DWF library in C++.

Or is there a special configuration or setting I am missing?

Thanks



Ok, thank you. I see that menu now, though my version does not include the DDR buffering checkbox; why would that be?

Also, I can't find where the files are recorded on my device when using this Record mode / Config button.

Thanks!



Yes, ever since I noticed that Standard mode is faster than Linux mode. I also just tried testing on an Ubuntu Linux desktop, a brand-new, powerful machine. Over Ethernet it also shows the lost-samples error message, even down to a 1 MHz sample rate; over USB I do not see that message, at least up to 12.5 MHz.


Here: https://forum.digilent.com/topic/20153-capture-4-channels-of-120-million-adc-samples/

I describe how I captured 4 channels of 100 MHz 16-bit ZMOD ADC samples to the XEM7320 DDR and downloaded the data to a PC using the XEM7320 USB 3.0 interface. Because of the system architecture, I know that no ADC samples are lost, and I can verify this from the hardware. This is a straightforward way to capture lots of data without hardware/software bottlenecks getting in the way, and it's a good deal cheaper than the AD3xxx instruments. This method does require a bit of HDL expertise.

The Eclypse-Z7 was a disappointing effort, and the resultant AD3xxx products based on it were always going to be disappointing. It's the architecture.

The 1 GbE Ethernet hardware layer, at a 125 MB/s raw data rate, is certainly faster than USB 2.0, but not USB 3.0. Transferring huge amounts of data without data loss through either interface, on a platform running a modern OS, will certainly be a problem without proper buffering. The right way to do this, as I pointed out above, is to have adequate local buffering (where the ADCs live), without contention for access to the local buffer, and to deliver samples to an endpoint (a PC memory buffer) at the pace that the endpoint can receive them.

I'm still not buying the argument that iperf is all that helpful. Yes, it can measure the performance between two Ethernet PHY devices at the ends of a cable, or whatever else sits between them, but not understanding what you are measuring can lead to bad assumptions. I'm no expert in iperf; what I've read indicates that you can set it up to transfer n buffers of m size, for p iterations, which might be a more realistic test than the default iperf settings. I still don't know whether even that is a good approximation to actually sending 1 GB of data from a source to a destination via Ethernet. If I needed to know the data rate I could sustain from source to destination without dropping data, I would want to actually do that:

- 1 GB on PC 1 is sent to PC 2
- PC 2 saves each packet to memory and echoes the packet back to PC 1 as fast as possible
- PC 1 checks the data from incoming packets against the data in the original buffer
- The user has all of the data available on PC 2 to inspect

Without a very accurate measurement of the total transfer time, from the first byte of the first packet sent by PC 1 to the last byte of the last packet received by PC 2, even that test is less than adequate, because it assumes adequate buffering on both ends beyond the Ethernet PHY. A better test would be to have a process send data packets at a fixed interval and data rate and see whether any data is lost; this more closely mimics a source like a multi-channel ADC.
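The echo-style test described in the list above can be sketched in a few lines. This is a scaled-down loopback version (a 4 MiB payload standing in for the 1 GB buffer, and localhost standing in for the two PCs); the sender runs in its own thread so that sending and receiving cannot deadlock on full socket buffers:

```python
import os
import socket
import threading

PAYLOAD = os.urandom(4 << 20)  # 4 MiB stand-in for the 1 GB test buffer

def echo_server(listener):
    # "PC 2": save nothing here, just echo each chunk straight back.
    conn, _ = listener.accept()
    with conn:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            conn.sendall(chunk)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# "PC 1": send the payload from a separate thread, receive in the main
# thread, then compare the echoed bytes against the original buffer.
client = socket.create_connection(listener.getsockname())

def send_all():
    client.sendall(PAYLOAD)
    client.shutdown(socket.SHUT_WR)

threading.Thread(target=send_all, daemon=True).start()

echoed = bytearray()
while len(echoed) < len(PAYLOAD):
    chunk = client.recv(65536)
    if not chunk:
        break
    echoed.extend(chunk)
client.close()
print("round-trip intact:", bytes(echoed) == PAYLOAD)
```

Timing the loop and dividing by twice the payload size would give the round-trip throughput, though as noted above, accurate timing is the hard part.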

For FPGA Ethernet testing I use an FPGA design that sends 1 GbE packets of programmable size and programmable packet spacing. It's a completely HDL design, so there's no software "x factor". This would be a way to make the preceding test setup more useful: have the FPGA fixed-data-rate source replace PC 1. Then you can see whether there's any data loss due to software on PC 2.

For 128 Mi samples on each of 3 channels of 1 MHz 16-bit data: 128 x 2^20 samples x 2 bytes/sample x 3 channels = 805,306,368 bytes. If you could transfer this through 1 GbE at 125 MB/s, with a 100% user payload, it would take about 6.44 seconds. That seems to me to be a very long time for any process to be running on any platform using a modern OS without interruption. The channel has to sustain 6 MB/s (10^6 sample sets/s x 6 bytes), which seems a reasonable target for the short periods that a process might run before being bumped by another process. If there's adequate buffering on the receiving PC, it certainly seems achievable. The problem is: what happens during the periods when the receiving PC can't receive data? That puts the burden on the sending PC. All of this can certainly be handled by the Ethernet protocol, but perhaps not in an obvious way that you can measure with tools like Wireshark.

Edited by zygot
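The arithmetic above checks out directly (note that the 6.44 s figure assumes the decimal 125 MB/s wire rate of gigabit Ethernet):

```python
# Verify the transfer-size and transfer-time arithmetic above.
samples_per_channel = 128 * 2**20                  # 128 Mi samples
channels = 3
bytes_total = samples_per_channel * 2 * channels   # 2 bytes/sample
print(bytes_total)                                 # 805306368
print(round(bytes_total / 125e6, 2))               # 6.44 seconds at 125 MB/s
```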


Hi @zygot

As to your reservations w.r.t. gigabit Ethernet sustainable speeds:

On a modern PC with a decent OS, offload of incoming Ethernet data into a buffer is largely done with DMA; the processor has little involvement other than configuring the DMA, and on a modern desktop system the CPU has cores aplenty to handle the high-priority interrupts that the Ethernet subsystem generates, even under high load. Getting the data out of the kernel-managed buffers takes a copy, but copying 125 MB/s is not a big deal on a modern PC bus.

It is really not difficult for a userspace program to get close (within a few percent) to the theoretical maximum of a bit over 120 MB/s using TCP, even on a normally loaded system, provided there is enough buffer space for the Ethernet controller to offload incoming data into, and assuming the userspace program accepts data as fast as it comes in. Another assumption is that no other processes are eating Ethernet bandwidth, but that's easily solved by having a dedicated point-to-point Ethernet link for applications that benefit from sustained high bandwidth.

I can recommend writing a simple TCP-transferring program to experiment with, to see how easy it is to sustain close-to-maximum TCP bandwidth, userspace-to-userspace, between two hosts. I think your thoughts on Ethernet performance are overly pessimistic, and that would be a good way to discover that, or (even better) to prove me wrong :)
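A minimal version of such a program might look like the sketch below. It measures userspace-to-userspace TCP throughput over localhost; pointing the client at a remote host's address would test a real link. The 64 MiB total and 1 MiB block size are arbitrary choices of mine:

```python
import os
import socket
import threading
import time

TOTAL = 64 << 20  # 64 MiB of traffic for the measurement

def sink(listener, done):
    # Server side: accept one connection and count bytes received.
    conn, _ = listener.accept()
    received = 0
    with conn:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            received += len(chunk)
    done.append(received)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
done = []
t = threading.Thread(target=sink, args=(listener, done))
t.start()

# Client side: stream TOTAL bytes as fast as sendall allows.
client = socket.create_connection(listener.getsockname())
block = os.urandom(1 << 20)  # 1 MiB send block
start = time.monotonic()
sent = 0
while sent < TOTAL:
    client.sendall(block)
    sent += len(block)
client.close()
t.join()
elapsed = time.monotonic() - start
print(f"{done[0] / elapsed / 1e6:.0f} MB/s over loopback")
```

Loopback numbers will be far above what a real NIC can do, of course; the point is that the same userspace code, run between two hosts, shows how close to wire speed plain TCP sockets can get.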


55 minutes ago, reddish said:

I can recommend writing a simple TCP-transferring program to experiment with, to see how easy it is to sustain close-to-maximum TCP bandwidth, userspace-to-userspace, between two hosts.

@reddish, it's not that I don't believe you... but I also play games where the ball gets sent back over a net from whence it came...

Perhaps, since it's a simple thing for you to do, you could provide such a program, one that can be compiled with gcc on a Linux host. @AbbyM could then compile it on her Linux machine and her Raspberry Pi to see what she gets. By the time I get around to doing it, we'll all have forgotten what the questions were... and possibly, for me at least, what year this post appeared. :)

The main point of my last post was that there's a way for the project that the poster is working on to get completed.

Edited by zygot

54 minutes ago, reddish said:

On a modern PC with a decent OS, offload of incoming Ethernet data into a buffer is largely done with DMA, the processor has little involvement other than configuring the DMA

I've done a few raw-socket programs for very simple FPGA-to-SBC Ethernet communications. I do know that for the Zynq in the ADP3450, Ethernet is more complicated and involves CPU interaction. I would assume, though I don't know, that the RPi 4's ARM-based SoC works in a similar way to the Zynq. I have no experience playing with Ethernet on that platform.

I have done a bit of DMA work on the RPi using its other hardware interfaces and USB 2.0. For those interfaces, my experiments indicate that the RPi 3 and RPi 4 are not equivalent to modern desktop PCs with x86-64 multi-core processors, 16 GB of memory, and a separate high-performance motherboard chipset to handle IO. I wouldn't expect an SoC like the BCM2711 to have the same performance.

There's only one way to know for sure, wouldn't you agree?

Edited by zygot


The proof of the pudding is indeed an experiment.

But iperf does little more or less than what a super-basic TCP client/server would do. And I've seen 97 MB/s using iperf from the ADP3450 to a PC, which is a bit less than I expected, but still pretty decent. I would expect that PC-to-PC it is possible to sustain close to 120 MB/s.

Edited by reddish
