
Fast ADC sampling with Eclypse Z7


Quello

Recommended Posts

Hi!

This summer I will be working as a summer intern. I study cybernetics and robotics. I know C/C++, Python, a bit of basic electrical engineering, a bit of digital design and embedded software development (I have worked quite a bit with AVR chips), and I am also familiar with ROS (Robot Operating System). I am obviously far from an expert in any of these areas, but I would say I have a solid foundation in them.

Anyways, enough about myself. I am writing to this forum because I seek advice. The task handed to me is quite open, and the whole project is really in the start phase. Let me briefly explain what it is I am working on.

The goal of the project (not to be completed by the end of my internship) is to develop a prototype sensor for measuring PD (partial discharge) in high voltage cables. I do not think I need to spend more time explaining the details of the measuring technique or the physical phenomenon, as it is irrelevant for my task. What you need to know is that I will be measuring voltages in the range of roughly -1 to 1 V. The bandwidth that I am interested in is between 1 and 100 MHz. Therefore we need a fast ADC. It has also been discussed to use a mixer together with a slower ADC, but in the end the choice fell on the Eclypse Z7 because of its fast ADCs (Digilent Zmod ADC 1410). My task is therefore to somehow use this board for detection of the abovementioned PDs and send information to a host PC (the information could be peak voltage detected, frequency, count of PDs so far, just the fact that the voltage crossed the threshold value, or a combination of these). I have discussed extensively with several employees here about potential solutions, but the more input the better, so I would love to hear your opinion on this. Options we have discussed include:

1. Coding bare metal; maybe as a start just send a message if a voltage above some threshold value in the abovementioned range has been detected (I implemented a start of this but stopped when I realized how slow UART is - I looked at using Ethernet, but that was no fun).

2. Using an RTOS with, for example, lwIP for Ethernet communication and sockets. This probably makes life a bit easier (I have no experience with RTOSes, but I have used ROS - which I naively imagine being somewhat similar).

3. Using Linux (PetaLinux). This probably makes interfacing with the board the easiest, but I am unsure if it is even possible to manage so much data from Linux, and even if it is, I don't know how (embedded Linux is something I have no experience with at all, but I do have experience with both Linux and embedded software). It has also been mentioned that it should be possible to run Linux on one of the ARM cores on the Zynq chip while running bare metal on the other. This could make it possible to have the best of both worlds - the speed of running bare metal and the ease of use that comes with an OS.

4. Creating an IP block that somehow filters out the voltage measurements that probably belong to a PD. This idea is pretty open and I haven't spent much time fleshing out how it would work. I would love to learn some FPGA design, DSP and VHDL, but this seems a little too ambitious for a summer job (after all, it's just about 6 weeks).
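As a very rough offline sketch of what I imagine such filtering doing - the threshold and hold-off values here are made-up numbers, not anything from the actual project - something like:

```python
# Hypothetical offline prototype of the PD filtering idea: flag samples
# that cross a threshold, ignoring re-triggers while a PD is still ringing.
# Threshold and hold-off are invented parameters for illustration only.
def detect_pd_events(samples, threshold=0.5, holdoff=300):
    """Return start indices of threshold crossings, suppressing re-triggers
    within `holdoff` samples (one PD spans many consecutive samples)."""
    events = []
    last = -holdoff  # allow a trigger right at index 0
    for i, v in enumerate(samples):
        if abs(v) >= threshold and i - last >= holdoff:
            events.append(i)
            last = i
    return events

# Quick check with a synthetic trace: quiet, one short burst, quiet.
trace = [0.01] * 1000 + [0.8, -0.7, 0.6] + [0.01] * 1000
print(detect_pd_events(trace))  # -> [1000]
```

Prototyping the detection logic like this on recorded data first would let me tune the threshold before committing anything to hardware.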

So far, apart from thinking hard, researching and drinking coffee, I have created a Vivado project with the Zmod ADC controllers and LP and HP filters. I have written a standalone application that verified that it was all set up correctly. I used a signal generator, printed the output of the filtering over UART, and plotted it in Python.

I would love to learn during this summer, but I would also love to have something somewhat finished to show for it. It is obviously not expected of me to finish the sensor this summer. As I mentioned, I am also a bit time constrained in what I can achieve. Based on my experience and the available tools, what would you recommend I spend my time on?

Sorry for such a long question. I hope it doesn't scare too many people away. Thanks in advance.

-Markus

Link to comment
Share on other sites

Hi Markus,

Sounds like an interesting summer ahead. You've gotten off to a good start with the tools and a selected platform.

Unfortunately, there are no SYZYGY ADC pods that I know of that support a 100 MHz analog bandwidth of interest, so this is something to investigate. It's a curious mystery as to why no one has designed a SYZYGY converter pod with sampling rates and an analog bandwidth more suitable to its capabilities. Even Digilent's high-end instruments don't use the advanced Series 7 IO features to get the kind of performance one would expect. You don't mention how many contiguous samples your PD events last. The Eclypse-Z7 has a limited ability to collect contiguous samples. I'm assuming that a PD is a one-shot phenomenon.

As for how you report your processed data results, the description that you've provided suggests that a 921600 baud UART might be just fine; it depends on how much information, and in what form, it's presented. If you decide that the Eclypse-Z7 is appropriate then there's always Ethernet. If that doesn't seem like much of a challenge, there's the OTG USB interface. Have you implemented a ZYNQ Ethernet design that transfers large amounts of data between a PC and the FPGA? This might involve more excitement and fun than you expect.
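To put a rough number on the UART option - the 32-byte report size is just an assumption for illustration, not anything from your project:

```python
# Back-of-envelope check of whether a 921600 baud UART can carry PD event
# summaries. The per-report size is an assumed figure, not a spec.
baud = 921600
bits_per_byte = 10            # 8 data bits + start + stop bits (8N1 framing)
bytes_per_sec = baud // bits_per_byte
report_bytes = 32             # e.g. timestamp + peak voltage + count, as ASCII
reports_per_sec = bytes_per_sec // report_bytes
print(bytes_per_sec, reports_per_sec)  # -> 92160 2880
```

A few thousand event summaries per second is likely far more than you need; the UART only becomes a bottleneck if you want to ship raw waveforms.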

There certainly are applications for which the Eclypse-Z7 is an appropriate platform to meet project goals. Those would be a small subset of potential applications due to its design.

Hi Zygot. Thanks for the reply, you bring up a lot of important stuff. Let me try to unpack what you say, and try to answer your questions.

Could you explain what you meant when you wrote "there are no SYZYGY ADC pods that I know of that support a 100 MHz analog bandwidth of interest"? As far as I've understood, the Digilent ADC can run at 100 MHz, giving me a max frequency of 50 MHz on the captured signal, right? I don't remember the exact frequency band of the signal, but you are right: a frequency of over 50 MHz cannot be captured correctly. I guess I should have specified a frequency band of 1 to 50 MHz. Was this the point you were making? And also, if I understand you correctly, you are implying that capturing higher frequencies isn't possible without changing the ADC. I tried reading up a little bit on SYZYGY, and I am guessing that in this case it is integrated on the ADC board (https://www.mouser.com/new/digilent/digilent-zmod-adc1410)? Anyways, 50 MHz is probably good enough, at least as a start.
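Just to convince myself of the Nyquist argument, I wrote a small check of where an out-of-band tone ends up after sampling (the fold-back formula is the standard aliasing relation; the 100 MS/s rate is the Zmod's):

```python
# Sanity check of the Nyquist limit: with Fs = 100 MS/s, any input tone
# above 50 MHz folds back into the 0-50 MHz band and is indistinguishable
# from its alias.
fs = 100e6  # Zmod ADC 1410 sample rate, samples per second

def alias(f, fs):
    """Apparent frequency of a tone at f after sampling at fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

print(alias(20e6, fs) / 1e6)  # -> 20.0  (in band, unchanged)
print(alias(80e6, fs) / 1e6)  # -> 20.0  (folded: 80 MHz looks like 20 MHz)
```

So without an anti-alias filter, any PD energy above 50 MHz wouldn't just be lost, it would show up disguised as in-band signal.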

Now let me answer some of your questions. A PD lasts a few microseconds; actually, I am unsure what the best way is to know how long to measure - but that's a digression. I have indeed tried implementing Ethernet in a bare metal project. I gave up after a couple of days of interpreting the PHY data sheet, as the xemacps library doesn't seem to support the Realtek PHY on the board.
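For what it's worth, a few microseconds at 100 MS/s isn't actually much data per event - a quick calculation, where the event duration and the capture margin are my guesses:

```python
# Rough buffer sizing for a one-shot PD capture: a few microseconds of
# signal at 100 MS/s plus some margin. The 5 us duration and the 2x
# margin are assumptions, not measured values.
fs = 100e6           # Zmod ADC 1410 sample rate, samples per second
event_us = 5         # assumed PD duration in microseconds
margin = 2           # capture 2x the event for pre/post-trigger context
samples = round(fs * event_us * 1e-6 * margin)
bytes_needed = samples * 2   # 14-bit samples stored as 16-bit words
print(samples, bytes_needed)  # -> 1000 2000
```

A couple of kilobytes per event would fit comfortably in on-chip block RAM, which I guess is part of why a pure-logic capture path is attractive.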

Some other things that caught my attention: a baud rate of 921600? I had no idea it was even possible to have such a high baud rate - that sounds great! UART is something I am quite comfortable with, at least compared to Ethernet. I will also look into OTG USB. What do you think is easier to implement, Ethernet or USB?

Also, I would love to hear if you think it is possible to use an OS (Linux or an RTOS) and still process the data quickly enough (we really don't want to miss any PDs).

Markus


Yup, if you change your bandwidth of interest to 50 MHz then an Fs of 100 MHz or 125 MHz might be fine. I specifically use the word might because it depends on the details that you are looking for. There's a difference between digitizing repetitive signals and one-shot signals. I guess that this analysis is part of your project to work out.

You are correct that Ethernet isn't necessarily easy. There is a difference, in terms of design tasks, between using an Ethernet port connected to the PS of a ZYNQ based board like the Eclypse-Z7, a soft processor like MicroBlaze implemented in logic, and an Ethernet PHY driving pure HDL logic. All have their own set of barriers to overcome. Typically, people use Linux as a way to hide the software complexity involved. Unfortunately, for software-dependent design choices like hard or soft processor implementations, using Linux isn't trivial. You will need to do your development on a Linux host for Xilinx-based platforms.

921600 is the upper limit of most OS UART support if you aren't using flow control. It's my default baud rate for debugging and PC connectivity. It's also the upper limit for USB UART bridge COM/TTY device driver support. The only gotcha is the depth of the data FIFO in the FPGA hardware and OS. Transferring large amounts of data can require flow control. This is easy to overcome in an FPGA design, perhaps more complicated for a Windows/Linux software application. For FTDI UART bridge (*H) devices using the D2XX driver and custom software applications you can do up to 12 Mbaud, depending on the bridge device. The UART is a pretty useful PC interface for most projects. The main issue with a UART is that it's generally restricted to ASCII characters. You can, of course, convert each raw byte into two ASCII hex characters in exchange for half the data rate. I have code posted in the Digilent forums to demonstrate.
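The hex-to-ASCII trade-off is easy to see in a few lines of Python; this is just a sketch of the idea, not the code I mentioned:

```python
# Illustration of the ASCII-hex trick: sending each raw byte as two hex
# characters keeps the link text-only, but halves the effective data rate.
def to_hex_ascii(data: bytes) -> bytes:
    """Encode raw sample bytes as ASCII hex for a text-only UART link."""
    return data.hex().encode("ascii")

raw = bytes([0x3F, 0xA0])
encoded = to_hex_ascii(raw)
print(encoded)                  # -> b'3fa0'
print(len(encoded) / len(raw))  # -> 2.0 (twice the bytes on the wire)
```

The decoder on the PC side is the mirror image (`bytes.fromhex`), which is one reason this scheme is so convenient for debugging.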

Perhaps the most important part of your project is partitioning the implementation between software and hardware... assuming that you decide to use software. It is possible to use your Eclypse-Z7 platform to implement your project (based on what I've been told so far) without using the ARM cores or any software. My only advice is this: let the FPGA do what programmable logic excels at - high speed, tight timing control, parallel process execution, etc. Let any processor do what it's good at - complex state machines like a full TCP/IP stack, quick functional modification, using known canned libraries for functionality, etc. As always, the most important part of any design is the first stage; that is the part where you work out what all of the important details are and how you plan on addressing them. If you don't get this part close to being correct enough, everything that follows will become exponentially more difficult or impossible to resolve.

The ZYNQ ARM processors are quite fast... fast enough to cause bus faults on poorly implemented AXI busses. It's possible, depending on project requirements, to avoid AXI busses and still connect the PS to the PL. Pairing a fast processor with low-latency interaction with hardware signals isn't usually practical. Beware of naive expectations unsupported by experimentation.

I just had an interaction with another person concerning the ZYNQ and OTG. Be careful.

Additional thoughts. I want to make clear that what follows should not be taken as advice or suggestions as to how you should conduct your project. I'm trying to be careful not to influence your decision making, for a variety of reasons, in fairness to you.

One of the most important parts of this kind of project (and I've done a lot of this kind of project) is selecting the best platform on which to implement it. Due to design and implementation decisions, the Eclypse-Z7 is not an easy platform to work with. Digilent's software support and their use of GitHub are, ahem, less than ideal. If you read the Digilent sales claims for the Eclypse-Z7 you might have the sense that it comes with full support to take on any random project with minimum added engineering effort on the part of the user. This is an assumption that might have horrific consequences for many customers. Since you have the board to play with, you can get to know its potential and quirks, warts and all. There are non-ZYNQ platforms with SYZYGY interface connectors... Digilent sells one, though I haven't used it. The extra effort and potential problems that a dual-track development project entails, that is SW and HW (plus integration), may be worthwhile or just a hindrance. The difference might hinge on respective skill levels and comfort levels, but if a project's goals can best be accomplished using just HDL development then that would be high on the list of considerations as one plans out the implementation. Of course, sometimes the development platform is preordained. Usually, more elegant and less complicated approaches work out best. A lot of variables go into such a calculation. I've concluded over 15 projects for the Eclypse-Z7 just to evaluate its suitability as a platform for the kinds of projects that I might want to use it for. I've posted 1 demo project for the board.

One advantage of an FPGA platform without software (for this discussion, specifically ZYNQ) is that the logic has a direct connection to a large external memory to store captured data. This is ideal for situations where you might want to capture unscheduled, random events, with indeterminate event periods and indeterminate time periods between events, especially if you don't want to miss any events. You might or might not be able to do the same thing using an ARM-based FPGA... but it certainly will be a more complex solution involving more effort.

Sometimes projects come with constraints that limit your design and implementation choices. Sometimes, often with programmable logic projects, you are less constrained with choices and free to 'go wild' with almost endless possibilities. The difference is in how many and how well hidden the 'surprises' are. For a project involving a micro-controller you have a lot of fixed options: fixed machine language, fixed hardware resources, fixed software tool-chain, (hopefully) adequate documentation. For a project involving programmable logic there are few such constraints. This makes preparation work all that much more important.

A lot of new considerations here. Thanks, this was very useful. I am currently taking a week off from work. Hopefully I will have some time to think about all of this.
 

With regards to what you said about doing a project using just HDL: this sounds like a really interesting direction; however, I imagine that, in my case, hoping for anything approaching a finished project is somewhat optimistic. Maybe you disagree. The question I really don't know the answer to is: how long does it take to learn HDL design? I understand that such a question only has one correct answer - it depends. But getting an estimate would be very valuable.

Anyways, thanks for taking the time. Really appreciate it.


There are a lot of similarities between HDL languages like VHDL and Verilog (and SystemVerilog) and high level processor languages like C and Ada. There are also significant subtle and not so subtle differences in the concepts; these differences are what can prolong the path to achieving competence in programmable logic development. Notice that I refer to development for logic design competence. That's because the verification tools and skills for programmable logic design verification are significantly different than for software verification. This is mostly because there are a lot more details and variables to consider for programmable logic connected to external devices operating in real time. Concepts like time and time delay, or even what a logic state happens to be at any instant of time, are a lot more fine-grained in logic design than in software design.

If you approach the HDL design/development flow as more akin to digital design, but using text to represent your architecture rather than schematic capture, then I believe that learning it will be easier and quicker. If you simply relate to the HDL design flow as if it were just another high level software language, then getting good at programmable logic design will take longer. Of course, anyone can design bad logic and bad software. For both software and logic there's just no substitute for experience. For both disciplines the goal is reliable, robust, error-free performance over all operating conditions. Achieving a low level of competency in either discipline without progressing to the next level is something that we all face. So this is my segue to where I say that learning competency in programmable logic design is very hard to do as a self-taught discipline. Having the support and guidance of others who are more competent and experienced is a great benefit. But of course, you beat me to the 'it depends' part of the answer. Some of us are fast studies; some (like me) are not. Studies have shown that the quality of self-assessment is worse than most of us would assume, but I guess that this is the best that we have. So, having a good ability at self-assessment is a good start to knowing how long it might take to achieve a level of competency using the HDL design flow.

Both VHDL and Verilog started as languages for simulating hardware behavior. In the early days of programmable logic neither was available as a way to describe your design. VHDL is a child of Pascal and Ada, Verilog a child of C. If you are starting out I'd recommend Verilog, for a variety of reasons that I won't go into here. Some people think that VHDL is harder to learn... others think that Verilog is harder to learn. One thing for certain is that, like C versus a strongly typed language such as Pascal, Verilog allows you to write perfectly valid code in a very terse and obtuse style. This can make other people's code harder to figure out.

As I mentioned before, verification is key to good software development and hardware development. The difference is in complexity. I frankly don't know how software verification is taught in schools. For programmable logic development, verification by writing testbenches should be a basic part of any good coursework, as they are inseparable processes in the design phase. Writing a good testbench for programmable logic device simulation is a lot harder than the actual HDL design, as the results depend on how well the testbench covers all of the pertinent details. At some companies, it's a whole separate specialization.

The good news is that there are several levels of logic verification. Behavioral simulation simply confirms that your HDL code does what you think it's doing, without detailed timing considerations. RTL simulation covers more detailed considerations, including the behavior of external components that your logic is connected to. Timing simulation takes into account the behavior of your synthesized, placed and routed logic. Fortunately, FPGA vendor tools come with simulators suitable for behavioral and timing simulation. They tend to be deficient in the area of corner-case coverage, but there are free alternative tools for automated coverage available, especially for Verilog. If you want a good logic simulator with code coverage you will have to pay for it.

So this presents something of a tautology. Your design HDL is only as good as your understanding of the details of your design requirements, your conceptualization of what's happening in actual hardware, and the quality of your HDL constructs and structures. Your testbenches are written in the same HDL, though usually using parts of the language that are not synthesizable, and are only as good as your understanding of the corner cases and the details of how actual hardware operates. What's it all mean?
For simple behavioral verification of boolean logic, simulation might just be a quicker way to identify errors in your HDL expression than the tool flow is. For complicated logic designs, operating at high clock rates and connected to external devices, it means an iterative process of discovery and trial and error. But regardless of your design, it's foolish to try to create logic designs without creating verification testbench designs and simulating the combination. My view is that verification is a process of debugging and improving both the designer and the design. Neophytes tend to exclude the human connection to the design expression.
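If a software analogy helps: the shape of a behavioral testbench is roughly the following, with a trivial rising-edge detector standing in as the design under test. A real testbench is written in the same HDL as the design; this Python sketch only illustrates the stimulus/expected-response idea:

```python
# Software analogy of a behavioral testbench. The "design under test" is
# a trivial rising-edge detector; the "testbench" drives a stimulus vector
# and checks the response sample by sample (cycle by cycle in real HDL).
def rising_edge_detector(samples):
    """Emit a one-sample pulse whenever the input goes 0 -> 1 (the DUT)."""
    prev, out = 0, []
    for s in samples:
        out.append(1 if (s == 1 and prev == 0) else 0)
        prev = s
    return out

# "Testbench": apply stimulus and assert the expected response.
stimulus = [0, 0, 1, 1, 0, 1, 0]
expected = [0, 0, 1, 0, 0, 1, 0]
assert rising_edge_detector(stimulus) == expected
print("behavioral check passed")
```

The hard part in real verification, which this toy example hides, is choosing stimulus that actually exercises the corner cases.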

All of this has been a bit long-winded. I think that what you really want to know is whether learning an HDL is possible in a short window. Well, yes, I think that it's possible. It's certainly not easy. Unfortunately, the only alternatives are the IPI flow and FPGA board supporting code. If you are lucky with both of these you might be able to complete an arbitrary design project with relative ease, as what's provided does everything that you need. Except for replicating particular designs, you should not expect to get lucky. For the IPI flow, when the 3rd party IP doesn't get the job done, all you are left with is your HDL skills and lots of code that is very hard to understand, or impossible to understand because it comes with encrypted sources. I've used a lot of FPGA development boards from a lot of vendors and I've yet to come across one with sufficient support to use all of the board resources and external components for any project that I want to do. The IPI flow might be a quick prototyping experience for a very competent HDL flow designer wanting to get a feel for how much work his project will be.

So now you have a sense of why I recommend the HDL flow to beginners. It's a complicated and arduous journey even when guided by a structured learning environment. It's really hard for most people to learn on their own. It's also, in my opinion, unavoidable.

There are websites that will assist the self-educational approach to HDL design. Unfortunately, there isn't a lot of online help available for writing effective testbench code for logic simulation. If you look around the Digilent forums there are design and testbench examples available as templates. More so than with software sources, understanding how or why HDL code works is generally not as easy as just reading the sources. Learning by imitation is a hit or miss proposition for both software and logic design and development.

On 7/2/2022 at 3:19 AM, Quello said:

With regards to what you said about doing a project just using HDL

With the information that I have so far, I'd see this as an HDL-only project. If I had to use a ZYNQ device I might include a minimal amount of SW, if there were a compelling need. I'd rather do HW development, though I usually do both for most projects. My wheelhouse is digital and programmable logic design and development, so when I can I eschew components requiring software development. I've built up a considerable toolbox of IP that I've written and used for years. I've developed debug tools that I use frequently. I mention this because you are a different person. You have to fit your approach to your strengths, which are unlikely to be similar to mine.

You might not want or need your logic to have its own external memory. You might find that Digilent's Eclypse-Z7 resources are a good fit for your project. These are things for you to figure out for yourself. I can provide one perspective based on my personal experience that might, or might not, help set a course for your project that steers clear of reefs and treacherous seas. One thing that I can't know is what the best course for you is.


I understand. I will think about the best course to take. I am definitely more familiar with SW development than HW development, but learning a new skill never hurts. Your feedback has definitely been helpful. Thank you. I might reach out again if I feel stuck. Again, thanks.

 

