Everything posted by zygot

  1. This is how I would have wanted it done. The 10-50 MHz span is possibly inconvenient, but not a huge issue. I suggest that you include DC ( and AC if pertinent ) parameters for clock sources connected to Trigger1 when using the external clock. It would be nice if you showed how the pin is terminated as well. No one expects to see the complete schematics for a product like the AD3, but partial snapshots like those provided for the previous editions are good. I realize that it might be difficult to do this for the AD3 clocking. Nevertheless, I'm never quite happy with sketches of the kind provided so far for the AD3 clocking. Yes, please go back to the earlier Analog Discovery hardware reference manual style.
  2. @attila, Having the capability of a user supplied clock source is by far and away the most significant improvement of the Analog Discovery product line to date. Why it wouldn't be featured in Digilent's advertising and product reference manual is astonishing to me. This easily justifies the cost increase of the AD3 all by itself, as far as I'm concerned. I don't understand the rationale of anyone telling you "Please spare us the important details that would let us understand how your product works and how it might be useful, and make relevant information as hard to find as possible by hiding it and chopping it up into a myriad of disconnected pieces.", or why it would guide your good sensibilities. The hardware documentation for the AD and AD2 was adequate; not so for the AD3. Yes, converting signals between the analog and digital realms involves a lot of complicated theory, as does a physical implementation. If people don't want to take the time to resolve their confusion when reading good documentation then that's their privilege. Please tell customers and potential customers that you value their time and intelligence by providing sufficient product information in one place, with enough detail to let them evaluate the suitability of the Analog Discovery product line for any particular purpose efficiently. This would be good for Digilent, its customers, and everyone else. Confusing people who know what questions need to be asked is a lot worse, for everyone, than confusing people who'd rather not know too much about how to use a product. That's my opinion.
  3. A product that does AD or DA conversion and doesn't have a well-designed time base or an external clock input is fine for educational or hobby use, if it's cheap enough. Otherwise both of those things are a requirement for most serious applications. The AD3 might be a step in the right direction. That's why people like you need details.
  4. I'm assuming that the 20 MHz DSC1101 in the AD and AD2 is the cheapest +/- 50 ppm version. Even that frequency stability ( which includes frequency variations due to initial tolerance, temperature, and power supply voltage ) is exceptional for a product at the price point of these products; a back-of-envelope check of what it implies is sketched below. For the useful range of the converters used in these devices this should be very good. That doesn't mean that there aren't a lot of other things to consider, as your links point out. I haven't done an analysis of the clock circuits' stability or phase noise impact on ADC or DAC conversion accuracy. I also wouldn't expect more than about 12 bits of useful resolution out of the converters. There's no sophisticated AGC on the ADC keeping the input near full scale. They are cheap products for what they do. That's how I use them. It's possible to get more utility out of a product than it was designed to do, but then you have to verify that things work as hoped for. You can spend your time and money on expensive instruments with guaranteed specifications ( and hope that yours meets them ) or you can spend your time doing verification, assuming that you already have a lot of very expensive equipment on hand.
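A back-of-envelope sketch of what +/- 50 ppm implies, assuming the +/- 50 ppm DSC1101 grade and the AD2's 100 MS/s maximum scope rate ( my numbers, not Digilent's ):

    frequency error : 20 MHz x 50e-6         = +/- 1 kHz
    time-base drift : 50e-6 s per second     = +/- 50 us per second of capture
    at 100 MS/s     : 50 us / 10 ns/sample   = up to 5000 samples of accumulated timestamp error per second of record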
  5. It's a bit awkward that there are 2 cross-linked threads going on at once, but they are certainly about the same subject. Whether the AD3 was produced to avoid being unable to procure Spartan 6 devices at a reasonable cost or not is irrelevant to users. An external system clock input that the user can provide is a "BIG DEAL" in terms of how useful the AD3 is as a cheap, general purpose instrument. How big an improvement depends on how the clocking is implemented. The diagram above doesn't make a lot of sense to me, though a lot of decisions that Digilent makes fall into that category. I'm hoping that the diagram is inaccurate. It's just my opinion, but your current documentation will cost you sales from people who need useful technical details about the AD3 theory of operation. Digilent provided this for the original AD and AD2. I'm betting that the hardware reference manual for those products didn't hurt sales at all. Certainly there haven't been any new products from other vendors undercutting Digilent's sales of these devices. Wise up and treat your customers properly and I guarantee you at least one more sale. Having said all of that, the AD3, like its predecessors, is an educational product, at an educational product price, and with support that's better than that of expensive products with similar functionality; that is, analog acquisition, analog waveform generation, digital data acquisition, and a few other capabilities. It's pretty obvious that people with a limited budget would like to use the product line for more than educational purposes.
  6. It's disturbing to me that important information about how a product might be used is left out of product documentation and replaced by a less informative description in beta software, of all places, that Digilent makes so hard to obtain. Recently there seems to be a concerted effort at making it hard to find important information that customers and potential buyers need. This kind of attitude will have a negative effect on any future decision of mine to purchase Digilent products.
  7. I don't see anything in the AD1 or AD2 Hardware Reference that suggests that an external clock is possible. How the AD3 clocking is implemented remains unknown, as far as my search efforts have indicated.
  8. It certainly would be nice if Digilent added a description of the AD3 clocking circuitry as it did for the earlier two versions. Potential customers could figure out a lot more for themselves. This quote from the reference manual is inadequate: "The system clock can be changed to use an external reference clock ranging between 10 MHz and 50 MHz that is provided as an input to the Trigger 1 signal pin." It raises more questions than it resolves. BTW, the statement above would seem to exclude using the Trigger 1 input as an external sample clock of 1 MHz.
  9. 10 ns sample-to-sample jitter seems a bit optimistic to me if you are asynchronously sampling one of the trigger input pins for synchronization between 2 remote data collection stations. Perhaps +/- 10 ns between stations worst case, depending on other factors? I suspect that the analysis is more complicated than considering the FPGA clocking. The original AD and the AD2 provided a fairly informative description of the circuitry involved but the AD3 is more secretive. It should be noted that none of the Analog Discovery products has a specification for time-base accuracy. The time base design for all of the Analog Discovery products is pretty good, and the clock stability of the 20 MHz DSC1101 is specified in its datasheet, so I'm not sure why a time-base accuracy specification isn't provided.
  10. Not sure what you are referring to but the FTDI utility of the same name has been around for ages.. ever since the FT245 came out, possibly before that.
  11. More trivia... Before using FTDI bridge devices Digilent sold its own Digilent JTAG device. Xilinx used to put these on its own FPGA boards. I have KC705 and ZC702 boards that used them for configuration and ISE ChipScope. Digilent's first FPGA boards, like the Atlys and Genesys, used the Cypress USB bridge device. I suppose that using their own JTAG device was too expensive. The Cypress CY7C68013A had an onboard microcontroller so that functionality was programmable. I suppose that even this might constitute IP that could be licensed. For unknown reasons the CY7C68013A was abandoned in favor of the FTDI UART bridge devices. All of the Series 7 boards that Digilent sells can be programmed and connected to the Vivado Hardware Manager using the FTDI driver API.
  12. Just for the curious, here's a partial report from lsusb -v in Ubuntu 22.04 for the Nexys Video:

    Bus 001 Device 011: ID 0403:6010 Future Technology Devices International, Ltd FT2232C/D/H Dual UART/FIFO IC
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               2.00
      bDeviceClass            0
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      idVendor           0x0403 Future Technology Devices International, Ltd
      idProduct          0x6010 FT2232C/D/H Dual UART/FIFO IC
      bcdDevice            7.00
      iManufacturer           1 Digilent
      iProduct                2 Digilent USB Device
      iSerial                 3 210276689477

    Oddly, Digilent has its own idVendor number, which I'm assuming costs money to acquire, but doesn't seem to use it for any of the many Digilent FPGA boards I own. I haven't checked on the ADx products ( a quick way to check is sketched below ). Perhaps they used it with the Digilent JTAG devices that Xilinx used to use on boards like the KC705 ( that would make sense ); I don't know. Really, at this point the whole Digilent JTAG licensing thing is odd. BTW, anyone can change any of the FTDI USB bridge device EEPROM configuration bytes. Whether that constitutes an infringement is for the lawyers to decide ( and I'm not one ). What I do know is that using the FTDI idVendor and idProduct values is pretty common.
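For anyone who wants to run the same check on their own boards, here's one way to scan for FTDI-VID devices and their manufacturer strings ( the bus/device numbers above are just what my system happened to assign ):

    lsusb -d 0403:            # list every device using the FTDI vendor ID
    sudo lsusb -v -d 0403:6010 2>/dev/null | grep -E 'idVendor|idProduct|iManufacturer|iProduct|iSerial'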
  13. The easiest way to debug a project involving 2 target platforms is to use two separate PC hosts. One of the hosts might just run the Vivado Lab tools in order to capture ILA or use VIO on a target. Lab tools uses a separate installer from the regular tools but is a much smaller download. It also lets you use the Vivado Hardware Manager to connect to a target without requiring a version compatible design project; a bit file and perhaps an ltx file is all that you need ( a minimal Tcl flow is sketched below ). Recent free versions of Quartus are so bad that the 2 host method is virtually a requirement, in my experience, especially on a Windows host.
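For reference, a minimal Hardware Manager Tcl sequence for the Lab-tools-only host. This is a sketch: the file names are placeholders, and the PROBES.FILE step only applies if the design contains ILA/VIO cores.

    # run in the Vivado ( or Vivado Lab Edition ) Tcl console on the second host
    open_hw_manager                                  ;# open_hw in older versions
    connect_hw_server
    open_hw_target
    current_hw_device [lindex [get_hw_devices] 0]
    set_property PROGRAM.FILE {./my_design.bit} [current_hw_device]   ;# placeholder bit file
    set_property PROBES.FILE  {./my_design.ltx} [current_hw_device]   ;# placeholder probes file
    program_hw_devices [current_hw_device]
    refresh_hw_device [current_hw_device]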
  14. I forgot to mention this. Assuming that your board's FPGA is configured with a working PCIe application, you still might have hurdles to overcome. There are likely BIOS chipset options that by default will not allow your OS to use your board's PCIe interface even if the BIOS detects it. One such setting is the PCIe switch that can be controlled by the chipset or by software. This really depends on how recent your PC hardware is and how up to date your BIOS is. There are a lot of duckies to get in a row before using PCIe with programmable logic ( a quick enumeration check from Linux is sketched below ). Intel and AMD don't typically offer very good FPGA PCIe driver support except for their high-end accelerator card portfolio. What would be the business case for doing that?
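A quick sanity check from Linux that the BIOS and OS actually enumerated the card. This assumes the endpoint uses the Xilinx PCI vendor ID ( 10ee ); adjust the ID if your design advertises a different one.

    lspci -d 10ee:               # list any Xilinx-vendor-ID PCIe functions the OS can see
    sudo lspci -vv -d 10ee:      # show BARs, link width/speed, and any driver bound to the device
    sudo dmesg | grep -i pci     # look for link training or AER complaints during boot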
  15. I have no experience with your board specifically. I have used a number of FPGA boards with a PCIe slot edge interface in numerous PCs covering a wide range of motherboard/CPU/power supply generations. What I'm saying is that you can't assume that your PC can power an FPGA board, in particular a Virtex board, from the PCIe connector on your motherboard. You need to read the documentation for your NetFPGA SUME and your motherboard ( at least ) in order to determine if PCIe slot power is sufficient. Newer motherboards might only provide 25W of 12V to the PCIe slot. If you have an expensive GPU card in your PC then there might be other things to consider, such as whether you even have a way to power your FPGA card from the PC power supply.

    You can power your NetFPGA SUME in 3 possible configurations:
    ( 1 ) From the PCIe slot 12V and 3V pins. Not all motherboards can accommodate this option.
    ( 2 ) From the external power connector on your board and an external power supply.
    ( 3 ) From the external power connector connected to a PC power supply PCIe connector. In this case you MUST verify that the 12V/GND pins are compatible and that your board only requires an external 12V supply.

    If you are using configuration 1 or 3 then you must set up your board to configure the FPGA from FLASH and hope that configuration takes place before your BIOS detects PCIe devices. If you are using option 2, then you typically power your FPGA board before turning on the PC. You then configure the FPGA. Then you restart your PC so that the BIOS detects a working PCIe device. As long as the external power supply never shuts down, you won't lose FPGA configuration, which is needed to allow the BIOS to detect the device. I'm talking about general concepts, not providing specific instructions for your board. ( One trick for re-configuring without a full power cycle is sketched below. )

    One thing to keep in mind when using PCIe for FPGA applications is that your OS likely makes heavy use of the PCIe bandwidth for display purposes. Another thing to consider is the gap between OS and motherboard/BIOS vendors. I have a 13th-gen Intel/Z790 CPU/chipset motherboard running Ubuntu 22.04. This combination has issues with suspend/resume that occasionally cause the GPU to fail to come out of suspend mode. That's just baseline operation. When I add a PCIe FPGA card things can get crazy. A lot of very unhappy consequences can arise from debugging PCIe HW/SW applications, such as having to reboot your PC by using the power button. In this case Linux file corruption issues can be a pain to deal with. I prefer using a cheap SBC like the ODYSSEY and an M.2-PCIe adapter cable, where really bad days don't mean losing lots of money or years of work.

    The biggest problem with cheap PCIe FPGA cards ( including the NetFPGA SUME ) is the drivers. Good Windows drivers are rarely provided and Linux driver code is rarely up to date with the latest OS distributions or motherboard chipsets. Rarely do they have full PCIe functionality. One option is to use Xillybus for PCIe and communicate with your board in device mode. Unfortunately, the free demo versions are limited in performance. The good news is that the PCIe drivers are incorporated in the Linux distribution and are robust ( though I have been unable to build them for Ubuntu 22.04 as yet... )
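One trick that sometimes avoids a full reboot after re-configuring an externally powered card: remove the stale PCIe function, then force a bus rescan. This is a sketch; the 0000:03:00.0 address is a placeholder for wherever your card enumerates, and it only works if the new bitstream presents the same BARs as the one the BIOS originally saw.

    # after reprogramming the FPGA over JTAG, with the external supply still up:
    echo 1 | sudo tee /sys/bus/pci/devices/0000:03:00.0/remove
    echo 1 | sudo tee /sys/bus/pci/rescan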
  16. Gee, you seem to be trying very hard not to provide sufficient information that might allow anyone to provide an informed answer. FTDI USB UART/JTAG bridge devices have been popular with programmable logic vendors as a way to configure their devices for a long time. Using them with software on a particular platform can be tricky. For the most part, when an OS enumerates one of these bridge devices it uses some of the information about the device that is exposed to the OS. Vendor ID and Product ID are but two of these. Intel/Altera has long had a problem with COM versus JTAG enumeration on both Linux and Windows OSes. Fortunately, on modern Linux distributions you can ( read as must ) provide a driver rules file in /etc/udev/rules.d/ that helps the OS and applications figure out what the USB device does ( a minimal example is sketched below ). That doesn't mean that any particular application is going to use the rules file. On Windows things are more complicated as Microsoft isn't as transparent or open to working with users when it comes to using its OS or applications installed on it. Can you debug a Cyclone V FPGA and an Artix A200T device at the same time using Quartus and Vivado? Yes, I've done it on both Windows and Linux. Is it always a fun ride? Nope. As I write this I'm taking a break from debugging a Nexys Video/XEM7320 Ethernet project on Win10. The XEM7320 uses Vivado 2022.1 and the Nexys Video uses Vivado 2019.1. The Nexys Video is an Ethernet echo client that just receives and returns all valid packets. I have a couple of ILAs in the Nexys Video to capture Rx and Tx data. In order to do this I need to use the Vivado 2019.1 Hardware Manager. I also have an ILA in the Vivado 2022.1 XEM7320 project to capture internal states. Wouldn't it be nice if I could have both versions of Vivado open at the same time? Well, you can't, at least not and be productive. In my experience you can't even run 2 instances of the same version of Vivado without severe problems, much less 2 instances of Vivado Hardware Manager at the same time. One way around this would be to use OpenOCD and ditch the FPGA vendor tools. The problem with this is that SignalTap and ILA/VIO don't work with third party JTAG tools ( to my knowledge ). It might work fine for configuration, but not for hardware debugging. Perhaps this reply will help you clarify the question. At least it shows that there is no simple answer to the quote above.
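As an example of the kind of rules file I mean, here's a minimal sketch for the common FTDI VID/PIDs. The file name, group, and mode are placeholders; vendor-supplied rules files ( Digilent's, Intel's ) do more than this, so prefer those when available.

    # /etc/udev/rules.d/52-usb-jtag.rules  ( example only )
    # give plugdev members access to FT2232 ( 6010 ) and FT232H ( 6014 ) bridges
    SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="0666", GROUP="plugdev"
    SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6014", MODE="0666", GROUP="plugdev"
    # then reload: sudo udevadm control --reload-rules && sudo udevadm trigger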
  17. You can create your own custom target memory device in the MIG tool and specify any parameter setting that you want to experiment with.
  18. PCIe functionality for FPGA based boards involves logic resources, so the FPGA has to be configured with an application that implements PCIe functionality before the PC detects PCIe devices. This occurs at the BIOS level before the OS is booted. In order to do that you need to configure the FPGA from the onboard FLASH in the window between when the board power supplies are up and stable and when the BIOS detects PCIe devices ( a few bitstream settings that help shrink configuration time are sketched below ). How this works depends on how you power the board. If the board is powered by the 12V and 3V supplies from the PCIe slot connector pins and you auto-configure the FPGA from FLASH, then you should be OK. Newer motherboards and power supplies might not meet the 12V power requirements of FPGA boards. The alternative is to power your FPGA board externally so that you can configure the FPGA and not lose configuration when the PCIe slot 12V disappears ( during reset, for instance ). For developing code this is probably the only realistic way to use such a board. This method can cause problems, especially on newer generation motherboards, depending on the board power supply design. Usually PCIe equipped FPGA boards have a simple blocking diode to prevent the externally powered 12V rail from driving current into the motherboard. FPGA boards designed for older generation PCs likely have a number of issues if used in a current generation motherboard. It's not clear from your question how you are using the board in the context of the quote above, but perhaps this reply will help clarify the discussion.
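For the configure-from-FLASH case, the usual knobs are the bitstream options that shorten configuration time. A sketch for a 7-series part with quad-SPI flash; treat the values as placeholders, since the legal settings depend on your flash device and board.

    # add to an XDC file ( or run in the Tcl console before write_bitstream )
    set_property BITSTREAM.GENERAL.COMPRESS TRUE [current_design]    ;# smaller bitstream, faster load
    set_property BITSTREAM.CONFIG.CONFIGRATE 33 [current_design]     ;# faster configuration clock
    set_property BITSTREAM.CONFIG.SPI_BUSWIDTH 4 [current_design]    ;# quad-SPI if the flash supports it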
  19. It's hard to argue against requiring that all posts that involve AI generated content be identified as such. But how do you do that? From what I've read, even AI tools designed to expose AI generated text are pretty bad at it. How would Digilent even verify that a registered user is a human, much less that a human user isn't using AI tools? I looked at the 'examples' of supposed AI content listed above and, while I have suspected some posts of being AI, none of those were referenced in the list, and I wouldn't have suspected any of the ones listed as being AI content. There are plenty of bad ( human ) actors causing problems without thinking about computer generated issues. Personally, I don't want to waste my time providing training data for an AI. I also don't want to waste my time trying to interact with an irrational human either, or do someone's homework assignment, or get caught up in other countless unproductive interactions. Digilent doesn't even control the management of its website, so how can they resolve the AI or other issues that bother contributors?
  20. If you are using Verilog or System Verilog then you have verification options that Vivado doesn't provide, like Verilator. As @D@n points out, for ZYNQ there is no clear way to "close the loop" to include the PS cores and PS/PL interconnect. Dan has published a lot of good information about AXI testing as it relates to an all programmable logic implementation. I'm not sure how much of it is directly applicable to the ZYNQ development flow.

    For logic implemented in FPGA resources there are different kinds of simulations that can be done prior to operating the design on hardware. Behavioral simulation simply tells you whether your code, as the tools understand it, produces the behaviors that you intend. The Vivado simulator is good at finding bad syntax or a subset of errors in your sources. It's a shame that Vivado doesn't provide good code coverage analysis that uncovers less obvious coding errors like inferred latches or incomplete case statements ( a quick lint pass that does catch these is sketched below ). In my experience most companies use tools like Synplicity for this purpose. As for whether or not your code is functionally correct, time step simulators are only as good as your testbenches. Behavioral simulation is not sufficient, because even HDL source code that is behaviorally correct and synthesizable might not work in hardware after implementation. Standard time step simulation provided by FPGA logic vendor tools lets you simulate a netlist that reflects the design post implementation. This takes into account all of the delays in the signals that make it into your FPGA resources. Because the implemented logic has been changed according to your synthesis and implementation settings, many of the signals in your sources don't exist in the final implementation. But the key here is that timing closure is the final hurdle to overcome before attempting to run your design on hardware.

    Whether your design includes an ARM PS, an external PC, or, for that matter, any device external to the logic resources, traditional time step simulators are inadequate to cover all of the behavioral or timing related scenarios that need to be verified. Cycle based simulation is fast and certainly one of the tools that should be in your toolbox, especially for a design involving something as complex as a PS core external to the logic, or even a ZipCPU implemented in logic resources. Cycle based simulation doesn't cover timing completely or, to my knowledge, even a PS core running software.

    You don't have to 'package' your AXI design sources in order to do the logic part of verification for ZYNQ development. In fact I'd say that packaging your IP should only be done after simulation of your testbenches using the Vivado simulator and perhaps a third party HDL source coverage tool. Verification can't be complete at this stage because at the other end of your ZYNQ AXI interconnect is the PS hard block executing instructions. I have been able to cause AXI bus faults executing perfectly fine C code in the PS by doing reads or writes at too fast a rate. Complete verification is a bridge too far for hardware or even software. The best that we can achieve is adequate verification, which as far as I know is undefinable. AMD/Xilinx does have some documentation on verification, though it is woefully inadequate, especially for the ZYNQ devices, which, judging from the documentation, are really ARM cores with logic rather than programmable logic with hard ARM cores.
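As an example of the kind of quick coverage/lint pass I mean, Verilator's lint-only mode flags exactly those two classes of problems. A sketch; top.sv is a placeholder for your top-level source file.

    # static checks before ever opening a simulator
    verilator --lint-only -Wall top.sv
    # typical findings include:
    #   %Warning-LATCH           : a combinational block infers a latch
    #   %Warning-CASEINCOMPLETE  : a case statement doesn't cover all values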
  21. I don't understand what exactly you mean here. Are you referring to adding 'instrumentation' to your source that can help debug AXI operation? You should always write your code in a hierarchical manner so that you can separate functionality into pieces that can be understood and 'easily' verified in simulation. This is especially important for design sources that connect to external hardware that can't be simulated as a whole. The ZYNQ PS/PL connection via AXI is one example. Logic connected to an external PC host via USB is another. You can fairly easily simulate the HDL logic, but there might well be upstream USB behavior for which there is no simulation model: host software driver, application, and OS behavior. If you don't understand all of the possible behavior scenarios of the external thing that connects to your logic, it's easy to make bad assumptions that lead to bad designs. I've run into this with PC USB <--> FPGA designs. Eventually, most programmable logic designs connect to something that you can't get a model for. Sometimes, for memory devices for instance, the manufacturer provides a model. Often you have to make your own, and then iteratively improve the model and your HDL sources as you refine your verification. So, that's one answer to your post. I expect that you will get other replies, including perhaps from me as I ponder your third question.
  22. I wanted to see if the QDRII+ memory on the NetFPGA_1G_CML board is usable, despite the glaring schematic error for the CY7C2263 part. I'm using Vivado 2020.2 as that's the most recent version that I have a device license for. It uses the latest MIG 4.2 IP. In the schematic the QDRII+ connections are properly terminated for HSTL_I_18. The CY7C2263 part is powered by 1.8V. The associated IO banks have a Vccio of 1.8V. The problem is that MIG only supports the HSTL_I ( 1.5V ) IOSTANDARD. This makes it impossible to get through bitgen or to use this part of the board. Digilent did the design and manufacturing of the board and supposedly ( according to a reply to a previous post ) confirmed functionality of the board. I've been unable to get anything from Digilent that proves this, or that allows me to use the QDRII+ memory. If Digilent is unwilling to provide a workaround then it should warn potential customers that the board is not fully functional as described in the user manual and schematic.
  23. Great! Thanks for the feedback. Hopefully, you can see the basic functionality as a starting point for more interesting projects.
  24. Yes, all of the picture cropping and data manipulation has been incorporated into the header file that's been provided. The initial grid display is done in software. You don't need to install GIMP or a hex editor to complete the project. Thanks for not mentioning all of the typos ( I still find more every time I re-read the documentation ). So, once you have a bitstream, all you need to do is install the Adept SDK and create a project to turn the software sources into ( for Windows ) an x64 executable. The software sources have a few display content headers for different display modes.
  25. The sales blurb for the Eclypse-Z7 still says "The Eclypse Z7 is specifically designed to enable the rapid prototyping and development of embedded measurement systems.. reducing the time it takes for engineers and researchers to develop innovative and powerful new high-speed instrumentation, control, and measurement systems for edge-computing, medical, and communications applications." Curious as to whether this describes your experience with the board and support so far. I haven't cloned the repositories in quite a while, and an hour ago I found that just getting basic information, like how many contiguous ADC samples the AXI controller supports, has gotten a lot harder to find. Still looking, by the way. If your application is mostly implemented in your PL logic and you only need a simple, low speed way to write control registers and read status registers to control your design, the simplest way might be to use the spare PS UART, through the EMIO, to connect your software to your logic. The basic idea can be found in the tutorial: https://forum.digilent.com/topic/22512-manipulate-pl-logic-using-ps-registers/ It might not be appropriate for your requirements, but if it is, getting your design working might be a whole lot easier. It might be worth looking at.