
zygot


Reputation Activity

  1. Like
    zygot got a reaction from mutilenka in Capability comparison of two different fpga devices   
    The A7-35T is probably a bit small for video applications, even low resolution VGA ones, but you can do simple display things with the Basys3, as this tutorial points out: https://forum.digilent.com/topic/19910-basys3-game-tutorials-beeinvaders/

    A problem with small devices like the A7-35T is that Vivado IP and MicroBlaze will use up a lot of its resources if you have to use that design flow. For an all HDL design you can fit a lot of functionality into even the smallest A7 device if it doesn't use external DDR3 memory.

    Someone with a very tight budget has to be very careful before spending money. These days boards don't necessarily come with all of the stuff that you need to use them, like USB 2.0 cables, power supplies, etc. Also, because of global chip shortages, prices for even old boards have gone up 50-100% over what they sold for when originally released.

    I'd recommend that you take an inventory of everything that you will need to actually enjoy the fruits of your labor and add up the costs. It might be cheaper to spend more money on something that provides most of it than to start off with a minimal expenditure and buy add-ons as you go. For any kind of game experience you not only need a video output and a cable that's compatible with a monitor that you have handy, you also need some way for the user to interact with your application. Do you expect to have audio? Do you need joystick inputs? Make a list and add it all up.

    Lastly, and perhaps more importantly, how are you going to proceed with your project goals? Do you want to learn FPGA development, or are you more interested in bootstrapping projects that you've found on the web? If it's the latter, then buying a platform for which project code has already been written might make more sense.

    Be careful of making an investment in something that may not allow you to get to your end goal, or will be a lot more expensive by the time that you've purchased all of the extra stuff that's needed... only to find that you have a platform with very limited potential. Most cheap FPGA boards are designed to sell you more stuff. If it's a trap, avoid it; if it gets you to your goal within your budget, then do your homework and go for it if it makes sense. Even some cheaper Intel FPGA boards have PMOD connectors these days. Be cautioned that Intel has restricted the free version of Quartus to Cyclone 10 LP and earlier, and that some of its free IP, like DDR interfaces, is broken or very hard to use.

    Cheaper Xilinx FPGA boards might not be the best choice in terms of value for cost. The Cyclone V Starter Kit is available for about double what it originally cost, but comes with a lot more features than a really cheap A7-35T board. I'm not recommending that as an option, just offering it as an example of something different that's currently available from distributors. Unfortunately, this is not a good time to dive into cheap FPGA board development on a very tight budget.
  2. Like
    zygot got a reaction from JColvin in Using Vivado create_generated_clock   
    You have a few misconceptions going on here.

    The create_clock and create_generated_clock tcl commands are for timing constraints, not for generating physical clocks in a design.

    In programmable logic, clocks are different from other signals in an HDL design. FPGA resources and routing resources for clocks are separate from those for other signals in a design. Most FPGA devices don't have an on-chip clock source. So, a clock begins life in a clock module that is external to the FPGA and is connected to a clock-capable pin ( in Xilinx terminology ). From there it can be used to clock the internal user logic, or it can go to an MMCM or PLL, which in turn can generate multiple output clocks that can be used in a design. This is how you should create a clock. You can use Verilog, SystemVerilog, or VHDL to instantiate a primitive, or use the vendor IP; it's up to you. Until you know what you are doing, I'd suggest using the Clocking Wizard.

    While you can use a counter or divider, or whatever suits your fancy, to create a logic signal that you then use as a clock for other logic, this is a very, very bad idea. Don't do it. The correct way to create a custom clock with a specific frequency is to use an MMCM or PLL. That's what they are there for.

    Going back to your original tcl commands. Let's say that your FPGA has an external clock module that puts out a 100 MHz clock on a pin assigned the name sysclk. In your constraints file you can create a timing constraint that tells the tools some basic information about sysclk using the create_clock command. At a minimum the tools need to know the period and duty cycle, as your create_clock command provides. Now, if you instantiate a PLL, run sysclk into it, and have the PLL create a 10 MHz output clock that you name clk10, you can use create_generated_clock to create a new timing constraint for clk10. If you use the Clocking Wizard, it will also create its own timing constraints file, in addition to yours, with the pertinent information. In recent versions of Vivado this can cause issues, or at least a warning that you are over-writing a timing constraint.
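    As a rough sketch of what that scenario can look like in an XDC file ( the pin and instance names here are invented; check yours against the synthesized netlist ):

        # 100 MHz external clock on the sysclk pin
        create_clock -name sysclk -period 10.000 -waveform {0.000 5.000} [get_ports sysclk]

        # 10 MHz generated clock out of the PLL. Vivado usually derives this
        # automatically for MMCM/PLL outputs, so only add it if the tools don't
        # already report a generated clock on that net.
        create_generated_clock -name clk10 -source [get_pins pll_inst/CLKIN1] -divide_by 10 [get_pins pll_inst/CLKOUT0]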

    In order for the synthesis and routing tools to properly create an implementation and place it, they need basic timing information for the clocks that drive the signals in a design.

    It's possible to create a Vivado project and generate a bitstream using tcl but, except for constraints, an HDL is better for describing your design to the tools.
  3. Like
    zygot got a reaction from BMiller in GPU -> FPGA via DisplayPort Video Interface   
    I've been dealing with programmable logic vendors for a long time. The following is speculation, but perhaps informed speculation. FPGA vendors don't give anything away for free. The first thing that they don't want to do is let customers think that they can use a smaller part, or a part from a cheaper family, to accomplish their goals. So, when they are being nice, they just make it hard to use certain device features with cheap parts, by either restricting documentation and application notes or making it hard to use their IP with cheaper parts. When they don't feel like being nice, they disable the capability to use a feature in the tools. One example of this is using transceivers for Cyclone V JESD204 applications. They advertise transceivers in Cyclone V that can do JESD204. They advertise JESD204 for Cyclone. But I'd be interested if anyone has found a way to actually implement a JESD204 design for Cyclone V using the free IP that comes with Quartus; the tools say no, Stratix only. Transceivers have historically been a feature used to sell more expensive devices. Every Artix device sold has at least 1 transceiver in it. You wouldn't know it from reading AMD/Xilinx Application Notes and White Papers.

    Besides not wanting a cheap family to undercut sales of expensive devices, vendors really don't want customers thinking that they can port designs to a device from a competitor. That's why FPGA vendors put so much energy into proprietary soft-core processors and SDK tools. You simply can't port them to a competitor's tools. Better yet, you can make customers reliant on soft-core IP to do things like Ethernet connectivity. If an FPGA vendor thought that a particular market sector application was important enough to take away sales from a competitor, you can be sure that they would provide the IP to do so. Xilinx in particular likes to restrict some IP, like video, to AXI bus implementations that might make sense for ZYNQ and are required by MicroBlaze, even though AXI uses more memory and is more complicated than the average customer wants to invest time in.

    Of course, down the chain are "partnerships" with IP and hardware vendors. There's a certain amount of loyalty to those loose agreements; similar to the phrase "loyalty among thieves", depending on the marketing aims of the moment.

    You don't necessarily have to buy HDMI IP. While difficult, you can do video applications without it. Long time, but recently absent, user hamster has done valiant work in this arena.

    In the end programmable logic vendors are businesses ( though 80% of the FPGA market is now the property of CPU vendors with their own agendas ) and businesses have to make money, so I'm not sure that the reasons why they do things are all that important to their users.

    As an aside, I've recently read that Intel is going to a marketing scheme where a customer can purchase functionality, for a price, that can be "enabled" in their CPU devices on live end-user products. That's where the world is headed, whether you like it or not. The age where companies competed for customer sales by providing a better product or better service has been dead and buried for a while now.
  4. Like
    zygot got a reaction from JColvin in Debugging with the FTxxxx Mini-Modules   
    I often use a UART for debug and as a user interface for projects. 921600 baud that can be used with any serial terminal program is nice for some things, but sometimes you need a faster, more flexible interface. Here I provide an FPGA interface that uses 4 pins and a separate FT4232H or similar mini-module to access design resources at 1.2 MiB/s full duplex. The first demo is for the CMOD-A7 35T.
    The demo has something of interest for most readers besides a faster UART.
    CMODA35T_DBUG_DEMO_R1.zip
  5. Like
    zygot got a reaction from Martin Rozkovec in microUSB connector replacement for Nexys4 DDR   
    Micro USB connectors lacking through-hole shield tabs are simply unsuitable for general purpose use. For a student lab environment any product using them should be completely avoided. When these connectors break off... and eventually they will break off, all of the ground, shield and signal traces on the top layer of the board generally go with the connector, leaving the board almost irreparable. It's not just cheap boards that feature these abominations. I have expensive Nvidia boards that currently have these connectors missing.

    It might be possible to be proactive when you have to use boards featuring connectors designed to break off by applying a large dab of epoxy to add some mechanical integrity; but even this is a poor substitute for through-hole soldered tabs.

    Yes, the odd through-hole tab slot adds a bit of cost to a cheap product, but no customer wants to save a few pennies when the cost is a product that's useless after a few months of normal use.
  6. Like
    zygot got a reaction from Dereck in Vivado Program Size for Basys 3   
    AMD/Xilinx has never figured out how to distribute their tools. At this point, it's a lost cause. A 120 GB download is just too big. I thought that the 65 GB download for Vivado 2019 plus the SDK was too big.
    The sad truth is that the older versions of Vivado are just as good as, or in some cases better than, the current version. Vivado 2018.3 will work just as well and is, I think, less than an 8 GB download. Early versions of the tools were distributed on a 4 GB DVD. The only problem with using older tool versions is that Vivado keeps changing the syntax for constraints, IP behavior, database structure, etc. On the positive side, if you want to rebuild a demo for your board that was created in an older version ( most are ), then everything pretty much works.
    I have multiple PCs with older OSes and older FPGA vendor tools and often resort to using them to do work, because trying to accommodate all of the bugs in the newer tools is just too much of a hassle... it's quicker to just use the appropriate version of the tools.
    You can save some space by not installing support for devices that you aren't using, but it's not as much space as you might hope for.
    One thing that AMD/Xilinx could do is provide a non-GUI download option and better support for script-managed project development. There's a lot of really bad coding, bug creation, and tool bloat associated with the GUI.
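    As an example of the script-managed direction, a minimal non-project batch build can be this short ( a sketch; the file names and the Basys 3 part number are placeholders for your own ):

        # run with: vivado -mode batch -source build.tcl
        read_verilog top.v
        read_xdc top.xdc
        synth_design -top top -part xc7a35tcpg236-1
        opt_design
        place_design
        route_design
        write_bitstream -force top.bit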
    Whether this is a path that you want to consider is something for you to work on.
  7. Like
    zygot got a reaction from Udayan Mallik in Arty A7 100 Ethernet to LEDs Circuit   
    Perhaps. It all depends on your work flow, design flow, and what you want to do. I'm not sure who or what you are referring to when you mention "Cores provided by the Manufacturer". If Digilent has any RGMII or SGMII to GMII cores, I've not run into them. All FPGA vendors that I know of take great pains to tie their Ethernet PHY "cores" to encrypted MAC cores and their proprietary soft-processor IP. In the case of UltraScale, Xilinx obfuscates its actual PHY implementation and embeds a "virtual" serial management interface with a pass-through to the actual physical PHY. If you want to use their Ethernet cores, you have to implement your own serial management interface and program a couple of internal core registers in order to make them work. What are you going to do when your boss tells you that starting tomorrow you'll be using programmable devices from a different vendor, and you have a week to port your designs? Why would anyone consider this a best bet?
    Sure, if you aren't using your FPGA to do anything else you can feed lots of resources to a MicroBlaze and have fun creating and debugging a loadable executable with the SDK or Vitis. If all you want to do is be able to rebuild someone else's project or a simple application, perhaps this is OK.
    None of that works for me. If what you want to do is learn about Ethernet and implement an easy to use point-to-point full-duplex communication interface that works at a much higher bandwidth, then I suggest ignoring everything that FPGA board vendors and FPGA vendors are offering you. With a bit of work you'll end up with something that doesn't have any vendor IP, uses a fraction of the FPGA resources, doesn't have any FPGA software, and ( with a bit more work ) is portable to any FPGA vendor's tools.
    Sometimes free stuff is worth the price; often it ends up being too expensive and not all that useful.
  8. Like
    zygot got a reaction from BMiller in Arty A7 100 Ethernet to LEDs Circuit   
    This is a good project, though perhaps LEDs aren't the best indicator of functionality...

    The Ethernet PHY is a great general purpose interface. It's basically a cable modem. If you understand, or want to understand, some basic modem concepts it's a great interface to learn about. A 1 GbE PHY can do over 120 MiB/s sustained in each direction in full-duplex operation ( 1 Gbps is 125 MB/s raw, less a little framing overhead ). Unlike most serialized interfaces, Ethernet can be highly predictable in terms of timing.

    The place to start is to read about how Ethernet works. Unlike most popular serial interfaces, specifications and descriptions of the protocol and physical layer are public knowledge and freely available. It's well worth the effort, as all FPGA vendors only provide Ethernet support connected to encrypted MAC IP. With UltraScale, AMD/Xilinx further obfuscates and abstracts even basic PHY DDR connectivity, making understanding what they do almost impossible. Who wants that?
    The Digilent Project Vault is an un-curated, hard to use mishmash of projects with source code, projects that need someone to debug the code, and inappropriate posts that belong somewhere else, so finding useful information there is difficult... but there are a number of projects with HDL code that might help bootstrap your project. Regardless of how you proceed, understanding basic serial communications concepts, and in particular Ethernet physical layer operation and basic packet types, is essential if you want to know how to take advantage of this interface. Ethernet is, by design, packet based, but you don't have to use standard packets to use it effectively for point to point communication.
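    To make that last point concrete, here's a hypothetical sketch, as Verilog parameters, of a minimal non-standard frame for point to point use. Only the 802.3 preamble and start-of-frame delimiter are kept so the receiving end can find the first bit; the fields after that are invented for illustration:

        // 7 preamble octets of 0x55 followed by the 0xD5 start-of-frame delimiter;
        // octets go out in this order, each octet LSB first on the wire.
        localparam [55:0] PREAMBLE = {7{8'h55}};
        localparam [7:0]  SFD      = 8'hD5;

        // After the SFD a custom point-to-point frame can carry anything, e.g.:
        //   [15:0] packet sequence number
        //   [15:0] payload length in octets ( keep packets a few KB or less )
        //   payload octets
        //   [31:0] optional CRC32 if you want error detection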

    Have you seen this? https://forum.digilent.com/topic/16802-ethernet-phy-test-tool/

    There are a few such projects of varying usefulness in the Project Vault. There are also HDL Ethernet projects with code on the Internet.
  9. Like
    zygot got a reaction from BMiller and silantyeved in Arty A7 100 Ethernet to LEDs Circuit   
    That's an easy question to answer. Everything is your responsibility. That's one reason why FPGA vendors make it so hard to use this interface; it pushes inexperienced FPGA developers and students into using soft-core processors and expensive ( in terms of resource utilization ) IP and bus structures that are incompatible with all other FPGA vendors' tools. There was a version of Quartus, a long time ago, that offered an RGMII-GMII IP with HDL source ( script generated but readable ). ISE provided basic PHY interface source that could, with some difficulty, be worked out.
    The first step to using an Ethernet PHY is to get the data interface working. GMII is an SDR ( one bit per pin per clock ) interface and appeared on older Digilent boards like the original Genesys and Atlys. RGMII is a DDR ( 2 bits per pin per clock ) interface that uses half the data pins; it's more common on current boards as it saves IO and is easier to route on a PCB. So, the first step is to understand how all of the PHY interfaces work. You can find this information in some PHY device datasheets ( Marvell is notorious for requiring an NDA to get complete datasheets but became popular because Xilinx boards used its parts ). The TI PHY datasheets describe the data interface pretty well.
    Now, there's a bit of an issue with PHY devices that support multiple data interfaces. The old Marvell 88E1111 PHY on the Atlys can do GMII, RGMII or SGMII. The bad news is that the device may come out of reset configured with the wrong data interface, and you might have to reprogram the control registers using the slow MDIO interface. The good news is that Digilent boards are set up to use the correct data interface at the maximum data rate. Which brings up the second issue. Ethernet PHY devices are backward compatible to support slower data rates. The PHY on your board only supports 10/100. The 88E1111 supports 10/100/1000. The same problem of how the device is configured out of reset applies to the data rate. 10BASE-T uses a different modulation scheme than 100BASE-TX or 1000BASE-T, and a different clock. Again, Ethernet PHYs on Digilent boards are configured to use the fastest data rate available for that PHY. There are other similar issues like auto-crossover, what data rates the PHY advertises, etc. These can all be changed via the MDIO interface.
    What I do is make using the Ethernet PHY as simple as possible. No MAC, only use the maximum data rate ( no clock switching ), minimal functionality in terms of standard packets, no fragmentation support, no re-sending bad packets, etc.
    So, let me offer a task list to consider to get you started.
    The first step is to get your FPGA design to talk to the PHY. For your board this means the data interface. The second Project Vault link that I posted earlier can help with that. You also need an HDL serial MDIO interface to configure the PHY registers, as you will see below.
    - Learn about the different PHY data interfaces and how your PHY is set up. Read and write the PHY control and status registers using the MDC and MDIO ( bi-directional ) pins ( see the sketch after this list ).
    - Create a data interface supported by how the PHY is connected on your board. You probably have to learn how to use the DDR IO capabilities of your FPGA. This is not trivial and I'd suggest a side project doing just that to get experience. DDR typically requires timing constraints. In trivial designs you can get away without using timing constraints; generally, advanced IO features require timing constraints like setup for inputs and delay for outputs relative to a clock. If the PHY clock period is much longer than FPGA routing and IO delays you might get away with pretending that timing constraints aren't needed. For 10BASE-T, perhaps. For 1000BASE-T, no. The more data bits in the interface, the more spread in clock-to-output timing between them.
    - Once you believe that you have a workable data interface it's time to try and send/receive data. Ethernet PHYs have a loop-back capability that is useful at this point. Whatever data your FPGA application sends to the PHY will be returned. This is a good way to test out just how robust your data interface is. Nothing happens on the cable end of the PHY, but the FPGA/PHY data interface can be extensively tested.
    - Once you are writing/reading PHY registers and reading the correct loop-back data from your PHY, you are ready to connect it to another Ethernet PHY. This could be a uController, another FPGA board, or a PC. The PC is a whole ball of wax as Ethernet is handled by the OS. My PC has 2 Ethernet PHYs; one uses DHCP for Internet use while the other uses a static IP address and is used to communicate with an FPGA board, Raspberry Pi, or other development board. It's always a direct cable connection because switches and routers present more complication and require more supported packet types.

    Here's a tip. You don't need standard packets to use an Ethernet PHY, but the PHY is a modem, so it needs timing information that has to be refreshed periodically. This means that you can't just send 1 MB through a PHY. You need to send packets of data of reasonable size, say <16 KB. Every packet has to start with the same synchronization preamble. This identifies the first data bit in the serial stream and keeps the modulation/demodulation working over time. After the preamble you can have all data, header fields and payload data, a CRC field or not... anything that is appropriate for your application. If your FPGA wants to talk to a standard Ethernet node, then your FPGA application will have to support a few standard packets. You don't need a processor to do this. Verilog or VHDL is more than capable.
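    For the register-access step in the list above, a bare-bones Clause 22 MDIO write could look something like this rough Verilog sketch ( my illustration, not the demo code mentioned elsewhere; the module and port names are invented, and a read operation would additionally need a tristate on the MDIO pin ):

        // Shifts out one IEEE 802.3 Clause 22 write frame on MDC/MDIO, MSB first:
        // 32-bit preamble, ST=01, OP=01 (write), 5-bit PHY address, 5-bit register
        // address, TA=10, then 16 data bits. Keep MDC at or below 2.5 MHz.
        module mdio_write #(
            parameter integer DIV = 20  // MDC half-period in clk cycles; 20 -> 2.5 MHz from 100 MHz (assumption)
        )(
            input  wire        clk,
            input  wire        start,      // one-cycle pulse launches a write
            input  wire [4:0]  phy_addr,
            input  wire [4:0]  reg_addr,
            input  wire [15:0] wr_data,
            output reg         mdc  = 1'b0,
            output reg         mdio = 1'b1,
            output reg         busy = 1'b0
        );
            reg [63:0] sr;     // the whole frame, shifted out MSB first
            reg [6:0]  nbits;
            reg [15:0] div;

            always @(posedge clk) begin
                if (!busy) begin
                    mdc <= 1'b0;
                    if (start) begin
                        sr    <= {32'hFFFF_FFFF, 2'b01, 2'b01, phy_addr, reg_addr, 2'b10, wr_data};
                        nbits <= 7'd64;
                        div   <= 16'd0;
                        busy  <= 1'b1;
                    end
                end else if (div != DIV - 1) begin
                    div <= div + 1'b1;
                end else begin
                    div <= 16'd0;
                    mdc <= ~mdc;
                    if (mdc) begin                    // MDC falling edge: safe to change MDIO
                        if (nbits == 0) begin
                            busy <= 1'b0;
                            mdio <= 1'b1;             // idle high
                        end else begin
                            {mdio, sr} <= {sr, 1'b0}; // present next bit; PHY samples it on the rising edge
                            nbits      <= nbits - 1'b1;
                        end
                    end
                end
            end
        endmodule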
    I learned all of this stuff the hard way. It took me a year or so to find and work out all of the surprises. If you want to proceed with this, be prepared to expend some time and energy on it, and don't expect to be an expert or complete a real world application any time soon. Don't make it a goal to connect your FPGA board to the Internet. Windows, Linux and other OSes can't keep my PCs safely connected to the Internet, so the chances of my FPGA doing that are pretty much zero.
     
  10. Like
    zygot got a reaction from JColvin in Programming S3 board   
    Your question is similar to one that I responded to a while ago: https://forum.digilent.com/topic/4784-s3-starter-board-programmer/

    You may not have this board but might find the project, which includes sources, interesting. With luck, maybe even useful.
  11. Like
    zygot got a reaction from Udayan Mallik in Fan   
    All Series 7 FPGA devices have access to the substrate temperature sensor via the XADC. For ZYNQ, getting the temperature is pretty straightforward.

    I highly recommend that users of all Series 7 FPGA devices incorporate temperature monitoring into their HW and SW designs as a matter of habit. It is possible to configure any Digilent board ( most FPGA development boards, for that matter ), heat sink or not, fan or not, with a design that will over-tax the power supply and thermal dissipation capabilities. Don't think that a fan or heat sink removes proper design tasks from your plate... they don't. Access to FPGA internal temperatures and voltages was put into the Series 7 devices for a reason. Except for trivial designs, depending on the platform, designers are responsible for managing out-of-spec conditions.
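    As an illustration of how little it takes, here's a rough Verilog sketch of polling the die temperature over the XADC's DRP ( my example, with invented signal names; with no INIT_xx overrides the XADC's default sequencer keeps the temperature status register at DRP address 0x00 updated ):

        wire [15:0] drp_do;
        wire        drdy;
        reg         den = 1'b0;
        reg  [15:0] temp_code = 16'd0;
        reg  [19:0] tick = 20'd0;

        XADC xadc_i (                  // default parameters -> default sequencer mode
            .DCLK      (clk),          // DRP clock
            .DADDR     (7'h00),        // 0x00 = temperature status register
            .DEN       (den),
            .DWE       (1'b0),
            .DI        (16'h0000),
            .DO        (drp_do),
            .DRDY      (drdy),
            .RESET     (1'b0),
            .CONVST    (1'b0),
            .CONVSTCLK (1'b0),
            .VP        (1'b0),
            .VN        (1'b0),
            .VAUXP     (16'h0000),
            .VAUXN     (16'h0000)
        );

        // Read the temperature roughly every 2^20 clocks; per UG480 the temperature
        // in degrees C is code[15:4] * 503.975 / 4096 - 273.15.
        always @(posedge clk) begin
            tick <= tick + 1'b1;
            den  <= (tick == 20'd0);   // one-cycle DRP read strobe
            if (drdy) temp_code <= drp_do;
        end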

    The Eclypse-Z7 suffers from a lot of poorly thought out SYZYGY DNA and house-keeping design decisions, the fan control being just one. Don't wait on a solution to important issues; address them for yourself while you wait for a convenient fix from your board vendor.
  12. Like
    zygot got a reaction from artvvb in ZMOD-ADC1410: Pipeline delay of AD9648   
    The AD9648 has a pipelined architecture by design. That's how it works.

    Generally, this type of ADC is continuously sampled, so latency is only an issue with respect to the first sample, and then only if there's some absolute time event in your system design that samples need to be referenced to. The pipeline delay is fixed ( e.g., a fixed N-cycle pipeline at sample rate Fs is a constant N/Fs delay ). As for phase delay, systems often have elements in the analog conditioning circuitry that have a bigger effect on phase lag and are frequency dependent. This is part of the design analysis for every specific use case when using an ADC.

    You should get the AD9648 datasheet before trying to use the device even if you are relying on a packaged FPGA HW scheme.

    I'm not understanding your perceived association between gain and phase latency.
  13. Like
    zygot got a reaction from Sando in Timebase accuracy, SFDR and RMS noise for USB scopes with waveform generators   
    A problem with Digilent's sampler/waveform generator products, in both the instrument and FPGA development platform lines, is the lack of an external clock input.

    For their SYZYGY products the lack of an external clock input is a mind-boggling blunder for most serious work.

    For low end instruments like the Analog Discovery line I suppose that a fixed time-base clock of unknown accuracy and stability is good enough for some applications. I'm not sure that the market that these products are geared toward even merits a high performance timebase option at a surcharge. I haven't tried working out the ramifications. No doubt the current timebase is adequate for the rest of the hardware.

    The specifications that you are looking for are not unreasonable for low to moderate instrumentation ( excluding the 2 ppm bit ).

    I'm not sure what you mean by SFDR and RMS noise as referenced to a clock source. Frequency accuracy, stability, phase noise, jitter, etc. are more common specifications for clock modules. I'm reasonably certain that all of Digilent's AD line uses a common clock module. Beyond the time-base specifications, certainly the ADC and DAC specifications are something to consider.

    Perhaps you could use the datasheets for the Eclypse-Z7 clock module and the ZMOD ADC and DAC devices as a guide if Digilent doesn't want to specify these things for its instrument product line. This would likely be a reasonable guesstimate.

    The nice thing about the ZMODs and, by inference, the more expensive AD instruments, is that they are geared toward low end "scope/waveform generation" applications, and at least initially had great documentation. The worst limitation of the Digilent products that use them, or versions of them, is the lack of an external clock reference.

    There's not a lot of selection out there for making your own ADC/DAC FPGA based instrument. I've used the Terasic DCC. It has clock inputs to accommodate any time-base requirement. The ADC and DAC Fs is higher than the ZMODs'. Unfortunately, the converters connect to wide-band transformers, so DC and low frequency performance isn't so good. This depends on your application of course. The DCC and the Cyclone V GX Starter board make for a nice alternative to the ZMODs and the available SYZYGY FPGA base-boards, if you don't need the convenient features that the ZMODs supply.

    The problem with ADC/DAC applications is that there just isn't a small set of criteria sufficient to know if the hardware is up to the task required by the measurement. Making general purpose ADC hardware is almost guaranteed to eliminate a range of potential applications.
  14. Like
    zygot got a reaction from yunwei in Use the PLL output 100MHz on CMOD A7   
    If you set up a PLL to have a 1X input/output frequency ratio you should expect to see that, which is what you report. Try using 12 MHz as the input clock frequency.

    It's true that the datasheet states that the maximum PLL_Fmaxout is 800 MHz for all Artix devices and all speed grades. That doesn't necessarily mean that you can generate such a frequency from a 12 MHz input frequency. There are other parameters that limit MMCM and PLL output frequencies, such as PLL_Fvcomax. Let's say that you could generate an 800 MHz PLL output clock; what are you going to do with it? Fmax_bufg is 464 MHz for the -1 part. Seeing evidence of such a clock on an output pin that you could measure would be unlikely, even with very expensive FET probes and a high end scope. You might have some luck doing this by inference using a divided down version of the PLL clock.

    The Vivado Clocking Wizard is useful for limiting expectations to something realistic. Closing timing on a design that uses a very high speed global clock near the maximum might be challenging.

    The maximum OSERDES toggle frequency for your device tops out at 950 Mbps in DDR mode, implying a 475 MHz BUFIO clock, which is within AC specifications but too high for any global clock buffer.
  15. Like
    zygot got a reaction from yunwei in Use the PLL output 100MHz on CMOD A7   
    You can easily create 100 and 50 MHz clock sources from the 12 MHz external clock input on the CMOD A7 boards. Just specify 12 MHz as the input and whatever frequency you want as the output(s) of your PLL or MMCM instantiation using the Vivado Clocking Wizard. The CMOD-A7 has quite a few clock-capable pins available if you want to supply an external clock source, and you can use that external clock as an input to a PLL or MMCM.
    I suggest that most users take advantage of the Clocking Wizard for creating global clocks of arbitrary frequency because it takes into consideration the quality of the output clock with regard to jitter. All derived clocks suffer from jitter when generated by FPGA PLL and MMCM hardware. You can specify a limit to this jitter in the Wizard. What this means in practical terms is that if you have, say, a 10 MHz input clock connected to your PLL and want an 11.11105 MHz clock output, the wizard will supply an output as close as possible to that specification within the limits of the jitter specification you supply... or possibly let you know that it can't provide a clock to your specification. The wizard also has a drop down list of potential output clock frequencies that you might want to consider if that's convenient. If a clock output frequency can be defined as an integer multiple and integer divisor of an input frequency while meeting the VCO specifications for your device, then none of this is a problem. Common clock frequencies for things like video and communications are generally very odd frequencies, so this is where problems occur.
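    For reference, here's a rough sketch of the kind of MMCM instantiation the Clocking Wizard wraps for the 12 MHz CMOD A7 input ( my example; clk12 and the other signal names are invented, and the multiply/divide values should be sanity-checked against the datasheet VCO limits for your speed grade ):

        wire clk_fb, clk100_unbuf, clk50_unbuf, locked;
        wire clk100, clk50;

        // VCO = 12 MHz * 62.5 = 750 MHz, inside the 600-1200 MHz MMCM window for a -1 part.
        MMCME2_BASE #(
            .CLKIN1_PERIOD    (83.333),   // 12 MHz input
            .DIVCLK_DIVIDE    (1),
            .CLKFBOUT_MULT_F  (62.5),     // sets the VCO frequency
            .CLKOUT0_DIVIDE_F (7.5),      // 750 / 7.5 = 100 MHz
            .CLKOUT1_DIVIDE   (15)        // 750 / 15  = 50 MHz
        ) mmcm_i (
            .CLKIN1   (clk12),
            .CLKFBIN  (clk_fb),
            .CLKFBOUT (clk_fb),
            .CLKOUT0  (clk100_unbuf),
            .CLKOUT1  (clk50_unbuf),
            .LOCKED   (locked),
            .RST      (1'b0),
            .PWRDWN   (1'b0)
        );

        // Derived clocks belong on global buffers before they clock any logic.
        BUFG bufg_100 (.I(clk100_unbuf), .O(clk100));
        BUFG bufg_50  (.I(clk50_unbuf),  .O(clk50));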
  16. Like
    zygot got a reaction from Udayan Mallik in Xilinx License   
    You are correct. You can use the free version of the tools and there are no IP licenses required to develop for that platform.
  17. Like
    zygot got a reaction from JColvin in Basys: Buttons btn{RLUD} not valid in sensitivity list   
    Sometimes it's really hard to differentiate between questions that are classwork and questions from people just wanting to learn modern programmable logic development. Your question was hard to judge, so I tried to provide clear guidance without presenting specific instructions. I really have no idea what your prof might be looking for as an answer for an introductory class, but hopefully I threaded the needle appropriately.

    BTW, it's also possible to de-bounce mechanical button states using resistors and capacitors in lieu of digital counters as a time delay. It's still logic... just not binary logic. Before programmable logic, before CMOS and TTL LSI and MSI gate level logic devices there was RTL which used discrete transistors, resistors, capacitors, and inductors to do digital design. The end result might have been a 1 or 0, off or on, but what was between the input and output was analog design. In some ways, nothing has changed except that the complicated stuff has been abstracted away from the designer.

    Even simple things turn out to be more complex than one might assume if you don't have experience using them.
  18. Like
    zygot got a reaction from ButtonUp in Basys: Buttons btn{RLUD} not valid in sensitivity list   
    Mechanical buttons and switches are notorious for "contact bounce". It really doesn't matter how you use the logic state of such an input if you don't account for this.

    Neither of your attempts is ideal, but the second one is on the right path. Nonetheless, it would be worth your while to understand what the tools are telling you about how they understand your Verilog code... and how you understand Verilog sensitivity lists.

    There are two approaches to solving your problem.
    - You can create a "de-bounced" version of the button input signal that is guaranteed to have only one leading or trailing edge.
    - You can use a state machine to work off of the first logic transition and wait until enough time has passed before being sensitive to the next edge.

    How long it takes for a button input signal to be conditioned into a usable form obviously depends on how the user presses and then releases the button. The problem isn't too hard if you use common sense about how you want it to work. For a user interface you might want to ensure 500 ms between button presses, but of course this limits how you can use a button. Regardless, you are constrained by the behavior of the mechanical device. So, the best idea might be to use buttons and switches as initiators of logic sequences rather than as an enable for counting.

    One way to de-bounce a button is to use a global clock to detect the first transition from the quiescent state to one that indicates a button press. You can create a single clock-wide pulse as your initiator signal. You'll also want to kick off a counter that ignores further input transitions until it's unlikely that contact bounce is a possibility. Assume that this is 10's or 100's of milliseconds.

    Consider that a button press isn't just one event. The user has to initiate opening or closing of a contact switch, and then at some later time release the button and initiate the same thing in reverse order. So every button press has two contact bounce periods. Users can always cause design issues if your design doesn't restrict potential user behavior with signal conditioning.
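    Here's a rough Verilog sketch combining these ideas: a synchronizer, a one-clock initiator pulse on the first edge, and hold-off counters that swallow both the press and release bounce windows ( my illustration, with invented names; tune HOLDOFF to your clock and buttons ):

        module debounce #(
            parameter integer HOLDOFF = 2_500_000   // ~25 ms at 100 MHz (assumption)
        )(
            input  wire clk,
            input  wire btn_raw,            // asynchronous, bouncy button input
            output reg  btn_pulse = 1'b0    // one-clock pulse per button press
        );
            localparam [1:0] IDLE = 2'd0, PRESS_HOLD = 2'd1,
                             WAIT_RELEASE = 2'd2, RELEASE_HOLD = 2'd3;

            reg [1:0]  sync  = 2'b00;       // 2-FF synchronizer for the async input
            reg [1:0]  state = IDLE;
            reg [31:0] cnt   = 32'd0;

            always @(posedge clk) begin
                sync      <= {sync[0], btn_raw};
                btn_pulse <= 1'b0;
                case (state)
                    IDLE:                       // first edge of the press is the initiator
                        if (sync[1]) begin
                            btn_pulse <= 1'b1;
                            cnt       <= HOLDOFF;
                            state     <= PRESS_HOLD;
                        end
                    PRESS_HOLD:                 // ignore the press bounce window
                        if (cnt != 0) cnt <= cnt - 1;
                        else state <= WAIT_RELEASE;
                    WAIT_RELEASE:               // button held down; wait for release
                        if (!sync[1]) begin
                            cnt   <= HOLDOFF;
                            state <= RELEASE_HOLD;
                        end
                    RELEASE_HOLD:               // ignore the release bounce window too
                        if (cnt != 0) cnt <= cnt - 1;
                        else state <= IDLE;
                endcase
            end
        endmodule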
  19. Like
    zygot got a reaction from Tony V in Actual Max Protected Voltage Input for Digital Discovery   
    Any signal input to an FPGA that goes below 0V, like true RS-232, is incompatible with FPGA pin DC specifications. If you add an interface that converts these to one of the 3.3V single-ended IOSTANDARD compatible logic levels, then you are good to go. Don't drive FPGA pins below ground. There are protection diodes in the FPGA device, but these are not meant to counter widely out-of-specification input voltage levels. Read the datasheet for the Artix family to see what the DC specifications are. I haven't looked at the Digital Discovery so I don't know what design information is available. 0-3.3V would seem to be a safe bet for inputs to the product, as overshoot is likely for some signal sources.

    I'm not sure that the Digital Discovery product was designed to compete with the other products that you mention. None of these products are meant to replace those clunky, expensive logic analyzers from the big instrument vendors.

    If you really want something to analyze RS-232 or RS-485 traffic there are likely cheap products to do that. Really, you could turn any cheap FPGA board with external memory into such an analyzer. If the board has an FT232H or similar device and supports the synchronous 245 FIFO mode, then even better. You'd still have to make an adapter to convert those signals into FPGA friendly logic levels. You'd then have an instrument that you can tweak to provide just about any analysis you could ever want. I did this a long time ago, so I know that it's possible and doesn't require any exotic programmable logic design expertise. It's also a good educational project.

    If you just want a ready made inexpensive instrument the AD1 or AD2 is hard to beat. You still need to condition signal inputs to appropriate levels.
  20. Like
    zygot got a reaction from Prabhat.kumar in DDR3 memory interface with Zybo Board   
    For a PHY clock of 533 MHz, 1066 Mbps is correct in terms of the data rate per DQ pin. 1066 MT/s is also correct. The actual data rate depends on the width of the DQ data bus, so neither the Mbps nor the MT/s performance specification has much meaning without knowing the latter ( for example, a 32-bit DQ bus at 1066 MT/s has a peak rate of 1066M x 4 bytes, about 4.3 GB/s ).

    I'm not sure what you mean by "exploring the SDRAM available on the board". It's possible to change the memory controller setting registers but this is not recommended. In general PL designs can DMA data to and from the PS external memory, or internal PS memory using the PS AXI infrastructure.Of course these memories can be used by applications running in the PS ARM cores. If both the ARM cores and the PL are sharing a memory resource there will be performance penalties.

    Reading the board Reference Manual is great. Before trying to understand what you can do with your board, you should read all of the relevant reference material associated with the ZYNQ FPGA device on your board, as well as the documents related to using the programmable logic in its PL.
  21. Like
    zygot got a reaction from YuanhuiHuang in Zynq - When does it become useful?   
    My opinion is that someone just starting out learning about FPGA development should start with a board just like yours. FPGA devices with an embedded hard ARM complex just add to the complexity... and there's plenty of complexity for a beginner in programmable logic to master.

    A very small percentage of my projects involve a processor in the FPGA, whether a soft processor like MicroBlaze or a hard processor like ARM. At some point you need to be able to create your own IP and designs that are FPGA vendor agnostic. And even if you are targeting a ZYNQ or the like, you will need good logic design experience. Trust me, the Arty can keep you busy for a very long time honing those skills.

    I think that a lot of people ( especially those who work for ARM Holdings ) see the ZYNQ as a microprocessor with some FPGA logic attached. I see it as an FPGA with an attached, relatively high performance microprocessor. A lot of my projects are just that... one or more FPGA devices that communicate with a PC or SBC via PCIe or USB 3.0. This can usually provide more processing horsepower and flexibility than an ARM based FPGA. Software development is easier as well. My ZYNQ projects usually require two software projects; one for ARM and one for a PC. If you are building a small embedded system then ZYNQ is a pretty nice way to go.

    Put off the ZYNQ platform until you have mastered logic design verification and debugging. When you need to use an ARM-based device you will be in a much better place to do so effectively.
  22. Like
    zygot got a reaction from BMiller in ARTY A7-100 Need to use external clock   
    I (almost) always specify a device part number rather than a board in the project settings because I use an HDL design flow rather than the IPI design flow. I mention this because I do run into this situation for larger designs using multiple clock domains sourced by unrelated external clocks. Being unaware of all of the constraints in a design can indeed cause problems. Your observation that the MMCM placement would be correct for an external clock pin assignment coming from the board file sources is more than interesting and worth tracking down. I figure that one needs to be familiar with the quirks and habits of one's team mates if one wants to win games. In the case of FPGA vendor tools it's not always clear that they are on the same team as I am. I decided a long time ago that letting the tools be in charge was not in my best interests, even if that meant more work for me. I keep coming across reasons to continue with that strategy every once in a while. New tool versions introduce new bugs and odd behaviors, as documentation doesn't keep up with syntax or database changes and updating scripts gets overlooked. The trend with the big FPGA vendors is to 'encourage' ( or force ) users into letting the tools manage the details of a project, so I find myself fighting with the tools even more than previously.

    You'd think that the tools would find a problem with an initial placement strategy that would lead to a hard error early in the analysis phase and terminate processing well before going through a complete analysis and implementation run. You'd think that making a 'pre-processor' that identifies confusing source constraints very early in the operation of the tools and notifies the user wouldn't be that hard to do. No doubt, companies that run the tools from home-grown scripts rather than the GUI interface do this as finding some problems before going through the algorithm crunching phase should save a lot of time.

    FPGA vendor tools don't reveal all of the 'tricks' behind the curtain. Sometimes this causes consternation and mysterious, seemingly bizarre behavior. Either you investigate or just move on. In general, modern FPGA devices are so good that most designs never need that much hand holding in terms of timing closure. It might be wise to just put off worrying about having designs that are 'non-optimal' until timing closure becomes a problem... or not. I tend to stick with older tool devils ( versions ) that I know unless there's a compelling reason not to, so spending the time to resolve such quirks makes more sense. When I do need to use a newer version of the tools there tends to be a limit on how much time I'm willing to spend fighting the tools versus how much time I have no other choice about.

    [edit] It might be an easy and simple experiment to create a separate project, using the same sources, but targeting the proper device rather than your board. I do similar sanity checks when confronted with odd issues that seem to be very weird. A/B testing is a tried and true form of debugging, particularly when you are debugging the tools. More often, I find myself doing this with a different version of the tools on a different PC host to help expose release bugs and quirks. This works best with HDL designs of course, as breaking old IP is a constant feature of new tools. I'm very cautious of letting the tools know more than they need to about my designs and platform because what exactly they do with board files isn't well documented or transparent. I do know, based on experience, that the same IP, when used in native mode for the HDL design flow, doesn't get implemented the same way or work the same way as when used in the IPI flow... at least for the few times that I've tried doing this to test out new FMC mezzanine cards. Generally, the time saving and detail hiding options of FPGA vendor tools end up giving me more problems than any imagined benefit. I do recommend that no one ever try and run two instances of the tool on the same host, as this is a good way to expose some really bad coding practices on the part of the tool developers, in particular for Vivado.
  23. Like
    zygot got a reaction from jb9631 in A custom DDR3 controller for the S7-50 board   
    How's that ditty go... "eyes wide open"...? In a sense the challenge turns out to have nothing to do with external memory controllers. There are things that you can learn from textbooks. There are things that you can learn in school. There's a lot more that you can learn from old battle worn engineers who've long fought the information wars with suppliers who claim to be your company's "partner". Product support doled out in tiers according to how important your company is viewed in terms of your vendor's market objectives predates programmable logic by decades. It's possible to live an entire career in the magical world where electronic components work as expected and life is easy. If the components are high performance or complex, it won't take long for many engineers to find themselves at the mercy of a vendor who isn't forthcoming with information vital to a project's success, because you happen to be insignificant to their market objectives. Anyway, without getting too expansive on a subject near and dear to my experiences, let's just say that if you want to do extraordinary things with complex silicon devices, you had better be prepared to find a way around unexpected obstacles thrown in your way. There are not many silicon devices as complicated as modern FPGAs and their joined-at-the-hip siblings, the tools. Understanding that what you see ( or even read ) is not exactly what you get is an important part of this obstacle evasion skill set. Don't get discouraged... sometimes you get lucky ( I once worked for a very small startup that the big boys, more accurately someone working for them, thought had future market interest, and saw first hand an elevation in information tier rating ), and sometimes you just lose a game in the competition between vendor and customer. There's almost always a path to getting to where you need to be though... the important part is how long it takes you to get there.
    From what little you've posted about yourself I'm guessing that you are no grizzled old-timer... but you do seem to have the hard won wisdom of one. I've really enjoyed your posts and perspective... perhaps with more than a few winces of empathetic pain that resonates loudly. Extraordinary... I see good things in your future. Keep asking questions. For everyone's benefit, keep posting the journey.
           
  24. Like
    zygot got a reaction from jb9631 in A custom DDR3 controller for the S7-50 board   
    Nice work and good presentation. I whole-heartedly applaud such efforts, especially when they are published with useful citations. Making it easier to find similar efforts encourages like-minded experimentation. Beyond the very practical benefit of a self-directed educational exercise, being able to eschew vendor IP in favor of HDL sources that you understand is an invaluable asset. In some cases, vendors simply don't care about making their IP usable, or don't want to highlight faulty designs in their hard memory controllers... so going your own way is a necessity. I've run into just this scenario for a board using a Cyclone V part and LPDDR2. The current tools specifically support the board, but the IP is completely useless for a user wanting a high performance LPDDR2 design. The IP requires scripts that don't work on Windows, the hard external controller doesn't behave as the sorely incomplete documentation suggests that it should, etc. etc. I am unaware of any published design example that can be replicated demonstrating that the board is capable of burst read or write operation. The effective result is that a board that should be perfectly well suited to a large range of project implementations is rendered unusable for many of them, because the external memory can't perform as advertised using the vendor's tools.

    External memory IP isn't the only functionality that FPGA vendors use to compete with their customers. Ethernet, transceivers, and just about any other high performance interface are also examples where users might find it useful, or necessary, to develop their own IP in lieu of the vendor's offerings.