Everything posted by zygot

  1. I've been using Digilent's Adept Utility for Windows to configure FPGA devices for a long, long time. It's easier and more convenient than any of the Xilinx configuration tools have been through the years, and especially more convenient than Vivado Hardware Manager if you have more than one device to configure at once. As long as it supports your device it's great. If it doesn't support your device, you can (usually) modify one text file to add it; I've done this. For Linux users there is no such GUI application, but Linux users would rather type than demean themselves with a GUI application... or so I've been told. I do a lot of FPGA development on Linux hosts, but as my brain has trouble herding the cats, er, fingers, I generally just use the FPGA vendor tools. So, I'm not a Linux aficionado, just someone who sometimes needs to get away from Windows.
  2. Say what??? If I'm reading your post correctly, what you want to do is disconnect the on-board power supplies using JP3 and drive the outputs with your own external power supplies using the test point pins... correct? Even if this were possible you'd have to adhere to the power supply sequencing requirements for your device. You'd have to disconnect the on-board power supply IC outputs from the board. You'd have to confirm proper supply startup operation. You're making a lot of assumptions about the board layout and connections, as well as the current-carrying capability of the test points and pins. The Series 7 documentation covers device power supply design; that's where to start with such a project. Using a board like the Nexys A7 for such an experiment is not a good idea. You might be able to find an FPGA module with connections for external power supplies, but even then there are a lot of details to resolve. Test points are for observing, not for replacing circuitry.
  3. Of course. You have the Digilent AXI controller source as a guide. With some effort you can create a different AXI controller to control both ZMODs. This will more than likely affect your software sources as well. There's no hardware constraint preventing the Eclypse-Z7 from implementing 4 ADC channels. While I've used the Eclypse-Z7 for 4 ADC channel projects, I've never been inclined to slog through the bad documentation and source code to try and cobble together a workable mod to do that; someone else might have a different opinion on this. A more straightforward solution might be to use a different platform. You can have 2 ADC-1410 ZMODs on the XEM7320. Since this isn't a ZYNQ based board, the only software is for the USB 3.0 PC interface. Opal Kelly has sufficient examples and documentation to make this "not too painful". You can still use the Digilent low level ADC-1410 controller source if you want to simplify your design problem. At least there's no Xilinx ARM software involved, or AXI related IP to deal with. I've done this so I know that it is a viable solution. Another option, which I've done as one of my many experiments with the Eclypse-Z7, is to create your own 64-bit AXI interface using one of the free AXI based IP provided with Vivado. There are a number of options supporting AXI DMA streaming with easier to understand data interfaces. The Z7020 can do up to 1200 MB/s DMA between the PL and PS controlled external memory using 64-bit data busses. I don't know of reasonably priced ADC or DAC alternatives to the ZMOD product line paired with a good FPGA board (I wouldn't put the Eclypse-Z7 in that category for a number of reasons) that has a couple of SYZYGY ports. It sure would be nice if Digilent offered a good non-ZYNQ FPGA platform to go with the ZMODs. Surely, even a good FMC add-on board with 4 standard SYZYGY ports and one SYZYGY transceiver port would be a good seller... reserve a couple for me. Someone's going to do it if Digilent doesn't want to support its own product line because it would compete with their expensive instrument product line.
  4. I had supplied a link to this project: https://forum.digilent.com/topic/22512-manipulate-pl-logic-using-ps-registers/ Unfortunately, I didn't have my scripting blocker active and Digilent's website didn't allow me to choose how the link would be displayed after copying it to the post. I can't see the link above when I have scripting blocked in my browser, as I do now. Anyway, look it over. It might provide some clues. Understand that UltraScale might be different than Z7000 in terms of EMIO signals and software. The Xilinx software tools always have bugs. Some are not too difficult to work out and others are very time consuming to resolve. Cost of doing business. Just be careful that what the tools are doing is what you intend. Sometimes you just have to create a special experimental project to explore the root cause of unexplained issues. I do this all the time. In the end the extra effort usually saves time because the experimental project can be stripped down to something much simpler. Of course you need access to hardware to do this. I use a UART interface for all of my projects, even if only for debugging. I have 4-5 TTL USB UART cables and breakout boards, and usually they are all connected to some board. All you need are 2 spare GPIO connector pins and a DGND pin and off you go. BTW the tutorial referenced above has a generic UART that works well and is simple to understand, so I include it in my published projects as a source. Since you are using a Linux based software environment there are a number of extra steps to consider when adding user mode hardware; a rough sketch of one way to poke at PL registers from Linux user space follows below. Personally, I prefer to test hardware interfaces in as simple a fashion as possible. You need to work in a way that suits you... but consider alternatives when brute force isn't working.
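For reference, here's a minimal sketch of the classic /dev/mem approach to touching a memory-mapped PL register from Linux user space. The base address used here (0x43C00000) is purely a placeholder; use whatever your Vivado address editor reports, and note that a UIO or proper kernel driver is the more robust route for anything beyond a quick experiment.

```c
/* Minimal user-space peek/poke sketch, assuming a PL AXI slave mapped at a
 * hypothetical base address; run as root on the target. Not a substitute
 * for a real driver, just a way to sanity-check that the hardware responds. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PL_BASE  0x43C00000u   /* placeholder: check the Vivado address editor */
#define MAP_SIZE 0x1000u       /* map one 4 KB page */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = (volatile uint32_t *)mmap(NULL, MAP_SIZE,
        PROT_READ | PROT_WRITE, MAP_SHARED, fd, PL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[0] = 0xA5A5A5A5u;                              /* write register 0 */
    printf("reg0 reads back 0x%08X\n", (unsigned)regs[0]);

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}
```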
  5. I've used the EMIO to connect PL logic or pins to a spare PS UART on many occasions. I've never used any of the UART self-test software example projects. A better test is to implement a UART in logic and do your own test. Once that passes you can connect the UART EMIO signals to external pins. The EMIO UART interface might be more complicated than you are expecting. Also, sometimes the software tools confuse UART0 and UART1, so make sure that your software project is connected to the correct PS UART instance. You have to be careful with the software examples in the tools. Many times they require a particular hardware loopback or other arrangement. I don't remember well enough to say for sure, but I think that the PS UART has an internal loopback capability that an EMIO UART wouldn't have, unless you designed a UART to emulate that. A simple test could be to connect the Tx and Rx signals in logic for your design; a minimal sketch using the internal loopback follows below. Personally, I prefer to just connect this type of PS UART via EMIO directly to pins and use a 3.3V TTL USB UART cable or breakout board to test it from a PC. You can always add an ILA to troubleshoot problems, though I don't recall ever needing to do that. All of the issues that I recall facing are related to the commentary in this post. I'm 90+% sure that your problem is related to these comments and not the board or tools.
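Here's a bare-metal sketch of that kind of check, assuming the Xilinx standalone xuartps driver; the device ID macro name and the choice of UART1 are assumptions that depend on your BSP and on which controller you actually routed through EMIO.

```c
/* Standalone sketch: exercise a specific PS UART in local loopback mode,
 * which tests the controller itself before the EMIO routing is involved.
 * XPAR_XUARTPS_1_DEVICE_ID is a guess at the BSP macro for UART1. */
#include "xparameters.h"
#include "xstatus.h"
#include "xuartps.h"

int test_ps_uart_loopback(void)
{
    XUartPs uart;
    XUartPs_Config *cfg = XUartPs_LookupConfig(XPAR_XUARTPS_1_DEVICE_ID);
    if (cfg == NULL ||
        XUartPs_CfgInitialize(&uart, cfg, cfg->BaseAddress) != XST_SUCCESS)
        return -1;

    XUartPs_SetBaudRate(&uart, 115200);
    XUartPs_SetOperMode(&uart, XUARTPS_OPER_MODE_LOCAL_LOOP);

    u8 tx[] = "ping";
    u8 rx[sizeof(tx)] = {0};
    XUartPs_Send(&uart, tx, sizeof(tx));

    u32 got = 0;
    while (got < sizeof(tx))                 /* poll until it all loops back */
        got += XUartPs_Recv(&uart, rx + got, sizeof(tx) - got);

    XUartPs_SetOperMode(&uart, XUARTPS_OPER_MODE_NORMAL);
    return (rx[0] == 'p' && rx[3] == 'g') ? 0 : -1;
}
```

Once the loopback passes, leave the controller in normal mode and repeat the exchange through the EMIO-connected pins with a TTL USB UART cable on the other end.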
  6. What you have is a logic standard compatibility problem, not an FPGA problem. If I were to consider connecting a sensor to an FPGA I'd never think about a direct connection to an FPGA pin, unless, by some miracle, the sensor had a standard logic output signal compatible with one of the FPGA supported standards for the IO bank Vccio driving the pin, and I knew that the signal NEVER exceeded specification. I'd create a simple circuit to take in the sensor output signal and convert it to one suitable for the FPGA device. In the old days programmable logic supported Schmitt-trigger type inputs, which are nice when an external signal doesn't exceed the maximum specified input levels of the FPGA IOSTANDARD but doesn't necessarily meet the minimum voltage levels either. There are some analog comparator devices available that are fast enough for your needs (I suppose, since I don't have enough information to say for sure). Anyway, it should (might??) be easy to create a simple PCB that connects to your DE2 and makes the new sensors compatible. You can find out what logic types any particular FPGA family supports; for AMD/Xilinx Series 7 devices UG471 is a good place to start. The details of the specifications are in the datasheet for the device family you might be interested in. Traditionally, some logic families have a decision point about midway between the highest voltage that is considered a logic 0 and the lowest voltage that is considered a logic 1. Some families, notably TTL, have a min-max range for logic 0 and another one for logic 1, and a zone between them that is undefined. If the connection between an external logic driver and an FPGA pin has poor transmission line signal integrity characteristics for that driver then you can easily see overshoot and undershoot, which is when the input pin sees voltages too far above Vccio or too far below GND. This is a good enough reason for having a logic level conversion circuit, if protecting the FPGA device isn't a good enough one already. FPGA devices support a wide range of logic families, but they aren't designed to accommodate signals that don't conform to specific behavior. The voltage that powers the IO bank for a particular FPGA pin will restrict the choice of logic even if the external signal is within specifications for a particular logic type. What I'm trying to say is that you probably want to condition any external signals before driving an FPGA input pin regardless of the family or device. You'll need to consult the FPGA board vendor schematic to see what voltage powers the IO bank for a particular FPGA pin or connector pin. I really don't understand what a 20 ns coincidence window actually means, particularly for 15 ns wide pulses, but there are logic devices, or analog comparators with logic outputs, that are fast enough... though there's quite a bit of detail work to go through to be sure. I'd think that it's quite possible to miss detection or have false detection simply due to ill-conditioned logic input signals for such an application. How important that might be is beyond my speculation. [edit] Designing programmable logic, or an interface connecting multiple unrelated sensors to an FPGA, is a lot more complicated than many people would assume. A big problem is meta-stability, which can result when you try to clock a signal whose transition between states is in the vicinity, in time, of the clock edge being used.
Your sensor is kind of a digital version of the nested doll problem in that it has the same meta-stability possibility. Meta-stability isn't something that you can avoid when dealing with lots of unrelated clocks or signals that transition between states at random times unrelated to clock edges. You have to detect, account for, and accommodate meta-stability. There are certainly digital design best practices for mitigating the phenomenon, though.
  7. Good to know that iperf2 works with Win10. I'm still confused by the "Ethernet 0 enabled in PL" reference. What do you mean by that? If you are running an iperf server on your ZYBO then you must be using the PS GEM connected to the RJ-45 jack on your board. That has nothing to do with the PL. The ZYBO Z7 reference manual is a bit confusing with regard to Ethernet connectivity. It mentions how the timing is set up for the MIO bank PHY RGMII interface but doesn't mention if changing the PHY settings is a requirement for operation. Regardless, there's no capability for using GEM1 with a PL connected PHY on that board.
  8. The EMIO allows one to route GMII Ethernet signals from the PL to one of the 2 GEM interfaces in the PS. In order to do that you need an Ethernet PHY connected to PL IO pins. I'm not sure how you could do that with your board. The EMIO doesn't work the other way around; that is, you can't route PS MIO pins to the PL. Using any recent version of Windows for Ethernet connectivity to development boards can be a painful experience. The fact that Win10 requires you to install an iperf3 client application is generally a sign of impending troubles. My Win10 box has 2 Ethernet ports. I use one for keeping the OS up to date and rare internet connectivity, and the other is set up with a static IP address. I use the static port to communicate with FPGA boards. I do remember trying to do what you are trying to do a while back. The only thing that I found was a test of the ZC702 PS Ethernet running the Standalone iperf server application. I too installed the iperf3 for Windows x64 application and had no success. I did manage to run an iperf client on Centos6, however, with success. For that test I used a USB 3.0 Ethernet dongle assigned a static IP. According to my notes this was the Centos6 command to the iperf client: iperf -c 192.168.1.10 -i -t 20 -u -b 1G -B 192.168.1.20 The first IP address is what the Z7020 Ethernet was using and the second is the Centos Ethernet static address. If you have multiple Ethernet ports on your PC you must specify which port to use with the iperf client. The first thing that anyone wanting to connect something to their computer via Ethernet should do is make sure that the 2 nodes are talking. This is best done in a terminal window or Windows command window using ping. If the addresses aren't compatible you usually get a message about the target address being 'unreachable'. Sometimes this can be resolved by correctly setting the address mask. Based on what I know so far though, I'd say that your biggest problem is no Ethernet connectivity in your ZYNQ platform. You might want to try building the iperf client or server application for your PS connected Ethernet port.
  9. I have some thoughts that you can feel free to ignore if you want. The first thing that anyone embarking on a project should do is decide on an appropriate platform for meeting the goals of the project. I have no reason to believe that you can't succeed in implementing some portion of "speech processing", however you might care to define that in real terms. Choosing one of the least powerful ZYNQ devices in terms of FPGA resources and ARM processing power might be something to evaluate. A lot of this depends on the audio signal sample rate, sample storage requirements, whether you want to do real time signal processing or post capture processing, etc. Most of the projects that I do involve some calculation and preliminary experimental projects to make a basic assessment. Of course available hardware interfaces like an audio CODEC make a big difference. Fortunately, you don't actually have to possess hardware in order to do this level of preparation. The current tools allow you to target just about any ZYNQ processor to test out investigatory projects designed to provide a sense of what a particular platform's capabilities are. The next important question, for AMD/Xilinx development, is what is the best tool to use. I don't have any real world experience with HLS so I can't comment. A lot of the calculus in deciding to use HLS or the regular version of the tools is how experienced you are with logic development, and what kinds of resources you want to bootstrap off of to get started. All I want to say about that is that assuming a similar project implemented on one platform can be ported to any other is probably not a good idea. Here are a few thoughts to consider, or not, if they don't make sense to you. First, start off with one or two narrowly defined processing algorithms to implement and prototype them on a PC, using whatever tools you are comfortable with and its audio interfaces. Second, if you don't have extensive FPGA development experience, consider alternate approaches. For instance a cheap FPGA board without ARM processors can implement what programmable logic is really good at, such as custom interfaces. You can connect an FPGA to a Raspberry Pi 3 or 4 through one or two SPI interfaces using DMA; 3-4 MB/s per SPI interface is a reasonable goal. The idea is to leverage software tools and libraries available on the RPi to simplify achieving the initial project goals. You can always build on small successes to get to a final goal. Often, the shortest route to achieving a goal that involves a lot of complexity is not a straight line. Third, it might be easier to get started by skipping most of the FPGA development flow if you are just learning it. Learning how to do programmable logic design while trying to implement a complex design is not for most people. If you want to use an FPGA board to capture 9600 Hz 16-bit audio from a CODEC, a UART at 921600 baud might be sufficient, though you might want to send two ASCII characters per byte, one per hex nibble (see the quick throughput check below). There are likely better alternatives, but I'm just throwing out ideas. My point is that it might be better to implement the project using your favorite language on a PC first and then try to port it to an embedded platform later. Too many variables spoil the soup... to mangle more than one phrase of wisdom. It's hard to say from your post if you have well defined specific way-points and goals in mind or just want to dive into something that isn't well defined.
Hoping to replicate something that you believe has been done, without knowing the details of the hardware and software development flow or the specific pieces involved, is hard to pull off.
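As an aside, here's the kind of quick throughput check I mean for the UART capture idea; the 8N1 framing and hex encoding are assumptions, so adjust the numbers to whatever you actually use.

```c
/* Back-of-the-envelope check: can a 921600 baud UART keep up with
 * 9600 samples/s of 16-bit audio sent as ASCII hex (4 characters per
 * sample, 10 bits per character with 8N1 framing)? */
#include <stdio.h>

int main(void)
{
    const double sample_rate    = 9600.0;    /* samples per second */
    const double chars_per_samp = 4.0;       /* 16 bits -> 4 hex characters */
    const double bits_per_char  = 10.0;      /* start + 8 data + stop */
    const double baud           = 921600.0;

    double required = sample_rate * chars_per_samp * bits_per_char;
    printf("need %.0f of %.0f baud (%.1f%% of the link)\n",
           required, baud, 100.0 * required / baud);
    return 0;
}
```

That works out to 384000 baud, roughly 42% of the link, so the rate is plausible with margin for framing overhead.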
  10. The project in the tutorial was developed on the Zedboard which uses a Z7020 device. Digilent boards generally have both a USB UART and JTAG implemented in the same FTDI bridge device and using one USB connector. The Vivado Hardware Manager can use this same USB cable to configure the device and run the ILA. That's how I used the ILA to verify that the design works as intended. The USB UART and JTAG are enumerated as separate devices with different endpoints so both can be used independently. Usually the Hardware Manager automatically finds both a bit file for configuration and an .ltx file for the ILA, if the ILA actually got placed into the bitstream. If you had to find the .ltx file yourself then something must have gone awry during synthesis or implementation that caused the ILA to be removed. You can look through the messages for a clue. I run into this situation from time to time and it's always a silly mistake on my part.
  11. The PS contains a PL330 DMA Controller that can move data from any address accessible to the cores. It's possible to provide addressable connections to logic in the PL through AXI bus IP. For instance you can add an AXI BRAM controller and a dual port BRAM (which resides in the PL) in such a way that the cores can read or write to it as well as the logic in your PL. This is a fairly simple and straightforward way to pass data between the PL and the cores; a short access sketch from the PS side follows below. If you have a lot of data that you want to transfer between the PL and memory that the PS controls, like DDR, there are other AXI IP that implement DMA in logic without PS intervention. The BRAM Controller provided with Vivado is limited to 8KB, but you can instantiate more than one to achieve a larger total size. I would think that it's quite possible to implement, say, a VGA 680x800 8-bit frame buffer in BRAM and do everything that you want to do in logic. It's a matter of scale and what you want to demonstrate. The PS might not even have much involvement, depending on how you do your processing. There is a lot of free video related AXI IP that comes with Vivado that might be of interest, so there isn't one ideal way to implement a particular design. I don't want to steer you in any particular direction. The unfortunate thing about using the free IP that comes with Vivado is that it comes and goes from tool version to tool version. Also, FPGA vendors have a bad habit of breaking IP from previous tool versions. The only alternative is to write your own AXI master or slave. I wouldn't recommend that for anyone without a lot of AXI design experience. Look around. I'm sure there is a similar project published for your board. When using Windows I usually use Digilent's Adept Utility to configure FPGA boards as it can be less of a hassle than Vivado Hardware Manager... especially if I have multiple boards in a design, which happens frequently. I really never was friends with the ISE Impact programming facility.
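From the PS side, touching that kind of BRAM through the AXI BRAM controller is just memory access. A minimal standalone sketch, assuming the usual BSP macro for the controller's base address (the exact name depends on your block design):

```c
/* Walk a few words of PL BRAM from an ARM core through the AXI BRAM
 * controller. XPAR_AXI_BRAM_CTRL_0_S_AXI_BASEADDR follows the usual
 * xparameters.h naming but depends on your block design. */
#include "xil_io.h"
#include "xparameters.h"

#define BRAM_BASE XPAR_AXI_BRAM_CTRL_0_S_AXI_BASEADDR

int bram_walk_test(u32 words)
{
    for (u32 i = 0; i < words; i++)
        Xil_Out32(BRAM_BASE + 4 * i, i ^ 0xA5A5A5A5u);   /* PS writes */

    for (u32 i = 0; i < words; i++)
        if (Xil_In32(BRAM_BASE + 4 * i) != (i ^ 0xA5A5A5A5u))
            return -1;                                    /* mismatch */
    return 0;                                             /* all read back */
}
```

The same storage is visible to your PL logic on the BRAM's second port, which is what makes this a convenient low-effort mailbox between software and logic.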
  12. Any signal input to an FPGA that goes below 0V, like true RS-232, is incompatible with FPGA pin DC specifications. If you add an interface that converts these to one of the 3.3V single-ended IOSTANDARD compatible logic levels then you are good to go. Don't drive FPGA pins below ground. There are protection diodes in the FPGA device, but these are not meant to counter widely out-of-specification input voltage levels. Read the datasheet for the Artix family to see what the DC specifications are. I haven't looked at the Digital Discovery so I don't know what design information is available. 0-3.3V would seem to be a safe bet for inputs to the product, as overshoot is likely for some signal sources. I'm not sure that the Digital Discovery product was designed to compete with the other products that you mention. None of these products are meant to replace those clunky expensive logic analyzers from the big instrument vendors. If you really want something to analyze RS-232 or RS-485 traffic there are likely cheap products to do that. Really, you could turn any cheap FPGA board with external memory into such an analyzer. If the board has an FT232H or similar device and supports synchronous 245 FIFO mode then even better. You'd still have to make an adapter to convert those signals into FPGA friendly logic levels. You then have an instrument that you can tweak to provide just about any analysis you could ever want. I did this a long time ago so I know that it's possible and doesn't require any exotic programmable logic design expertise. It's also a good educational project. If you just want a ready made inexpensive instrument the AD1 or AD2 is hard to beat. You still need to condition signal inputs to appropriate levels.
  13. Of course, assuming that you have any logic in the PL to capture. How do you do that? This is more complicated. It depends on your design flow... the ILA core has a native or AXI version. The logic in the PL can be clocked with a source from the PS clocking infrastructure or external clock connected to a PL pin. If you are looking for a starting point you might consider this HDL design flow example: https://forum.digilent.com/topic/22512-manipulate-pl-logic-using-ps-registers/ If you try the tutorial out and have questions about it post them to that thread.
  14. Please read carefully through UG585, the ZYNQ-7000 TRM. The PS has a DMA controller, and I've used it, but it's not of much use if there's no data path between the PS and PL. At some point, if you want to use the PS DDR, or you want an ARM core to have access to a PL video frame buffer, then you will have to have some AXI IP in your PL logic connected to one of the AXI master or slave ports between the PS and the fabric. The Z7010 isn't very resource rich, but it might be possible to implement your project without much interaction from the PS if your video frame buffer can be small enough and your video resolution is low enough. I don't know if the HDMI interface on the original ZYBO is all that usable. Digilent has improved upon its HDMI interface design in more recent products. The Z7020 is roughly equivalent to an A75 device in terms of resources. I still use ISE on Win10 for Atlys based projects. In some respects that is a better video platform to work with than low end Artix based devices. The Spartan family did not have nearly the BRAM resources that the Series 7 devices have, though. Personally, I usually opt to do development with ISE on my aging Win7 box where everything works as intended. I guess that you have some tough decisions to make before thinking about how you might go about implementing your project. Video depth and resolution will likely be a driving factor in that process.
  15. Good question to think about. Connectors requiring high insertion force generally have low insertion cycle specification numbers; it's all about mechanical design and stress. You can look at datasheets for similar connectors to get the answer. No doubt it will be disappointing, hopefully conservative. Products designed to fit the price range of the Analog Discovery don't usually consider such things, as they don't provide lifetime warranties for parts, and connectors with high insertion cycle specifications aren't cheap enough to use without altering the product price considerably. One solution, if you are worried about it, would be to create a custom adapter to avoid plugging and unplugging things directly on the IO connector. I suppose that if the demand were there Digilent would offer their own solution to those willing to pay for it.
  16. The answer to your question is somewhere in the design sources. If it's important enough you can find the answer. Fortunately, since this has to do with clock generation and resets, you can narrow down the number of source files that you need to focus on. The exercise is well worth the effort, even if only for educational purposes. I've not come across anyone even interested in logic behavior during reset as most people are singularly focused on what's happening when reset is not asserted. It's still a good question to pursue as logic resets aren't as straight-forward as most people would assume. Can I assume that you came across this because the design isn't working as expected?
  17. There's no way to use the MIG or any other IP to allow your PL design direct control over memory controlled by the PS for ZYNQ devices. If you look at the schematic, what pins would you assign the DDR controller PHY to? There are none. The ZCU106 is one ZYNQ based board with a dedicated DDR memory connected to PL pins; I am unaware of any other such Xilinx board. Your only choice, if you need to use external memory in your logic design, is to transfer data to and from PS memory via the AXI infrastructure. I haven't done this but I believe that there are design examples to lead the way. For low resolution displays like VGA you can use PL BRAM as a frame buffer and avoid using the PS DDR (a quick sizing check follows below). The original ZYBO has a VGA connector for low resolution video output and a single HDMI for output and, in theory, input. Both interfaces are connected to PL pins. Again, you need to have some understanding of what the ZYNQ device resources are, as well as how your board is designed, in order to make a usable design plan. If you are going to use a ZYNQ based device or a soft-processor based design, then you need to figure out how to partition your functionality between hardware and software as well. There's a lot of information to consume but you can do what you want with the board available to you. Start with reading the documentation for your board and then the ZYNQ related documentation. It's not a trivial pursuit, so you need to figure out how to scan through all of the basic documentation and concentrate on the pertinent stuff.
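To put numbers on the BRAM frame buffer idea, here's a quick sizing check; the 640x480 resolution, 4 bits per pixel, and the Z7010's 60 36Kb block RAMs are just assumed figures to plug your own values into.

```c
/* Does a low-resolution frame buffer fit in PL block RAM? The resolution,
 * color depth, and BRAM count below are assumptions; edit them to match
 * your display mode and device. */
#include <stdio.h>

int main(void)
{
    const unsigned width = 640, height = 480, bits_per_pixel = 4;
    const unsigned bram_bits = 60 * 36 * 1024;   /* e.g. Z7010: 60 x 36Kb blocks */

    unsigned frame_bits = width * height * bits_per_pixel;
    printf("frame buffer %u Kb of %u Kb available (%.0f%%)\n",
           frame_bits / 1024, bram_bits / 1024,
           100.0 * frame_bits / bram_bits);
    return 0;
}
```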
  18. External memory for devices like the one on the Arty-A7 is handled completely differently than for a ZYNQ based board like the ZYBO. There are very few ZYNQ-based boards with external memory connected to the PS and a separate external memory connected to the PL. You could use a MicroBlaze and AXI bus IP in an Arty-A7 video application, but this would consume a lot of resources, particularly BRAM. The ZYNQ 7000 TRM is a good place to start reading. FPGA vendors have had a bad habit of making some interfaces, like DDR memory, overly complicated and confusing. Dynamic memories are complicated all by themselves. As data rates get higher, so does the complexity.
  19. For a PHY clock of 533 MHz, 1066 Mbps is correct in terms of the data rate per DQ pin. 1066 MT/s is also correct. The actual data rate depends on the width of the DQ data bus, so neither the Mbps nor the MT/s performance specification has much meaning without knowing the latter (a worked example follows below). I'm not sure what you mean by "exploring the SDRAM available on the board". It's possible to change the memory controller setting registers, but this is not recommended. In general, PL designs can DMA data to and from the PS external memory, or internal PS memory, using the PS AXI infrastructure. Of course these memories can be used by applications running on the PS ARM cores. If both the ARM cores and the PL are sharing a memory resource there will be performance penalties. Reading the board Reference Manual is great. Before trying to understand what you can do with your board you should read all of the relevant reference material associated with the ZYNQ FPGA device on your board, as well as the documents related to using its programmable logic (PL).
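For example, here's the arithmetic spelled out, assuming the common 32-bit DQ arrangement; substitute the DQ width from your board's schematic.

```c
/* DDR throughput arithmetic: a 533 MHz PHY clock transfers data on both
 * edges, so each DQ pin carries 1066 Mbps (1066 MT/s for the interface);
 * the peak byte rate then scales with the DQ bus width (32 bits assumed). */
#include <stdio.h>

int main(void)
{
    const double phy_clock_mhz     = 533.0;
    const double transfers_per_clk = 2.0;    /* double data rate */
    const unsigned dq_width_bits   = 32;     /* check your schematic */

    double mt_per_s  = phy_clock_mhz * transfers_per_clk;   /* 1066 MT/s */
    double peak_mb_s = mt_per_s * dq_width_bits / 8.0;      /* theoretical peak */
    printf("%.0f MT/s -> %.0f MB/s theoretical peak\n", mt_per_s, peak_mb_s);
    return 0;
}
```

With a 32-bit bus that comes to roughly 4264 MB/s of theoretical peak, which real designs only approach, never reach.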
  20. MIG is only for external memory connected to PL logic on Series 7 devices. Your board only has external memory connected to the PS block. I assume that what you want to do is access the PS DDR from a logic design in the PL. The path to do that is through the AXI bus resources in the PS. There are IP available in the tools for creating PL AXI interfaces. If you are running the Xilinx Standalone environment or FreeRTOS then there are software examples to help with the software side. The place to start for any FPGA project is to read the documentation for your device, especially for ZYNQ.
  21. Things change... especially when a vendor has a new owner. The way that it was is not necessarily the way that it is, or always will be. Something has clearly changed since AMD took ownership of Xilinx. My node-locked device license for the Kintex 325T works with Vivado 2020.2. In fact, it works for the NetFPGA-1G-CML board, which has a K325T in a smaller package. That doesn't mean that the newer tools will honor node-locked device licenses for those boards. My K325T license is only good through 2021.12, so I can't verify that the ML tools will honor the license (there's no Vivado version 2021.12, and I don't have Vivado 2021.1 installed). It's possible that the rules for node-locked licenses have changed and you may have issues with Vivado 2020.2 and your board... but it might be worth the effort to find out. What I do know is that the ML version of Vivado 2021.2 takes forever to open, but not before telling me that it's timed out, at which point I try starting an earlier version of the tools. I think that it must want to call home... which it can't do on this machine. I also know that when I tried to create a new project in Vivado 2021.2 by adding sources from an older tool version's Genesys2 project, it never completed the required IP "upgrade" process... it just kept telling me that it was working on it. I have used Vivado 2021.2 to create new ZYNQ UltraScale+ projects from scratch, as the "free" version supports my devices. I also know that all of the older tool versions now take a long time to start up since the 2021.2 version was installed. Something has changed with recent versions of the tools. In the short term you might try using Vivado 2020.2. In the past, older versions of the tools seem to have honored newer licenses, though I wouldn't guarantee it. Likely the whole license scheme is different from what it used to be. So far the AMD/Xilinx situation isn't as bad as Intel/Altera has gotten... but rivals tend to imitate each other, usually to the detriment of their customers. [edit] Imitation isn't just a sincere form of flattery... it appears to be a good way to create ersatz forms of monopolistic and unfair competitive practices without legal consequences. The losers of this game generally are the customers. Obviously, a vendor that sells a particular FPGA board that requires a special node-locked device license needs to keep up with the tools, as some boards will no longer be viable in the marketplace if the customer has to pay 3X the cost of the board for 1 tool version in order to use it. I'd suggest that Digilent should be ahead of this and work out issues before customers find them. That means testing all new tool versions with such boards and being proactive in letting potential customers know about any changes that the tool vendor decides to impose on its customers.
  22. Could you be more specific? What IP are you referring to? What hardware are you referring to? The idea of looking for a platform that specifically supports the DisplayPort spec that your GPU supports is a good one. As JColvin hinted at, don't assume that any tool version supports any interface standard for any particular device. Sometimes IP is used to help sell a new device family or board and then the IP gets deprecated in later tool versions. This happens quite a lot with all programmable logic vendors. The original TRD for the ZCU106 might serve as a good example of this. As far as free FPGA IP goes, "beware of Greeks (FPGA vendors) bearing gifts". There's always 3rd party IP, for a price. It sounds like an interesting project but there are a lot of details that might derail your plans. In general FPGA transceivers are general purpose, and IP support for specific standards generally lags with respect to other current hardware. Determining how well such IP supports standard specifications for interfaces like DisplayPort isn't always easy to do. The Kintex part on the Genesys2 has GTX transceivers capable of 10 Gbps line rates. The DP_IN and DP_OUT each support 4 lanes of transceivers. That doesn't mean that the board is a good platform for implementing a specific display standard for any custom video transport application.
  23. Let's discuss your 5000000 baud rate. If your FTDI USB UART bridge device uses the VCP driver then your maximum baud rate is limited. Also, OS "FIFO" resources are limited. Have you considered whether or not your HDL design requires a FIFO? If you use the D2XX driver then you can use the D2XX API to get higher baud rates using C or C++. In general there are no standard applications that use the D2XX driver and support 8 or 12 Mbaud. You will still need to use hardware flow control in your PC software (as well as a FIFO in your HDL design to transfer large amounts of data). FTDI "H" devices support the 8 and 12 Mbaud rates as special cases. Below those rates there is a limited selection of exact baud rates available. FTDI application notes aren't always that helpful but AN_120 might be of interest. UART stands for Universal Asynchronous Receiver Transmitter. As the baud period gets smaller there are a number of problems to deal with. For one, the achievable baud rates become less fine grained (a small calculation below illustrates this). Also, the allowable deviation between master clock rates becomes tighter. Usually, the way to deal with these issues is to increase the clock frequency. You can easily do this in an FPGA, but the internal clock frequency of the bridge device is fixed. Without delving into your implementation, you might consider deciding on a more carefully selected baud rate, and perhaps a different HDL UART clock frequency. Before fixing the communication problem it might be a good idea to think about what your project requirements are. Do you need to capture live data with low latency and process samples on the fly in your PC application? Do you need to capture a long sequence of data for post-analysis? Starting off with a good specification will inform your data interface design choices. This means having a formal, or even informal, system design that is appropriate for your project goals in place before trying to implement it. Having a PC application in the processing loop is a very different thing than having an FPGA data capture design that transfers data to a PC application.
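Here's the kind of calculation I mean; the 100 MHz logic clock and 16x oversampling divider are assumptions, so plug in your own numbers.

```c
/* Why very high baud rates get awkward: with the common integer divider
 * and 16x oversampling, the nearest achievable rate to 5 Mbaud from a
 * 100 MHz clock is off by a lot. Assumed numbers; edit to taste. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double clk_hz    = 100e6;      /* assumed UART logic clock */
    const double requested = 5000000.0;  /* the 5,000,000 baud in question */

    double divider = round(clk_hz / (16.0 * requested));
    if (divider < 1.0) divider = 1.0;
    double actual  = clk_hz / (16.0 * divider);
    double error   = 100.0 * (actual - requested) / requested;

    printf("divider %.0f -> %.0f baud (%+.1f%% error)\n", divider, actual, error);
    return 0;
}
```

That works out to a 25% mismatch, which obviously won't fly; it's why the baud rate and the UART clock need to be chosen together as the rates climb, and the FTDI bridge has the same constraint but with a fixed internal clock.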
  24. There's nothing special about the ARTY as far as using an MRCC or SRCC pin routed to a PMOD connector as a clock source is concerned. You shouldn't be having so much trouble with it... so perhaps there's something else going on. You can definitely use any available clock capable pin as an external clock source regardless of the platform being targeted. I frequently have multiple clock domains using pins that aren't defined in the master constraints file supplied by board vendors. You mention assigning sys_clk to an arbitrary pin. Signal names are important. Try naming your new clock 'clock_p17' or some other unique name. I suspect that reading the messages carefully will help identify the problem. Make sure that your source and constraints are using the EXACT same name for signals. This is one of the many examples where the IPI design flow causes more headaches and confusion than an HDL design flow would if the designer maintains all of the sources, including the constraints file(s). Don't forget to supply at least one timing constraint defining your new external clock frequency. If you have more than one clock domain in your design you need to know how to allow signals in your design to interact properly using good logic design methods. You are likely to require additional timing constraints to get good synthesis and implementation results. Except for the few IPI design example projects that handle this for you, this design flow isn't so helpful when you want to do a custom design. I don't have a huge problem with IPI. It just isn't good for learning FPGA development. Sometimes it's OK for an expert in programmable logic design who just wants to quickly prototype an idea or concept. Even then it only works out well some of the time, as script generated source code isn't usually easy to slog through when things aren't working as expected.
  25. Feel free to ignore the following commentary since I flopped so badly in my first attempt to help. Let's put things into perspective. You want to use an up to date host OS. You want to learn a deprecated tool that hasn't been updated or supported for an age (in terms of how long an OS version stays current and up to date). Is this really reasonable? Are you familiar with ISE? I only pose this question to you because I still do development for Spartan devices from time to time and use ISE. Though I have the ability to use Win10 for such projects, I generally use an OS that was current back when ISE was still being supported, like Win7. There are a number of good reasons for doing this in terms of tool functionality and licenses. It's just a thought. If you want to understand OS compatibility with ISE then the place to go is the AMD/Xilinx tool download page. I don't think that you will want to build a PC using one of those Linux versions just to have a "seamless" experience with a tool that is sorely out of date. Of course I could be wrong... Meanwhile, Adept is a supported way to configure an FPGA that works on the kinds of Linux distributions that you seem to want to use. This might not change your perspective, but even back in the days before Vivado, when ISE was the only Xilinx tool, I used the Adept utilities and avoided Impact when possible. Impact was never a particularly pleasant feature to work with.