Everything posted by zygot

  1. I (almost) always specify a device part number rather than a board in the project settings because I do an HDL design flow rather than the IPI design flow. I mention this because I do run into this situation for larger designs using multiple clock domains sourced by unrelated external clocks. Being unaware of all of the constraints in a design can indeed cause problems. Your observation that the MMCM placement would be correct for an external clock pin assignment coming from the board file sources is more than interesting and worth tracking down. I figure that one needs to be familiar with the quirks and habits of one's team mates if one wants to win games. In the case of FPGA vendor tools it's not always clear that they are on the same team as I am. I decided a long time ago that letting the tools be in charge was not in my best interests even if that meant more work for me. I keep coming across reasons to continue with that strategy. New tool versions introduce new bugs and odd behaviors, as documentation doesn't keep up with syntax or database changes and updating scripts gets overlooked. The trend with the big FPGA vendors is to 'encourage' ( or force ) users into letting the tools manage the details of a project, so I find myself fighting with the tools even more than previously. You'd think that the tools would find a problem with an initial placement strategy that would lead to a hard error early in the analysis phase and terminate processing well before going through a complete analysis and implementation run. You'd think that making a 'pre-processor' that identifies confusing source constraints very early in the operation of the tools and notifies the user wouldn't be that hard to do. No doubt, companies that run the tools from home-grown scripts rather than the GUI interface do this, as finding some problems before going through the algorithm crunching phase should save a lot of time.
FPGA vendor tools don't reveal all of the 'tricks' behind the curtain. Sometimes this causes consternation and mysterious and seemingly bizarre behavior. Either you investigate or just move on. In general modern FPGA devices are so good that most designs never need that much hand holding in terms of timing closure. It might be wise to just put off worrying about having designs that are 'non-optimal' until timing closure becomes a problem... or not. I tend to stick with older tool devils ( versions ) that I know unless there's a compelling reason not to, so spending the time to resolve such quirks makes more sense. When I do need to use a newer version of the tools there tends to be a limit on how much time I'm willing to spend fighting the tools versus how much time I have no other choice but to spend. [edit] It might be an easy and simple experiment to create a separate project, using the same sources, but targeting the proper device rather than your board. I do similar sanity checks when confronted with issues that seem to be very weird. A/B testing is a tried and true form of debugging, particularly when you are debugging the tools. More often, I find myself doing this with a different version of the tools on a different PC host to help expose release bugs and quirks. This works best with HDL designs of course, as breaking old IP is a constant feature of new tools. I'm very cautious about letting the tools know more than they need to about my designs and platform because what exactly they do with board files isn't well documented or transparent. I do know, based on experience, that the same IP, when used in native mode for the HDL design flow, doesn't get implemented the same way or work the same way as when used in the IPI flow... at least for the few times that I've tried doing this to test out new FMC mezzanine cards. Generally, the time-saving and detail-hiding options of FPGA vendor tools end up giving me more problems than any imagined benefit.
I do recommend that no one ever try and run two instances of the tool on the same host as this is a good way to expose some really bad coding practices on the part of the tool developers, in particular for Vivado.
  2. This is one of those instances where the tool actually provides a clear and detailed description of your problem. It's less clear about how to resolve it. FPGAs have clocking regions. Intel FPGA devices tend to be more restrictive in this regard, and boards using those devices require more externally connected clock inputs. You can read the Series 7 Clocking reference manual as well as the documentation on timing closure for more details, and I highly recommend doing that. The tools come with default synthesis and implementation strategies that don't always overcome all FPGA board design choices. You are stuck with the board layout and pin assignments that you have. This isn't necessarily a problem though. You can change the default strategy settings. You can manually place resources like an MMCM to be more "optimal", though this might impact timing closure depending on your design and the IO pin assignments that you have to work with. Probably the best way forward is just to add the CLOCK_DEDICATED_ROUTE BACKBONE constraint suggested by the tools and see if you have timing issues in the post-implementation reports ( assuming that you have the reports set up appropriately ). This likely works for just about anything that you want to do with your board. Vivado provides templates for such constraints, though these are not always up to date, making them not so helpful. The TCL console is another way to obtain valid syntax for constraints. Sometimes this works out. Unfortunately, these features sometimes provide guidance that the synthesis and P&R tools will subsequently reject due to syntax changes. The tools, using the GUI, are just too complicated to maintain... though this problem has a pretty easy fix. It isn't likely that you can connect an external clock source to your FPGA clock capable pin optimally anyway. There are FPGA boards that are designed to accept such signals.
Except for boards with SYZYGY or FMC headers, Digilent boards are not designed to accept external clock sources. Most likely, this isn't a significant issue for any designs that are likely to be implemented on the cheaper FPGA platforms. I'm not sure why Xilinx has decided that a "non-optimal" class message that might be just a warning should be promoted to a bitgen error... perhaps it's just to get the customer's attention to a possible situation where the tools can't produce great results. There are a lot of moving parts involved in pin assignments and resource placement and the tools are designed to avoid certain failure modes. Often overlooked is the fact that clocking wizards in FPGA vendor tools depend on a default jitter specification. This resolves some issues but adds restrictions as well. I haven't tried this but you can experiment with those settings to see if this makes a difference... though I'd be surprised if it does. This is one of those questions where being inquisitive and willing to do some experimentation would definitely be worth the time and effort if one wants to get good at this, but usually most people are just interested in completing a particular objective.
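For reference, the constraint that the tool suggests goes in the design's XDC file. A minimal sketch, where the net name is a hypothetical example ( Vivado typically appends _IBUF to the buffered clock net; use the exact name from the tool's error message ):

```tcl
## Allow the external clock to reach the MMCM over backbone routing instead
## of a dedicated clock route. ext_clk_IBUF is a hypothetical net name;
## substitute the net named in the tool's CLOCK_DEDICATED_ROUTE message.
set_property CLOCK_DEDICATED_ROUTE BACKBONE [get_nets ext_clk_IBUF]
```

Note that this constraint only relaxes the placement rule that triggers the error; it doesn't guarantee timing, which is why checking the post-implementation timing reports afterwards matters.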
  3. How's that ditty go.. "eyes wide open"..? In a sense the challenge turns out to have nothing to do with external memory controllers. There are things that you can learn from textbooks. There are things that you can learn in school. There's a lot more that you can learn from old battle worn engineers who've long fought the information wars with suppliers who claim to be your company's "partner". Product support doled out in tiers according to how important your company is viewed in terms of your vendor's willingness to provide key support predates programmable logic by decades. It's possible to live an entire career in the magical world where electronic components work as expected and life is easy. If the components are high performance or complex, it won't take long for many engineers to find themselves at the mercy of a vendor who isn't forthcoming with information vital to a project's success because you happen to be insignificant to their market objectives. Anyway, without getting too expansive on a subject near and dear to my experiences, let's just say that if you want to do extraordinary things with complex silicon devices, you had better be prepared to find a way around unexpected obstacles thrown in your way. There are not many silicon devices as complicated as modern FPGAs and their joined-at-the-hip sibling, the tools. Understanding that what you see ( or even read ) is not exactly what you get is an important part of this obstacle evasion skill set. Don't get discouraged... sometimes you get lucky ( I once worked for a very small startup that the big boys, more accurately someone working for them, thought had future market interest, and saw first hand an elevation in information tier rating ), and sometimes you just lose a game in the competition between vendor and customer. There's almost always a path to getting to where you need to be though... the important part is how long it takes you to get there.
From what little you've posted about yourself I'm guessing that you are no grizzled old-timer... but you do seem to have the hard won wisdom of one. I've really enjoyed your posts, and perspective... perhaps with more than a few winces of empathetic pain that resonates loudly. Extraordinary.... I see good things for your future. Keep asking questions. For everyone's benefit keep posting the journey.
  4. Nice work and good presentation. I whole-heartedly applaud such efforts, especially when they are published with useful citations. Making it easier to find similar efforts encourages like-minded experimentation. Beyond the very practical benefit of a self-directed educational exercise, being able to eschew vendor IP with HDL sources that you understand is an invaluable asset. In some cases, vendors simply don't care about making their IP usable, or don't want to highlight faulty designs in their hard memory controllers.... so going your own way is a necessity. I've run into just this scenario for a board using a Cyclone V part and LPDDR2. The current tools specifically support the board but the IP is completely useless for a user wanting to have a high performance LPDDR2 design. The IP requires scripts that don't work on Windows, the hard external controller doesn't behave as the sorely incomplete documentation suggests that it should, etc. etc. I am unaware of any published design example that can be replicated demonstrating that the board is capable of burst read or write operation. The effective result is that a board that should be perfectly well suited to a large range of project implementations is rendered unusable for many of them because the external memory can't perform as advertised using the vendor's tools. External memory IP isn't the only functionality that FPGA vendors use to compete with their customers. Ethernet, transceivers and just about any high performance interfaces are also examples where users might find it useful, or necessary, to develop their own IP in lieu of the vendor's offerings.
  5. @jb9631 I don't think that your post fits the category of derailing, or even hijacking, the thread started by @escou64. Most of the content is directly applicable. Some is tangential... but still relevant. Personally, I find that the tangential can be more informative than people tend to realize. Usually, people posting to a site like this one have a single-minded desire to get past a wall blocking them from completing a project, and have no interest in learning about the details, much less the tangential ones. But it's those tangential bits of knowledge and perspective that make dealing with the next hard challenge a bit easier to tackle. So, based on your post, here are a few thoughts that may be worth reading, or not, depending on how you approach FPGA development. Historical context can be very informative and helpful in dealing with FPGA vendor tools and IP that are hard to decompose. The vendors would rather have customers that are only interested in completing the task before them now, preferably using the handful of vendor IP that the tools offer, not concerned with the details of how they work. There's nothing wrong about wanting the tools to do your work for you, and not caring about learning details... as long as the tools get the job done. If you are going to spend a lot of time doing FPGA development, particularly using the 'advanced' features of the devices, then you are likely to get disabused of that model very quickly. If you understand the features of those old Spartan 3 devices, it's a pretty fascinating journey comparing those ancient families to the most recent ones. I have no doubt that implementation design improvements and FAB node evolution have made the basic clocking and IO features better, in most ways. But not better, necessarily, for all customer needs. I don't just use one FPGA vendor.
It's interesting how the two giants in the market have copied what the other does, both in terms of device capabilities and features, and tool experiences. There was a time when Altera tools were so much more stable and bug free, compared to its rival's, that many companies selected programmable devices on the merits of the tools alone. Those days are gone as all of the software that makes up the tools has gotten too complex to manage. There are two spheres of reality concerning the tools. There's the stuff that the customer can see, that is the GUI, the Tcl, Perl and other scripts that set up and create the vendor IP and connect all of the sources together, and the executable software where the 'magic' happens and sources are turned into bitstreams. In my lengthy experience, regardless of the silicon device, documentation in the way of datasheets, application notes, and such has always been a bit short on details, or just plain wrong, when the customer is faced with difficulties using a device for a particular purpose. Undeniably, part or most of this is due to how difficult it is to write, much less maintain, documentation that's error free. Also true is that this is an opportunity to game the customer experience with vendor devices in favor of the vendor. I won't elaborate because this discussion is more worthy of book writing than forum posting. My point is that sometimes one just has no option other than embarking on a quest for the holy grail, which in this context means understanding details and conducting an extensive hunt for relevant information that separates the nonsense from the necessary facts. There is no 'conspiracy theory' at work here... just the unfortunate side of how people and companies make money and compete in the marketplace. I mentioned one facet of programmable logic that the customer can't ever parse: the inner workings of the executable(s) that grind out loadable bitstreams used to configure programmable logic devices,
that is, the tool software. It's always been the case that what goes on behind the curtain isn't exactly what the documentation and all of those IP, timing and project related scripts would suggest. If one wants to use the advanced features of UltraScale devices, this becomes all the clearer. A lot of control and implementation details are really handled in the tool software. This most certainly puts the advantage in favor of the vendor in terms of what a customer can do with vendor devices and, in the end, how income gets generated. In my experience there has always been, for certain projects, a disconnect between what vendor documentation says and actual implementation experience when using some device features. With UltraScale this boundary between what the documentation says about how certain features are implemented and what's actually controlled behind the curtain is just more apparent; never mind the trend toward more encrypted sources and less control for the user. In the case of the other major FPGA vendor there's no embarrassment about using the vendor documentation, tools and scripts to 'encourage' customers toward a path that results in greater dependency on the part of the customer and a greater income stream for the vendor. All of these things are interconnected. I'm a bit less capable of being succinct, but I do think that for some people there's a lot more to doing programmable logic development effectively and efficiently than just being able to use vendor supplied resources to complete a particular objective. Hopefully, my thoughts are worth thinking about. If not, you've probably not gotten this far and that's OK with me. What's boring or meaningless to one person might be a helpful spark of insight to another.
  6. I doubt that you'll find many Questa users reading your post, but I hope that you'll get lucky and prove me wrong. I know that Intel has ditched ModelSim in favor of Questa but that's pretty recent. I'd say that the best place to get an answer to your question depends on who your Questa vendor is. If it's Intel then you'll have to hope that their documentation will be helpful. If it's not then you'll probably have to get the details of mixed language simulation support, based on your company's tool license, from your vendor's documentation. An alternative might be to write your design in Verilog. Based on your signal spy comment you must have some knowledge of the differences between VHDL and Verilog in terms of their simulation capabilities.
  7. It's doubtful that having a blank page for Digilent designed FPGA board schematics covering configuration will result in significant lost sales. Ill will? Probably. But hiding details of product schematics is becoming a worrisome habit for Digilent. For a product like the ADC/DAC ZMODs, this is more problematic for customers who have a need for such things for other than educational or hobby related projects. It does represent a troubling change for those who've relied on Digilent as a vendor. Configuring an FPGA doesn't require a detailed schematic, unless the design is faulty. Not having the details of an ADC or DAC circuit might well make such a product unsuitable and cost Digilent sales. For configuration, customers might be inclined to think that perhaps you have some sort of ingenious trick that might go under the heading of IP, and let it pass as long as they don't have configuration issues ( which includes the JTAG header ). For converter analog conditioning that's pretty hard to let slide. Customers who need the details are more likely to wonder: what is it that they're afraid of me knowing about? I guess that it's more important than ever for prospective customers to study the documentation, and in particular the schematic, before making a purchase.
  8. The demo that we've been talking about works fine on Win10 and the CMOD-A7-35T. You just need to install the prerequisite software as mentioned in the demo README.txt file under 'Demo Requirements:'. It takes about a minute to read through. I've used the sources to create a bitstream and checked the provided Python script out on Win7, Win10, and Centos 6; all of which have the necessary software installed. Your problems with the demo have nothing to do with the provided materials. When people do post questions about projects that I've posted I am quick to respond. My Python 2.7 script isn't likely to be much help for you if all you use is Python 3. I've never asserted that the FT2232H on the CMOD-A7-35T didn't have issues with both the UART and JTAG USB endpoints in use with software applications running concurrently. In fact, in both the demo README.txt file and in this thread I've said the exact opposite. Somehow it's one point of agreement between us. I've told you how I use a cheap TTL USB UART cable when I want to use a UART in my CMOD design and the ILA using Vivado Hardware Manager at the same time. I did assert that the FT2232H can have both channels operating at the same time. This is true and I do it all the time with other FPGA boards (designed by Digilent and other vendors) that aren't the CMOD-A7. Sometimes people make a habit of running into problems and arriving at false conclusions and decide to post incorrect conjectures and misinformation rather than just asking a simple question, which would generally be more appropriate and avoids animosity. I have quite a bit of experience using USB, especially with FPGA devices. I think that I might have helpful information for people struggling with this interface... if they can read my comments, as written, and avoid injecting irrelevant inferences that don't belong in the conversation. This whole exchange started with you asserting "The CMOD-A7 can NOT use its on-board FT2232H as a full duplex UART".
I think that we've gotten past that, and ( I think ) you seem to be saying that my posts aren't helping inform you so I'm done with this thread. I'm glad to know that you have a humble side. Perhaps that side can be the one who posts questions when you run into problems instead of wild incorrect assertions. The Digilent Forums are a place for people to post technical questions and, hopefully, get factual and helpful answers. Anyone can read a thread of posts on a topic that interests them. Everyone should try to maintain a forum that spreads correct technical information.
  9. I'm not particularly proud of this demo. It was just a simple project with a few specific goals to achieve. It does, despite what you want to believe, work as a reasonable demo for the CMOD-A7-35T, out of the box, using current tools and minimal effort, and accomplishes the goals that I had for the project. If anyone wanting to try it out bothers to read the short documentation text file I expect that what's presented is potentially useful for them. No one posts demo projects with code for the purpose of being incorporated into someone else's project. I really don't care about what happens with your project and have no interest in helping you complete it. I don't feel any compulsion to defend any product vendor. I do feel compelled to make an attempt at addressing statements that are posted to the forum that are just plain incorrect. Generally, I'm more concerned about the casual reader of posts who might be steered in the wrong direction by blatantly false conjectures than I am about correcting the person who wrote the post. Sometimes, trying to help, whether the effort is useful or not, is met with gratitude; sometimes with hostility and contempt. That is, unfortunately, life. It is frustrating to have people excoriate and complain bitterly about statements and intent attributed to you that are not your statements or intent, or, more often than not, the opposite of what you've posted or intended. It's particularly sad to read your last post on this particular day. These are indeed dangerous times.
  10. FPGA devices don't have 'internal' clock sources. You can use an MMCM or PLL to take the only external clock available on CMODs, the 12 MHz USB clock, and create a suitable clock of just about any frequency ( there are limitations due to how the clocking resources in the MMCM and PLL work ) to clock your logic. Read the Series 7 reference manuals for information. The materials for your Vivado installation are available through the Xilinx Document Navigator. Read the datasheet for information about capabilities of any particular device and speed grade. Expecting to clock a design at the maximum clock frequency in the datasheet is likely to be overly optimistic. I heartily recommend taking advantage of the FPGA substrate temperature sensor for Series 7 designs as most boards aren't designed to support the power supply needs or heat dissipation requirements for all designs that can fit in the FPGA. This is particularly true for the CMOD boards.
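The MMCM arithmetic behind those limitations is simple to sketch. A minimal example, assuming the 7 Series relationship Fvco = Fin x M / D with Fout = Fvco / O, and assuming a 600-1200 MHz VCO window ( the figure for a -1 speed grade Artix-7; check the device datasheet for your part and speed grade ):

```python
# Sketch of 7 Series MMCM output-frequency arithmetic.
# Fvco = fin * m / d must land inside the VCO window; fout = Fvco / o.
# The 600-1200 MHz window is an assumption for a -1 Artix-7 part;
# verify against the datasheet for your device and speed grade.

def mmcm_fout(fin_mhz, m, d, o, vco_min=600.0, vco_max=1200.0):
    fvco = fin_mhz * m / d
    if not (vco_min <= fvco <= vco_max):
        raise ValueError(f"VCO {fvco} MHz is outside {vco_min}-{vco_max} MHz")
    return fvco / o

# 12 MHz CMOD clock to 100 MHz: Fvco = 12 * 50 / 1 = 600 MHz, 600 / 6 = 100
print(mmcm_fout(12.0, 50, 1, 6))  # -> 100.0
```

This is why "just about any frequency" has limits: M, D and O must combine to keep the VCO in range, so some exact output frequencies are unreachable from a 12 MHz input.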
  11. With the information that I have so far I'd see this as an HDL only project. If I had to use a ZYNQ device I might have a minimal amount of SW, if there was a compelling need. I'd rather do HW development though I usually do both for most projects. My wheelhouse is digital and programmable logic design and development so when I can I eschew components requiring software development. I've built up a considerable toolbox of IP that I've written and used for years. I've developed debug tools that I use frequently. I mention this because you are a different person. You have to fit your approach to your strengths, which are unlikely to be similar to mine. You might not want or need your logic to have its own external memory. You might find that Digilent's Eclypse-Z7 resources are a good fit for your project. These are things for you to figure out for yourself. I can provide one perspective based on my personal experience that might, or might not, help set a course for your project that steers clear of reefs and treacherous seas. One thing that I can't know is what the best course for you is.
  12. There are a lot of similarities between HDL languages like VHDL and Verilog ( and System Verilog ) and processor high level languages like C and Ada. There are also significant subtle and not so subtle differences in the concepts; these differences are what can prolong the path to achieving competence in programmable logic development. Notice that I refer to development for logic design competence. That's because the verification tools and skills for programmable logic design verification are significantly different than for software verification. This is mostly because there are a lot more details and variables to consider for programmable logic connected to external devices operating in real time. Concepts like time and time delay, or even what a logic state happens to be at any instant of time, are a lot more fine-grained in logic design than for software design. If you approach the HDL design/development flow as more akin to digital design, but using text to represent your architecture rather than schematic capture, then I believe that learning it will be easier and quicker. If you simply relate to the HDL design flow as if it were just another high level software language, then getting good at programmable logic design will take longer. Of course anyone can design bad logic and bad software. For both software and logic there's just no substitute for experience. For both disciplines the goal is reliable, robust, error-free performance over all operating conditions. Achieving a low level of competency in either discipline without progressing to the next level is something that we all face. So this is my segue to where I say that learning competency in programmable logic design is very hard to do as a self-taught discipline. Having the support and guidance of others that are more competent and experienced is a great benefit. But of course, you beat me to the 'it depends' part of the answer. Some of us are fast studies, some ( like me ) are not.
Studies have shown that the quality of self assessment is worse than most of us would assume; but I guess that this is the best that we have. So, having a good ability at self assessment is a good start to knowing how long it might take to achieve a level of competency using the HDL design flow. Both VHDL and Verilog started as languages for simulating hardware behavior. In the early days of programmable logic neither was available as a way to describe your design. VHDL is a child of Pascal and Ada, Verilog a child of C. If you are starting out I'd recommend Verilog for a variety of reasons that I won't go into here. Some people think that VHDL is harder to learn... others think that Verilog is harder to learn. One thing for certain is that, like C versus a strongly typed language such as Pascal, Verilog allows you to write perfectly valid code in a very terse and obtuse style. This can make other people's code harder to figure out. As I mentioned before, verification is a key to good software development and hardware development. The difference is in complexity. I frankly don't know how software verification is taught in schools. For programmable logic development, verification by writing testbenches should be a basic part of any good coursework as they are inseparable processes in the design phase. Writing a good testbench for programmable logic device simulation is a lot harder than actual HDL design as the results depend on how well the testbench covers all of the pertinent details. At some companies, it's a whole separate specialization. The good news is that there are multiple levels of logic verification. The behavioral level simply confirms that your HDL code does what you think that it's doing, without detailed timing considerations. RTL simulation covers more detailed considerations including the behavior of external components that your logic is connected to. Timing simulation takes into account the behavior of your synthesized, placed and routed logic.
Fortunately, FPGA vendor tools come with simulators suitable to do the behavioral and timing simulation. They tend to be deficient in the area of coverage of corner cases, but there are free alternative tools for automated coverage available, especially for Verilog. If you want a good logic simulator with code coverage you will have to pay for it. So this presents something of a tautology. Your design HDL is only as good as your understanding of the details of your design requirements, conceptualization of what's happening in actual hardware and quality of your HDL constructs and structures. Your testbenches are written in the same HDL, though usually using parts of the language that are not synthesizable, and are only as good as your understanding of the corner cases and details of how actual hardware operates. What's it all mean? For simple behavioral verification of boolean logic, simulation might just be a quicker way to identify errors in your HDL expression faster than the tool flow might. For complicated logic designs, operating at high clock rates, and connected to external devices, it means an iterative process of discovery and trial and error. But, regardless of your design, it's foolish to try and create logic designs without creating verification test bench designs and simulating the combination. My view is that verification is a process of debugging and improving both the designer and the design. Neophytes tend to exclude the human connection to the design expression. All of this has been a bit long-winded. I think that what you really want to know is whether learning an HDL is possible in a short window. Well yes, I think that it's possible. It's certainly not easy. Unfortunately, the only alternatives are the IPI flow and FPGA board supporting code. If you are lucky with both of these you might be able to complete an arbitrary design project with relative ease as what's provided does everything that you need to do.
Except for replicating particular designs you should not expect to get lucky. For the IPI flow, when the 3rd party IP doesn't get the job done, all you are left with is your HDL skills and lots of code that is very hard to understand, or impossible to understand because it comes with encrypted sources. I've used a lot of FPGA development boards from a lot of vendors and I've yet to come across one with sufficient support to use all of the board resources and external components for any project that I want to do. The IPI flow might be a quick prototyping experience for a very competent HDL flow designer wanting to get a feel for how much work his project will be. So now you have a sense of why I recommend the HDL flow to beginners. It's a complicated and arduous journey even when guided by a structured learning environment. It's really hard for most people to learn on their own. It's also, in my opinion, unavoidable. There are websites that will assist the self-educational approach to HDL design. Unfortunately, there isn't a lot of online help available for writing effective testbench code for logic simulation. If you look around the Digilent Forums there are design and testbench examples available as templates. More so than with software development design sources, understanding how or why the code works is generally not as easy as just reading the sources. Learning by imitation is a hit or miss proposition for both software and logic design and development.
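To make the testbench point above concrete, here is a minimal sketch of a self-checking behavioral testbench in Verilog. The mux2 design and all of its names are hypothetical stand-ins; the structure is what matters: stimulus generation, a DUT instance, and checks, written in non-synthesizable Verilog.

```verilog
`timescale 1ns/1ps

// Hypothetical design under test: a 2:1 multiplexer.
module mux2 (input a, input b, input sel, output y);
  assign y = sel ? b : a;
endmodule

// Self-checking behavioral testbench: drive stimulus, wait, compare.
module mux2_tb;
  reg a, b, sel;
  wire y;

  mux2 dut (.a(a), .b(b), .sel(sel), .y(y));

  initial begin
    a = 0; b = 1; sel = 0; #10;
    if (y !== a) $display("FAIL: sel=0, expected %b got %b", a, y);
    sel = 1; #10;
    if (y !== b) $display("FAIL: sel=1, expected %b got %b", b, y);
    $display("testbench done");
    $finish;
  end
endmodule
```

A real testbench for a real design would sweep corner cases and model the timing of external components, which is exactly the part that makes testbench writing harder than the design itself.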
  13. Additional thoughts. I want to make clear that what follows should not be taken as advice or suggestions as to how you should conduct your project. I'm trying to be careful not to influence your decision making, for a variety of reasons, in fairness to you. One of the most important parts of this kind of project, and I've done a lot of this kind of project, is selecting the best platform on which to implement it. Due to design and implementation decisions the Eclypse-Z7 is not an easy platform to work with. Digilent's software support and their use of Github are, ahem, less than ideal. If you read the Digilent sales claims for the Eclypse-Z7 you might have the sense that it comes with full support to take on any random project with minimum added engineering effort on the part of the user. This is an assumption that might have horrific consequences for many customers. Since you have the board to play with you can get to know its potential and quirks, warts and all. There are non-ZYNQ platforms with SYZYGY interface connectors... Digilent sells one though I haven't used it. The extra effort and potential problems that a dual track development project entails, that is SW and HW (plus integration), may be worthwhile or just a hindrance. The difference might hinge on respective skill levels and comfort levels, but if a project's goals can best be accomplished using just HDL development then that would be high on the list of considerations as one plans out the implementation. Of course, sometimes the development platform is preordained. Usually, more elegant and less complicated designs work out for the best. A lot of variables go into such a calculation. I've completed over 15 projects for the Eclypse-Z7 just to evaluate its suitability as a platform for the kinds of projects that I might want to use it for. I've posted 1 demo project for the board. 
One advantage of an FPGA platform without software ( for this discussion specifically ZYNQ ) is that the logic has a direct connection to a large external memory to store captured data. This is ideal for situations where you might want to capture unscheduled, random events, with indeterminate event periods and indeterminate time periods between events, especially if you don't want to miss any events. You might or might not be able to do the same thing using an ARM based FPGA... but it certainly will be a more complex solution involving more effort. Sometimes projects come with constraints that limit your design and implementation choices. Sometimes, often with programmable logic projects, you are less constrained with choices and free to 'go wild' with almost endless possibilities. The difference is in how many and how well hidden the 'surprises' are. For a project involving a micro-controller you have a lot of fixed options; fixed machine language, fixed hardware resources, fixed software tool-chain, (hopefully) adequate documentation. For a project involving programmable logic there are few such constraints. This makes preparation work all that much more important.
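As a back-of-envelope sketch of what direct DDR access buys you for contiguous event capture, the capture window is just buffer size divided by sample throughput. The numbers below are hypothetical illustrations, not Eclypse-Z7 or pod specifications:

```python
# Rough capture-depth arithmetic for a logic-attached DDR buffer.
# All figures here are made-up examples, not board specifications.

def capture_seconds(ddr_bytes, sample_rate_hz, bits_per_sample):
    """Seconds of contiguous capture a DDR buffer can hold."""
    bytes_per_sec = sample_rate_hz * bits_per_sample / 8
    return ddr_bytes / bytes_per_sec

# 512 MiB buffer, one channel packed into 16-bit words at 100 MS/s:
secs = capture_seconds(512 * 2**20, 100e6, 16)
print(f"{secs:.2f} s of contiguous capture")  # about 2.68 s
```

The same arithmetic, run in reverse, tells you how large a buffer your worst-case event duration demands.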
  14. Yup, if you change your bandwidth of interest to 50 MHz then an Fs of 100 MHz or 125 MHz might be fine. I specifically use the word might because it depends on the details that you are looking for. There's a difference between digitizing repetitive signals and one-shot signals. I guess that this analysis is part of your project to work out. You are correct that Ethernet isn't necessarily easy. There is a difference, in terms of design tasks, between using an Ethernet port connected to the PS of a ZYNQ based board like the Eclypse-Z7, a soft-processor like MicroBlaze implemented in logic, and an Ethernet PHY driving pure HDL logic. All have their own set of barriers to overcome. Typically, people use Linux as a way to hide the software complexity involved. Unfortunately, for software dependent design choices like hard or soft processor implementations, using Linux isn't trivial. You will need to do your development on a Linux host for Xilinx based platforms. 921600 is the upper limit of most OS UART support if you aren't using flow control. It's my default baud rate for debugging and PC connectivity. It's also the upper limit for USB UART Bridge COM/TTY device driver support. The only gotcha is the depth of the data FIFOs in the FPGA hardware and the OS. Transferring large amounts of data can require flow control. This is easy to overcome in an FPGA design, perhaps more complicated for a Windows/Linux software application. For FTDI UART bridge (*H) devices using the D2XX driver and custom software applications you can do up to 12 Mbaud, depending on the bridge device. The UART is a pretty useful PC interface for most projects. The main issue with a UART is that it's generally restricted to ASCII characters. You can, of course, convert each byte into two ASCII hex characters in exchange for half the data rate. I have code posted in the Digilent forums to demonstrate. 
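To illustrate the ASCII-for-binary trade-off mentioned above (a generic sketch, not the code I posted): every raw byte becomes two ASCII hex characters on the wire, so the effective payload rate is halved:

```python
# Sketch of ASCII-hex framing for a UART payload: each raw byte becomes
# two ASCII hex characters, halving the effective data rate.

def to_ascii_hex(data: bytes) -> bytes:
    """Encode raw bytes as uppercase ASCII hex for a text-only link."""
    return data.hex().upper().encode('ascii')

def from_ascii_hex(text: bytes) -> bytes:
    """Recover the raw bytes from their ASCII hex representation."""
    return bytes.fromhex(text.decode('ascii'))

payload = bytes([0xDE, 0xAD, 0x3F])
framed = to_ascii_hex(payload)        # b'DEAD3F' -- twice the wire bytes
assert from_ascii_hex(framed) == payload
```

The upside of paying that factor of two is that the stream stays printable, so any serial terminal can be used to observe or inject data.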
Perhaps the most important part of your project is partitioning the implementation between software and hardware... assuming that you decide to use software. It is possible to use your Eclypse-Z7 platform to implement your project ( based on what I've been told so far ) without using the ARM cores or any software. My only advice is this. Let the FPGA do what programmable logic excels at; high speed, tight timing control, parallel process execution, etc. Let any processor do what it's good at; complex state machines like a full TCP/IP stack, quick functional modification, using known canned libraries for functionality, etc. As always, the most important part of any design is the first stage; that is the part where you work out what all of the important details are and how you plan on addressing them. If you don't get this part close to correct, everything that follows will become exponentially more difficult or impossible to resolve. The ZYNQ ARM processors are quite fast... fast enough to cause bus faults on poorly implemented AXI busses. It's possible, depending on project requirements, to avoid AXI busses and still connect the PS to the PL. A fast processor and low latency interaction with hardware signals isn't usually practical. Beware of naive expectations unsupported by experimentation. I just had an interaction with another person concerning the ZYNQ and OTG. Be careful.
  15. Suggestions for a good sample project depend on a number of things: What are your goals? Just wanting to see the board do something to show that it works? Want a project that you can modify and build? What is your skill/knowledge level? Do you have experience writing and developing with VHDL or Verilog? There are two basic design flows. The HDL flow uses VHDL or Verilog as source files. The IPI flow uses a GUI to connect someone else's IP into a design. Sooner or later you will need to be competent in the HDL flow, so I recommend this as a starting place for beginners. There are example projects that are designed to use specific external interfaces like DDR, LEDs, switches, etc. that are board specific. There are example projects that can work with pretty much any board and include HDL sources. These can be modified to suit a particular board. Generally, the only external interface used is a UART. It would be nice if board vendors provided suitable HDL demo projects for every board that they sell but this isn't always the case. FPGA vendor tools complicate this. So, tell us a bit more about what you are looking for. I keep thinking that I should post a general beginner's tutorial with example projects for people with posts like yours; but frankly, while I empathize with customers, I am reticent to provide a free service to vendors who don't want to provide the support that their paying customers deserve. You could do a web search for projects for a board that the vendor doesn't support. You might get lucky.
  16. Separating the marketing hype from the facts always requires a bit of spelunking. Often the answers are hidden in some deep remote crevice of the cave. OTG Host mode requires quite a bit of software to support the hardware capabilities. If it were easy Xilinx would have application notes and design examples. They don't. That says something about their commitment to their products and customers. Unfortunately, it doesn't stop them from providing hope in the form of unreasonable expectations. I suppose that these days, that's par for the course. Anyway, before selecting a design architecture, it's always good practice to find evidence supporting the idea that a feature, implied or stated, can actually be implemented with a reasonable effort, using the vendor design and development tools. Sometimes, a trip to the ARM website for technical information can provide clues to a path around obstacles.
  17. Yes, I understand what you are trying to do. For background, the Spartan 6 devices had a hard external memory controller supporting multiple channels. When Xilinx abandoned the hard controller in the Series 7 family it also abandoned the multi-channel external memory model, now implemented in logic. That might tell you something. I believe that there might be a path to a multi-channel external memory controller using AXI IP, which is completely at odds with an all HDL design flow in my opinion. I haven't explored this avenue. In theory you might be able to achieve your VDMA goals but it's likely a bit harder than you imagine. That's why I suggest starting off with a basic design project using the Mig IP as a start. In my experience, vital knowledge is gained by struggling with incremental changes in design complexity. Knowledge isn't just what you read; it's how you connect the basic details conceptually, how you understand the subtle interactions and minutiae of what's going on in your logic. You can't get that by skipping ahead to the final result for complex designs. I'd put DDR in the complex design category. The content in your post tells me that you are stuck. To me that says that your experience and debugging skills haven't quite prepared you for this project. Sometimes you can get lucky and solve problems without having an intimate knowledge of the details of the inner workings... usually you don't get to be lucky. Simulating a design using the Mig IP is certainly not impossible. It is a vital skill, however, just hard to do due to the way that the IP was implemented. I do encourage you to simulate a basic design using the IP. The fruit of the effort will be more than worth the time and energy that it will take. Do use Verilog for your toplevel module if you want to succeed in simulating the Mig. You can read the Mig IP user manual for details about its debug facility to determine if it's worth the added complexity in source code and implementation. 
The ILA is a quick and relatively painless way to peek into your hardware as it operates. It's also very limited in triggering capability to capture specific events related to complex logic states. It's amazing what a state machine, a FIFO, and a UART can do to aid complicated debugging that the ILA can't handle or has a hard time with. Vivado debug facilities can become unusable when debugging some designs. I know this from experience. I realize that you see a fairly simple and straight line for accomplishing your project goals. This might be because you lack the comprehension and understanding of how the logic actually operates. Start with a careful reading of the Mig IP user's manual to get a sense of what the moving parts are. Gain skill and knowledge by starting with something that works and progressing toward your ultimate goal. Broaden the range of tools in your debugging toolbox to include simulation and custom debug IP that you've written and tested. The tutorial isn't complete, by the way. I haven't yet included the simulation part. I've simulated Xilinx Mig designs but haven't worked out how to present the information in the way that I want, suitable for the rest of the tutorial. It's complicated and not well documented. One hint is to have an all Verilog design and testbench. Another is to short-circuit the calibration phase as this consumes too much simulation time, to the point where the simulator is too slow to be functional. Some knowledge is available in things that you can read and some knowledge only comes from doing, failing, figuring out a way around obstacles and 'rinse and repeat until success'.
  18. Hi Markus, Sounds like an interesting summer ahead. You've gotten off to a good start with the tools and a selected platform. Unfortunately, there are no SYZYGY ADC pods that I know of that support a 100 MHz analog bandwidth of interest, so this should be something to investigate. It's a curious mystery as to why no one has designed a SYZYGY converter pod with sampling rates and an analog bandwidth more suitable to its capabilities. Even Digilent's high end instruments don't use the Series 7 IO advanced features to get the kind of performance one would expect. You don't mention how many contiguous samples your PD events last. The Eclypse-Z7 has a limited ability to collect contiguous samples. I'm assuming that a PD is a one-shot phenomenon. As for how you report your processed data results, the description that you've provided suggests that a 921600 baud UART might be just fine; it depends on how much information, and in what form, it's presented. If you decide that the Eclypse-Z7 is appropriate then there's always Ethernet. If that doesn't seem like much of a challenge there's the OTG USB interface. Have you implemented a ZYNQ Ethernet design that transfers large amounts of data between a PC and the FPGA? This might involve more excitement and fun than you expect. There certainly are applications for which the Eclypse-Z7 is an appropriate platform to meet project goals. Those would be a small subset of potential applications due to its design.
  19. "I have an Ubuntu install I can try this out on, but it won't fly in windows without some changes to the Python script" The reason why I use Python is that the same script can be used on either a Linux or Windows host. The only difference is that Windows has COM ports and Linux has TTY ports for UART connectivity. GUI applications are another matter. I developed the project code on Windows and have built it and run it on both Windows and various Linux distributions.
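A minimal sketch of that one platform-dependent detail; the specific port names below are examples only, not anything from the project:

```python
# Sketch: the one platform-dependent detail in a cross-platform UART
# script is the port name. The names below are illustrative examples.
import sys

def default_port() -> str:
    if sys.platform.startswith('win'):
        return 'COM3'          # Windows enumerates UART bridges as COM ports
    return '/dev/ttyUSB0'      # Linux enumerates them as TTY devices
```

Everything else in a serial script, the command framing and response parsing, can stay identical on both hosts.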
  20. I suppose that you've looked this post over: https://forum.digilent.com/topic/22197-a-guide-to-using-ddr-in-the-all-hdl-design-flow/ Debugging a complex design using the ILA is going to be a slow and arduous sled ride. First, have you gotten a basic design to work in a manner similar to what the tutorial presents? If not, that's the place to start. The Mig IP is nice because it gives you all of the sources in Verilog. It and the example project, unfortunately, are written in as obtuse and difficult to sort out a manner as possible. Trying to turn it into a multi-channel DDR controller is going to be difficult. The first phase of pain will be to simulate the working vanilla Mig-based design as a baseline. You will have to do your design in Verilog. If you can't work out how to do that step, then you are in for some hard times. This is the only rational way to know if your own Verilog modules are working as expected. If you can't do that step then you will have to slog through the IP code and instrument it. Starting from something that doesn't work is most likely going to take more time than starting with something that does work and that you can enhance bit by bit. If you are interacting with the Mig controller properly I wouldn't expect to see the kinds of behavior that you are experiencing. That kind of points to your VDMA code. The Mig has a debug feature that you might want to enable. This may or may not help. I suspect that you are expecting the DDR controller to operate in a linear manner, and that is probably not what it is doing. If you aren't managing all of the read/write bytes per burst operation this would lead to problems. If you aren't monitoring fault or error signals deep within the IP then it's going to be hard to figure out what's going on using an ILA. The ILA isn't ideal for all debugging needs. Sometimes you have to create your own LA IP to capture elusive events and present them in a form that you can understand.
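On the bytes-per-burst point: a client of a DDR controller has to source or sink exactly one burst's worth of data per command, every time. A generic arithmetic sketch (illustrative parameters, not any particular Mig configuration):

```python
# Generic DDR burst-size arithmetic (illustrative, not tied to any
# specific Mig configuration): every burst moves a fixed number of
# bytes, and the controller client must account for all of them.

def bytes_per_burst(dq_width_bits, burst_len):
    """Bytes transferred by one burst on a DDR data bus."""
    return dq_width_bits * burst_len // 8

# A 16-bit DDR3 interface with BL8 moves 16 bytes per burst.
print(bytes_per_burst(16, 8))   # 16
```

Writes that supply fewer bytes than this, or reads that pop fewer, leave the client and the controller out of step, which shows up as exactly the kind of "non-linear" behavior described above.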
  21. You are incorrect with this assumption. The FT2232H gets enumerated with 2 separate USB endpoints; one for JTAG and one for the UART. Both endpoints can operate 'simultaneously' ( that is they can both be enumerated and used by the same or different software applications at the same time ). The Digilent CMOD-A7-35T is the only FPGA board using an FTDI bridge for JTAG and UART endpoints that I've come across that seems to have issues. Every other such FPGA board that I've used doesn't exhibit this problem. Not everyone has reported the same experience. Digilent isn't the only company to use FTDI USB bridge devices this way. You don't have to know Python to figure out how to use the script that I've provided to interact with the demo hardware. It really helps if you read the /Doc/README.txt file in the demo archive. You can use Putty on Ubuntu or Windows following the README commentary and glancing through the C7Test.py script. For instance, to read the XADC substrate sensor temperature, from C7Test.py:

    xadc_read_reg = '0'
    xadc_write_reg = '1'

    # Display FPGA core temperature in Celsius degrees
    def get_xadc_temp():
        TestData = 'W ' + xadc_write_reg + ' 0' + ' \x0D\x0A'
        ser.write(TestData)
        time.sleep(.2)  # delay
        TestData = 'R ' + xadc_read_reg + ' \x0D\x0A'
        ser.write(TestData)
        ResponseData = ser.read(regrd_length)
        n = int(ResponseData[4:regrd_last_data_digit], 16)
        t = float(n)*2015.0/(16.0*16384.0) - 273.0
        print 'FPGA Core Temperature = ', t

In your serial terminal program this is what you type:

    W 1 0
    R 0

As the README.txt file mentions, there is a space between the W, 1, and 0 and every command line ends with the Enter key. After sending the command to read the xadc_read_reg (Reg0) the HDL will display the register and the std_logic_vector value of that register at the time the read command was received. To read any of the HDL registers just type: R x , where x is replaced by the register number. 
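The register-to-Celsius scaling in that script can be sanity-checked without any hardware attached. Here is the same arithmetic lifted into a Python 3 function; the raw code in the example is made up for illustration:

```python
# The XADC temperature scaling from C7Test.py, as a standalone Python 3
# function so the arithmetic can be checked without hardware attached.

def xadc_code_to_celsius(n: int) -> float:
    """Convert a raw 16-bit XADC temperature register value to Celsius."""
    return n * 2015.0 / (16.0 * 16384.0) - 273.0

# A hypothetical raw code of 0x9A00 works out to roughly 30 degrees C.
print(round(xadc_code_to_celsius(0x9A00), 1))  # 30.0
```

Running a few known codes through a function like this is a quick way to confirm that the hex parsing and scaling in the script, or in a serial terminal session, are being interpreted correctly.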
What you consider to be a 'bloated UART' is much more than a UART in terms of functionality. It provides a way to use a PC serial terminal application to read and write HDL registers of any length in your configured FPGA device as it operates. Most importantly, the values written or read are in the correct bit order as the logic sees them; regardless of the endianness of the CPU in your PC. That's a pretty handy bit of 'IP' to have around for many purposes. The reason why I posted this project was to provide an alternative to Digilent's typical MicroBlaze based demos. Now those are bloated. Moreover, the sources in this project can be used to create a bitstream with any version of Vivado that has been released since 2016.2, which was the version that it was created on. I'm pretty sure that Digilent can't point to a single one of its demo projects that can make such a claim. I guess that Digilent's management would rather have their engineers spend their time modifying old project code to work with new tool releases than do work that creates income... not my problem; showing a horse where the water is and making it drink are two different challenges.
  22. I configured my CMOD-A35T from Vivado Hardware Manager on Centos using the bitstream that I just built. I used Putty to read the Artix XADC temperature sensor using the Python script as a guide. I didn't have any problems. Everything works; it's just a lot easier using a script. I added the dcp file as a source file in Vivado. My demo isn't meant to be used as a source for your project... just as proof that the CMOD-A35T UART does work. The demo doesn't use flow control because it's not required. If one wanted to send large amounts of data via a serial interface then it would be. How difficult this would be to implement depends on what your FPGA UART is connected to. BTW, I always make sure that Vivado Hardware Manager isn't running while using the CMOD UART. For me this prevents the OS from detaching the CMOD FT2232H endpoints. Usually I just use a TTL USB UART cable because it uses a Silicon Labs USB Bridge and that eliminates a lot of OS and application problems.
  23. I just downloaded the archive posted here: https://forum.digilent.com/topic/2866-cmod-a7-35t-demo-project/ onto my Centos 7 PC and created a bitstream using the free version of Vivado 2020.2. The only thing that I had to do was create a Vivado 2020.2 version of the clk_wiz_0 component instead of adding the CmodA735tDemo.xdc file to my project. The toplevel file has commentary to let you do this easily. I didn't encounter any issues. Just make sure that CmodA735T_Demo.vhd is set as the toplevel entity in your project. If you don't have Python 2.7 installed on your PC you can use Putty or whatever your favorite serial terminal application is, as long as it does 921600 baud. The README.txt file is reasonably informative about using the demo.
  24. What version of Vivado are you using? Since I posted the original project Xilinx has changed many things that break even PLL/MMCM IP. It's fairly easy to re-create that IP from looking at the source code. I haven't had problems adding dcp netlists created in earlier tool versions into projects compiled with later tool versions. I frequently check posts to projects that I've put in the Digilent Forums and will reply to anyone needing assistance. As for the Python script, Python 2 has been deprecated, so yes that code is obsolete ( unless you install Python 2.7 on your PC and follow the notes). Having Python 2 and Python 3 on Linux hosts can be problematic without some prep. [edit] BTW, my demo project doesn't require that the PC user use the Python script. The design contains a component that allows the PC user to interact with the FPGA design hardware using ASCII text. So you can dispense with the Python script and just use Putty or a serial terminal application. The Python script just makes interacting with the hardware easier. The project notes cover this. The reason that I keep responding to your posts on this thread is that there is no 'proprietary nonsense' affecting the UART connectivity on Digilent boards. FTDI UART Bridge devices support full-duplex operation as do Silicon Labs UART Bridge devices. Nothing about Digilent's hardware can change that. A good HDL UART certainly can do full duplex operation. What's left is OS issues and design implementation issues. This is where you are confusing me. You can't use a USB UART without software controlling the upstream USB Root Hub. If the hardware that your design needs to connect to doesn't use a processor and therefore just digital IO, then you just need to know how to connect the signals in a compatible manner. I only pointed to my project as proof that your problems have nothing to do with Digilent's board designs. 
You can certainly prove that your HDL UART works by connecting it in a loopback fashion or to a second FPGA board. USB UARTs are not UARTs, they are USB devices. In the project that I posted, there is no software in the FPGA design. The only IP is the MMCM Wizard and that can easily be replaced with an HDL macro. The only software in the project is for the device connecting to the FPGA design, which in this case happens to be a PC running Windows or Linux. I recommend that most designers start off using the Vivado MMCM or PLL Wizard to avoid jitter issues as the tool takes that into consideration. Once the design is functioning properly you can always replace the Vivado IP with an HDL macro using the same settings. The macro allows you to select clocking options that might have a negative impact on your design. Sounds like you are using the CMOD USB 12 MHz as your global clock for your project. To get your project going, you can use an MMCM or PLL to make that any clock. If you are using a 12 MHz clock for your UART logic that could be a problem for matching baud rates. A good HDL UART will be able to accommodate a good 15% baud period mismatch between DTE and DCE ends. There can be a lot of confusion about what a UART needs to do now that RS-232 is mostly obsolete and USB is how UARTs are connected in modern PCs. If your UART only operates at 9600 baud then clocking your logic at 12 MHz might be OK. Simulation is necessary and good, but only as good as your testbench. Effective testbench writing is not easy, especially when external hardware needs to be simulated. My sense is that you are missing something very basic. I've done many designs similar to yours, and with similar requirements. I typically have no IP in my design 'deliverables' unless DDR is involved. I don't use soft processors for hardware design. I think that I understand your design constraints. 
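On the 12 MHz clock question: whether a given logic clock can hit a standard baud rate comes down to integer-divider error. A quick sketch, assuming a simple integer baud divider (the usual HDL UART structure):

```python
# Check how closely a logic clock can hit a target baud rate with a
# simple integer divider, the structure most HDL UARTs use.

def baud_error(clk_hz, baud):
    """Return (integer divider, percent baud error) for a given clock."""
    div = max(1, round(clk_hz / baud))
    actual = clk_hz / div
    return div, 100.0 * (actual - baud) / baud

div, err = baud_error(12e6, 921600)
print(div, f"{err:+.2f}%")   # divider 13 gives about +0.16% error
```

So a raw 12 MHz clock lands within a fraction of a percent of 921600 baud; the more common trouble is oversampling headroom, since 12e6/921600 is only about 13 clocks per bit.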
I do frequently prototype a design using elements that won't be in the final version, if that speeds up development time. At some point in development all of my HDL designs use a USB UART and I've never had USB UART connectivity issues on many platforms from quite a few vendors. The CMOD is not a robust product for the type of project that you are doing, for many reasons; 'proprietary nonsense' has nothing to do with it. Implementing a UART interface in programmable logic using an HDL should be straight-forward unless there is some confusion about the basics. [edit] Debugging UART interfaces that have a USB UART connection can be problematic. Since recent PCs don't have RS-232 or even serial ports you are limited to USB UART connections. Problems caused by OS drivers and application ( e.g. Vivado Hardware Manager ) quirks can sometimes be avoided by using a separate PC for the design USB UART connection than for the configuration/debugging UART connection. This is especially true for Quartus designs as Windows frequently gets FTDI UART bridge devices confused. That's one reason why I like the Adafruit TTL USB UART cable as a debug interface. I can separate tool OS issues from OS driver issues. I heavily use a UART as a debug interface as it often can be more useful than the tool debugging resources.
  25. It's been so long since I've installed a new version of the tools with the intention of using a node-locked license that I forgot that it was the "full" version that I've been selecting. The wording on the official download site certainly has changed. I guess a 73 GB versus 53 GB install download is significant. It would seem to be easier to follow Intel's approach and separate the device support install from the tools install. Since the user selects the device support desired at install time this would make things a lot easier for both vendor and customer. Of course, that 20 GB or so of difference between the two versions might not be all device data. I don't have an "enterprise" license so I can't make a comparison.