
Rants about FPGA tool chain(s)



I'd like to continue an ongoing discussion that has now taken place across many forum threads, and to offer everyone a single place to put it that ... isn't off topic.  (Thank you, @JColvin, for inviting my ... rant)  :D

For reference, the tools I use include:

  • Verilator: for simulating anything from individual components (such as this UART) to entire designs (such as my Arty design, CMod S6 design, XuLA2-LX25 design, or even my basic ZipCPU design).  (Read about my debugging philosophy here, or about how you can use Verilator here.)  Drawbacks: Verilator handles Verilog and SystemVerilog only, and designs that Verilate cleanly don't always synthesize under Vivado.  Pros: compiling a project via Verilator, and finding synthesis errors, can be done in seconds versus minutes with Vivado.  Further, it's easy to integrate C++ hardware co-simulations into the result, to the point that I can simulate entire designs (QSPI flash, VGA displays, OLEDrgb displays, simulated UARTs forwarded to TCP/IP ports, etc.) using Verilator; while it might be possible, I don't know how to do that with any other simulation tool.  Time is money.  Verilator is faster than Vivado.
  • GTKWave: for viewing waveform (VCD) files
  • yosys: Because 1) it's open source, and 2) it supports some, though not all, of the hardware I own (such as the iCE40 on my icoboard)
  • wbscope (or its companion, wbscopc): for any internal debugging I need to do.  (Requires a UART to wishbone bus converter, or some other way to communicate with a wishbone bus within your design ...)
  • Vivado: for synthesis, implementation, and any necessary JTAG programming
  • wbprogram: to program bit files onto FPGAs.  I use this after Vivado has placed an initial load onto my FPGAs.  I also use wbicapetwo to switch between FPGA designs contained on my flash.
  • zipload: to load programs (ELF files), and sometimes bit files, onto FPGAs ... that already have an initial load on them.  While the program is designed to load ZipCPU ELF files, there are only two internal constants that restrict it to ZipCPU programs.
  • ZipCPU, as an alternative to MicroBlaze (or even Nios II, OpenRISC, picorv, etc.).  (GCC for compiling programs for the ZipCPU)
  • The only program above that requires a license to use is Vivado, although some of the others are released under the GPL

Further, while I am solidly pro-open-source, I am not religiously open source.  I believe the issue is open for discussion and debate.  Likewise, while my work has been very much Verilog focused, I have no criticism of anyone using VHDL.

To start off the discussion, please allow me to share that I just spent yesterday and today looking for a problem in my own code, given one of Vivado's cryptic error messages.  Vivado told me I had two problems: a timing loop, and a multiply-defined variable.  The problem turned out to be a single problem; it's just that the wires/nets Vivado pointed me to weren't anywhere near where the error was.  Indeed, I had resorted to "Voodoo hardware" (fixing what isn't broken, just to see if anything changes) to see if I could find the bug.  (I didn't find it; many hours were wasted.)  Googling sent me to Xilinx's forum.  Xilinx's staff suggest that, in this case, you should find the wire on the schematic (the name Vivado gave to the wire wasn't one I had given to any wire).  My schematic, however, is ... complicated.  Finding one wire out of thousands, or tens of thousands, when you don't know where to look can be frustrating, challenging, and ... not my first choice for finding the problem.  I then synthesized my design with yosys this morning and found the bug almost immediately.  +1 for OpenSource.  Time is money; I wish now I'd used yosys as soon as I knew I had a problem.  Did I implement the design yosys synthesized?  No.  I returned to Vivado for the final synthesis, implementation, and timing analysis.

If you take some time to look through OpenCores, or any other OpenSource FPGA component repository for that matter, you will quickly learn that the quality of OpenSource components varies from one component to another.  Even among my own designs, not all are well documented.  Again, quality may vary.  +1 for proprietary toolchains ... when they are well documented, and when they work as documented.

There's also been more than one time when I've had a bug in my code, often because I've misunderstood the interface to the library component it interacts with, and so I've needed to trace my logic through the library component to understand what's going on.  This is not possible when using proprietary components--whether they be software libraries or hardware cores--because the vendor veils the component in an effort to increase its profit margin.  Indeed, a great number of requests for help on this web site involve questions about how to make something work with a proprietary component (e.g. MicroBlaze, or its libraries) that the user has no insight into.  +1 for OpenSource components, in spite of their uncertain quality, for the ability they give you to find problems when using them.

Another digital designer explained his view of proprietary CPUs this way, "Closed source soft CPUs are the worst of two worlds.  You have to worry about resource use and timing without being able to analyze it".  (Olof's twitter feed)  In other words, when you find a bug in a proprietary component, you are stuck.  You can't fix the bug.  You can request support, but getting support may take a long time (often days to weeks), and it might take you just as long to switch to another vendor's component or work around the bug.  +1 for OpenSource that allows you to fix things, -1/2 for OpenSource because fixing a *large* design may be ... more work than it's worth.  ;)

Incidentally, this is also a problem with Xilinx's Memory Interface Generator (MIG) solutions.  When I added a MIG component to my OpenArty design, I suddenly got lots of synthesis warnings, and it was impossible for me to tell whether any were (or were not) valid.  +1 for OpenSource components, whose designs allow you to inspect why you are getting synthesis warnings.

I could rant some more, but I'd like to hear the thoughts others of you might have.

For example, @Notarobot commented at the end of this post that "using design tools introduces additional unnecessary risk."  I'd like to invite him to clarify here, and to invite anyone else to participate in the discussion as well.



Dear D@n

I appreciate your initiative in starting this discussion. It might help readers to learn from other people's experience. In fact, this is why I often read this forum myself.

In my comment I wanted to make the point that tools provided by vendors carry lower risk than open source tools. Why? Because those tools have been in development for many years and have been tested by thousands of designers, because the implementation process is tuned for these tools, and because there is technical support from the vendor.

Speaking for myself, I have very little time to experiment with other tools unless they offer me a substantial advantage. I use many open source tools, but not in HDL development. BTW, Xilinx uses Eclipse in the SDK, and that is an open source IDE; gcc is also an open source tool. I use Python for multiple tasks and love it for its efficiency.

Unfortunately, every tool has its own specific controls or GUI and thus requires time for learning, and sometimes corporate training, to use it effectively. In my experience I have needed to learn and use Matlab, Simulink, Altium DXP, the Altera toolchain, the Xilinx toolchain, and the Actel toolchain in relation to hardware and system development, not to mention C, VHDL, Java, Python, etc., as well as various other proprietary simulation software, design tools, and utilities needed for getting work done.

When I need to choose a commercial product I prefer the one that has support. This usually costs more, but it has saved my back and my projects many times. This was the key factor in choosing Digilent. Hardware is cheap these days; people's time is the greatest expense. Usually companies recognize this very well.

I think that every case is special and that the selection of tools should be based on the available budgets and schedules. So far I am satisfied with the quality of the Xilinx IP and tools. The availability of ARM processors on the Zynq and the block design methodology reduced the development time on my recent project way beyond initial expectations. It also provided transparency of the design and readable documentation. Now my colleagues can understand the design. I can't imagine achieving the same results using only HDL in the same time frame.

It should be mentioned that, undeniably, one of the benefits of HDL-only design is job security, since debugging or reusing multiple modules of poorly documented HDL code written by somebody in the past is a mostly fruitless effort, as well as a cruel one.

Thank you for reading, and I hope you find it useful.



Yes, I found your answer very useful ... you are helping me formulate my own thoughts on these topics.  Indeed, I found many of your points instructive, and agree with you on several:

  • Both you and I appreciate Digilent's support.  (Thanks guys!)  They've done a good job not only putting this website together, but also creating several pages for each of their products (a store page for purchasing, a resource page, a reference manual, and a schematic).  Recently, I've started to find links to data sheets for components on their boards ... a nice improvement.  (Thanks, Digilent!)
  • "Every tool has specific controls or GUI."  So true ... this is why I haven't done much with Xilinx's simulator.  With only one open source simulator (Verilator), I can simulate my logic across multiple platforms.  This is also the reason why I like to use a makefile to build my project(s): learn the commands you need once, write them into a script, then ignore it until you need to know more details.  Kudos to Digilent for making the Adept djtgcfg utility scriptable.

Rather than going over where we differ, though, let me share some of the things this discussion has helped me learn.  Specifically, OpenSource and proprietary vendors each have their niche, and that niche is determined by dollars.

  • Once a vendor's product has been sold, there's no more money left for maintenance.

This is why Xilinx announced the end of life for ISE, and didn't put Spartan-6 support into Vivado.  There was no money in it.  ISE will remain as broken as it ever was (for example, ISE doesn't support $readmemh to initialize block RAMs on chip).  As for IP, good luck trying to get new, updated, or fixed IP for your old S6 from Xilinx--it's just not going to happen.

On the other hand, there are a variety of OpenSource cores for the S6 that can still be used ... and remain supported.  I've even read about OpenSource toolchains for the S6, but I've never tried using one.

  • This rule applies to Digilent as well.  (No offense guys, it's a fact of life.)

Consider a recent request from a customer to have Digilent create a new feature in their software for the Nexys 3 (a Spartan-6 design, an end-of-lifed product ...)  There's no money in it for Digilent to create new/better software that will only support a product they are no longer selling.  What you have is ... what you are going to get.  (I can point you to the request I'm thinking of if you would like.)

The feature the customer requested (nearly) exists within OpenSource.  A small investment, or a touch of work on their own part, and it would exist.

  • "Buy product X.1!  It does everything product X.0 did, but without the bugs!" ... just doesn't sell well.

Sure, I get the point that bug fixes/maintenance may be required, but the money to pay for the developer's time comes out of the profits from a capability that the customer has already paid for.  Further, the moment the vendor stops selling product "X", support for that product will end.  (Unless the source code is released to the public.)

OpenSource is different--OpenSource is rarely funded on a product-by-product basis, but rather on a contribution-for-support basis.

  • There's no money in creating a toolchain that produces "better synthesis" results, especially if a poor synthesizer leads your customers to buy bigger and more expensive chips

I'm told that the only reason Xilinx's synthesis tools are as good as they are is because of Synplify giving them a run for their bottom line.

I'm excited to see OpenSource synthesis tools gaining on these vendors.

  • "Buy product X.2!  It has new features that you will never need!" ... doesn't sell well either.

The best example of this might be FPGA toolchains.  Why should you buy a "new and improved" synthesis tool set, when the one you have already synthesizes your designs for your hardware?  Xilinx's answer so far appears to be:

  1. To be able to synthesize for newer FPGAs
  2. To buy more supported IP cores.

Otherwise, why would anyone upgrade their Vivado version--especially if it risks their current designs no longer building?

Sadly, this is a broken part of Vivado.  The IP packages should be provided separately from the synthesis tool, and if you had the source of these tools, then you would be able to keep using old (working) packages, without being forced to update them when getting support for a newer device or newer set of IP packages.

  • Open Source can offer more choices than Vendor based products

As proof, just look at all of the CPUs Xilinx supports (MicroBlaze, PicoBlaze, ARM) vs. the CPUs created via OpenSource: lm32, picorv, OpenRISC, and many more.  (All of that is without mentioning the ZipCPU.)

  • Open Source tools can be cheaper than Vendor tools

Three examples:

  1. Xilinx's ChipScope vs. OpenSource alternatives, such as my own wbscope (although there are several others).
  2. Xilinx's SDK vs. the open source GCC support for other soft-core CPUs (such as the ZipCPU).
  3. GCC vs. the proprietary compilers I've purchased in the past (Intel, Microsoft, etc.).

My point is specifically this: it's hard to generalize that all OpenSource tools are poorly supported.  Even with Vendor based tools, your mileage might vary.  Some OpenSource tools have better testing and support than their proprietary counterparts.  For example, I tend to support the ZipCPU via on-line conversations with the individuals who choose to use it.  (GitHub support requests are a possibility too ...)  Indeed, I would argue that OpenSource tools have their niche--even within otherwise proprietary markets--such as GCC or Linux.



Hi D@n,

You described your reasons well, and I totally agree with what you are looking for from your point of view.

I am not that deep into FPGA development. I don't know how I managed to get Community super-star status! I respect your skills and knowledge and would never take on the challenge of creating a soft-core CPU, for example.

My position and involvement are different. For me the Zynq or other FPGAs are just components helping to accomplish project goals. I don't work in mass production, and the cost of a more capable board is fully justified if it saves development time. Component obsolescence is inevitable and can be managed by proper system design.

I guess this concludes my "presentation". Anything else would be repetition.

Thank you for reading!




Thanks for sharing!  If nothing else, you've helped me to understand more of the relationship between OpenSource and proprietary markets.  For that, I thank you.  ;)

To all others on this forum who may read this ... please consider this an open conversation.  I'd love to invite you to join in and express your views, understanding, or even your own experiences as well!  :D



About 90% of my posts to the Digilent Forums have been rants, of one form or another, on topics relative to your post. No one has encouraged me to post more rants, so I'll accept your invitation. I have been warned by Digilent staff not to post particular observations on certain subjects, like Xilinx IP and the board design flow.

As your perspective is Verilog geared I'll pass on anything Verilog specific; except to say that Verilog is more amenable to simulation... but I tend to write in VHDL.

As to Xilinx IP, like MicroBlaze, we're getting into that danger area where I've been told not to make my opinions known. But as your ZipCPU has been made an official topic I can say this. I agree with you about using a soft CPU that you have total control over and that isn't broken by Xilinx toolset version releases. I've used an Atmel-like CPU because it's compatible with a well known software toolchain. That's the key for me: a well supported, standard (like ANSI) software toolchain that lets me write code in C or C++. I happen to love the FPGA devices with embedded hardware ARM core(s). They still aren't an SOC, but they're getting closer. As of my last use of such a device the Xilinx ARM SDK is by far my favoured one. I understand that the ZipCPU is a labour of love and fully support it; I just haven't wanted to use it. I find that the good old state machine works best most of the time. When those get too complicated, or I need something that can be programmed on the fly (without reconfiguring the FPGA device), I can write a nice simple programmable controller. When the ARM makes sense, that's what I use. Just don't believe that you can easily port one FPGA/ARM design from vendor to vendor.

I disagree with @notarobot about his comment on job security. HDL-only designs are the most portable that you can write. As to poorly written "anything", whether C, Rust, Ada, an HDL, or a schematic... the poor soul who has to turn it into something useful is indeed cursed. Any company that allows a developer to provide it with crap (usually they push people into creating poor code and documentation, because they are too cheap and short-sighted to understand how to develop anything) deserves both the crap and the developer, who may be just incompetent or looking for "job" security. I've seen everything on this subject. The main thing to say about this is: if you demand high quality, are willing to pay for it, and have in-house standards that must be met, then you can find qualified people to provide that. If you want crap at the cheapest price, in the shortest amount of time, have no idea how to do development, and consider experienced engineers to be over-priced blow-hards who interfere with your way of doing things, then buying crap is a trivial pursuit and you will be well rewarded with those goals.

I use FPGA devices from a number of vendors and all of them have issues with their toolsets. This is particularly true when you mix in a hard CPU core like an ARM or an in-house soft CPU core. The experienced engineer will learn to minimize the headaches and land mines buried in the toolchains. The more control that you want over your work product, the more work you will have to do. But the upside is that the more work you have to do, the more you will learn, and the bigger your personal IP cache will grow. Follow the easy path and you will become just someone who can use a GUI for one FPGA vendor to do a limited number of things, if you have the money for their IP licenses.

That's all for my first reply.

PS There probably should be a special forum for this kind of topic.



You have the community super-star status because you have passed the 100-post mark. As @zygot has pointed out on a different thread, this can seem to indicate that one has a dizzying level of competence. (I will be the first to point out that I personally do not have a dizzying level of competence, even though I currently have the highest number of posts out of anybody on this Forum; that's merely because I've spent more hours on the Forum than anybody and joined back in September 2014.)

What I think should be done (and obviously haven't done, despite zygot suggesting it months ago XD) would be to assign new labels to the various post count milestones that don't give a misleading connotation (although the connotation generally isn't wrong), and leave it up to the individual to decide a particular user's competency level.

As a general clarification: I am not opposed to differing/unpopular/harsh-but-true/whatever-you-want-to-call-them opinions appearing on our Forum. The big thing I (and other Digilent people) want to make sure of is that any rants/other-synonyms are in better suited locations, as opposed to confusing or moving away from the original user's question; hence the origin of this thread to begin with. I do think it would be good to have this in a different sort of Forum than "FPGA". I'll create one and move this thread, as well as changing the milestone labels.

Please let me know if you have any questions.




Hmm ... let me grab some fun quotes from your post above:

  • "Verilog is more amenable [than VHDL] to simulation"

I would find this, if true, to be rather sad.

Really, I have no experience in VHDL.  I've always assumed that the two languages are roughly equal in ability and support.  I would be disappointed, on behalf of the VHDL user, if the tools necessary to support a fully integrated simulation weren't available to him.  (ghdl + cocotb, perhaps?)

  • "I find that the good old state machine works best most of the time. When those get too complicated or I need something that can be programmable on the fly ( without reconfiguring the FPGA device ) I can write a nice simple programmable controller"

This was also my own thought when building the ZipCPU.  Hence, the ZipCPU was designed to be a "nice simple programmable controller" (that could run Linux).  It hasn't achieved Linux yet (needs the MMU), although I have an MMU in testing.  (It's been in testing for almost a year now)

  • "HDL only designs are the most portable that you can write."

Can I quote you on this?

  • "Follow the easy path and you will become just someone who can use a GUI for one FPGA vendor to do a limited number of things if you have the money for their IP licenses"

I'm looking forward to a separate "rant" about GUI based design approaches.  Since @JColvin was kind enough to give us our own section of the forum to rant within, that's probably a good topic for a separate rant.  I'm going to wait a touch longer on that rant, so I can collect enough information to sound ... informed when I rant.  Right now, what I know is that the GUI based design approaches seem to create the greatest number of help requests.



@D@n ,

Two things to note about VHDL and simulation. Integers are signed, and the largest possible integer in VHDL is a 32-bit value; after all, who'd ever need a signal wider than 32 bits? :o If you want to print a 32-bit unsigned std_logic_vector to a file during simulation, this is a hassle, since you need to convert it to an integer first. Of course no one would ever want to do that, would they? And if you want access to a signal buried in a component a level down in the hierarchy, you need to modify the ports in the design to see it. ModelSim has (at least as of the last fully licensed version I used) a tool called Signal Spy to help, but Verilog does this innately.

In my world a simple controller doesn't need an MMU or aspire to run Linux; but a hard ARM does, and it has the infrastructure, toolchain, and wide community support to do it well. If you can find the HDL source for a soft CPU core that's compatible enough with a vendor's software toolchain for one of their CPUs, then that provides a flexible alternative that's hard to pass up for some applications. Again, few micro-controller vendors are going to release software tools that aren't backward compatible with billions of devices in the field; at least not on purpose.

I'm happy to defend the statement that HDL-only designs are the most portable that can be written. Also, I've yet to have a version of any vendor's synthesis break an HDL-only design, as there's no IP to "upgrade", and I've yet to see an HDL version that wasn't backward compatible. And I've yet to have such a design work on one vendor's FPGA but not all of them. Of course, when an HDL-only design instantiates PLLs, 9K block RAMs, or similar features that are vendor specific, then there will be some alterations; but these are quite manageable and affect only a small number of source files.

Actually, you can quote me on anything that I write; either to make a point or have a good chuckle at my expense. But if you disagree you better have a cogent argument to make pointing out an error or imprecision. I have no problem being "educated"... it's a life long condition.

As to any part of the design process that requires using a GUI: let me give you a place to start your rant that you probably haven't thought of.

I once didn't get a job at a major defence contractor because I happened to mention to one of the interviewers, who was clearly inexperienced at what she was doing, that I never use the floor planning GUI to create my constraints. Her reply was that I must never have done a complex or high speed design. Though I tried to explain the fallacy of her logic in reaching that conclusion, I clearly wasn't having success in the allotted time. I prefer learning the syntax and using a text editor. Most FPGA vendors offer GUI tools for assigning things such as pins, timing, etc. Some, like Altera's Quartus, make it really hard to avoid the GUI approach. Spending an hour or so repeating the same tedious GUI entry for every design targeting the exact same board, using mostly the same pins, is beyond my patience; and as a general rule I'm a pretty patient guy by anyone's measure.


@D@n ,

Here's a secret; I'm whispering because this is just between you and me:

At places where they do a lot of quality FPGA development work no one ever brings up a GUI for anything. All of the toolchain invocation is done using Perl and Tcl/Tk. Shhhh. Don't tell anyone....



I'd have thought that your new thread would have been a lot more "busy". Too bad, as you raise a lot of great issues that anyone doing FPGA development, ( and especially any vendor of FPGA devices or board level product ) ought to be concerned about. Perhaps there just aren't that many people doing useful things who use these forums. I don't want to dominate the discussion and hesitate to post this; but I do want to see activity on this thread.

You are correct about the cryptic error messaging in the Vivado GUI. The Vivado GUI has so many really bad program design decisions that it should have a rant site all to itself. I can say that, if you are willing to look for them, there are log files for each process, created **somewhere**, that often help and don't make it into the GUI. Still, I hate wasting 10-15 minutes trying to figure out what exactly it is that a tool is having issues with. And this always happens when I'm using the more esoteric features of VHDL. The simulator is the worst, and I frequently just go ahead and do the synthesis step before even trying to compile the simulation executables. I do miss the days when Xilinx was willing to pay the ModelSim extortion... not that ModelSim is all that great. Those of us who've been in the embedded DSP and micro world have long had to deal with assemblers that were lucky to point even remotely to the offending code line, or the real problem; so I guess that I've been trained to accept it as somewhat normal.

I use my own editor to compose source files and Vivado is not at all happy to oblige me with this "quirk". I can't tell you how many times Vivado bites me in the, uh.. lower posterior, by doing nothing at all ( and taking a good long time doing it.. ) when I think that I'm re-running synthesis or place and routing tools, because Vivado hasn't yet discovered that the source files have changed. If there's a way to make Vivado behave in a way that's friendlier to my personal work preferences, I haven't found it yet. Perhaps there are readers who can offer advice. Then again, it took me many years to figure out that if I want to use Microsoft Word then I have to do things its way or be very unhappy and frustrated.. and not get what I want. I always thought that software should serve the user... not the other way around.

As to Synplify: yes, there are a lot of companies that require it for producing production configuration files regardless of the FPGA vendor. I'm surprised that no one has bought it out. Some places believe that it encourages better coding syntax, at least as far as VHDL is concerned. When this company first appeared, and was just a small group of people, some of whom did both development and marketing, it still had better support and responsiveness than the FPGA vendors. I've also run into synthesis bugs where the tool, in this case Quartus, just plain got VHDL wrong. The only way to figure it out was to look at the logic "schematic" in the RTL view, find the part that didn't reflect the source VHDL, and replace the line with something different but equivalent.



So ... since you've mentioned MS Word, I thought I might share an observation from when I was studying for my PhD.  The graduate students at my school split roughly into two groups: those that used LaTeX and those that used that other thing.  Both groups wanted to create rather complex documents (a thesis, a dissertation, etc.) that stretched the limits of whichever tool they were using.  Invariably, the tool would crash and break during their graduate years.  Indeed, it seemed like every student had to experience a tool crash at some point.  Those who stored their documents in a binary large object (BLOB) format were unable to figure out what went wrong and recover their documents.  Their best hope was to make frequent backups and hope (and pray) that the system never crashed, or that if it did the damage was minimal.  Those who kept their documents in text files (i.e. LaTeX), on the other hand, could often debug the files and find the problems.

I think this same thing would apply to the toolchains.  Those items stored in text, and in particular text with a known and therefore editable format (tcl written to a changing standard doesn't count), are going to be easier to recover and fix when Vivado messes up.

You know, I've even had that happen to me once or twice.  I rebuilt my OpenArty project once with a newer Vivado.  Vivado croaked on the old binary files left around from the older version.  Did it croak on my Verilog files?  No.  It croaked on its own generated files.  I even posted to Xilinx to find the bug.  They couldn't figure it out.  Eventually, I wiped the whole Vivado project directory clean and restarted with my sources.  Once I did that, I had no more problems.

Score +1 for Verilog.  Score: minus (my time) * (my cost per hour) for Vivado's IP




It's been clear to me that the people, no doubt cheap labour sub-contracted companies, who maintain the Vivado source are doing a lot of very bad things that might make sense to the non-technical person but are a huge issue for the users. One thing is that it maintains database files in memory... and often confuses itself. If you are crazy enough to try and run multiple instances of Vivado because you are working on two interconnected projects at once, or even Vivado and ISE at the same time, you should expect weird awful crap to hit your PC fan and lose hours trying to recover. Early on I lost so much work when first trying to use Vivado. The only thing that forced me to use it was the end of ISE. Thankfully, Vivado doesn't support older device families so I can go back to ISE from time to time. I'm not an ISE fan by any means. In fact ( as far as my experience goes ), ISE and Xilinx documentation sold more Altera silicon than Altera marketing ever could. Oh, did I forget to mention that this is strictly an opinion piece? I just want to be helpful to anyone with ears (OK, so eyes...) and a willingness to use a bit of grey matter to process the content.



Altera (Intel) and Actel still offer a free version of ModelSim, and it's my default tool for simulating HDL that doesn't have Xilinx-specific components like PLLs, etc. At the very least, ModelSim is better at directing you to errors; or should I say that it takes less time getting past syntax and dumb errors than the ISIM versions do. Even the free version has features not available in ISIM.

I've built a number of my own tools for debugging, such as versions of the UART-based debugger I posted in the Project Vault. For large volumes of data I use a USB port. I was unhappy when Digilent abandoned the Cypress CY7C68013A USB 2.0 interface, as I had created my own optimized Adept-compatible HDL interfaces for it. I highly recommend that anyone wanting to spend a significant amount of time doing FPGA development build up their own private library of debug/verification IP and software. For Nexys Video and Genesys2 users, who have that very nice FMC connector, there are cheap USB 3.0 development boards. While I'm on the subject of FMC connectors, I highly recommend that anyone with an FPGA board sporting one buy the Xilinx 105 debug board that I rely on for the Differential PMOD Challenge project. The internal data reader/writer provided in that project is limited but very useful.
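To illustrate the "private library of debug software" idea, here is a small sketch of the host side of a homebrew UART debug port. The frame format (1 opcode byte, 4 address bytes, 4 data bytes, 1 checksum byte) is entirely hypothetical; it is not the wbscope or Adept protocol, just the kind of helper worth keeping around:

```python
import struct

# Hypothetical opcodes for a toy debug-bus protocol (not a real tool's API)
OP_READ, OP_WRITE = 0x01, 0x02

def encode_cmd(opcode: int, addr: int, data: int = 0) -> bytes:
    """Pack a debug command into a 10-byte frame with a simple checksum."""
    body = struct.pack(">BII", opcode, addr, data)
    csum = (256 - sum(body)) & 0xFF      # bytes of a valid frame sum to 0 mod 256
    return body + bytes([csum])

def decode_cmd(frame: bytes):
    """Validate the checksum and unpack (opcode, addr, data)."""
    if len(frame) != 10 or sum(frame) & 0xFF != 0:
        raise ValueError("bad frame")
    return struct.unpack(">BII", frame[:9])
```

On real hardware these frames would be pushed through a serial port (e.g. with pyserial); the codec itself is host-testable with no hardware attached, which is much of the point of keeping such code in a library.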

I use OCTAVE and SCILAB to process debug data for verification purposes, though sometimes, since the USB interface requires C/C++, I just do the analysis in C. This is the only way to test algorithmic code that I know of and can afford. Matlab, unfortunately, isn't interested in selling product to small companies or individuals unless they are matriculated students, in academia... or very wealthy. But the open source alternatives are pretty good. One positive thing about using, say, OCTAVE is that you are "encouraged" to implement your own functions at a low level. The canned high-level functionality provided by Matlab certainly lets you do complicated stuff quickly, perhaps even in a reasonable amount of time... but you learn a whole lot more having to worry about the minutiae and small details that the high-level code hides from view.
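A minimal example of the "implement it yourself at a low level" point, written in Python rather than OCTAVE: a boxcar (moving-average) filter spelled out longhand, the sort of thing you might run over debug samples captured from an FPGA. A canned `filter()` call would do this in one line, but writing the accumulator out forces you to confront details such as the startup transient that the canned call hides:

```python
def moving_average(samples, taps):
    """Return the running mean over a sliding window of `taps` samples."""
    out, acc = [], 0
    for i, s in enumerate(samples):
        acc += s                       # add the newest sample to the window
        if i >= taps:
            acc -= samples[i - taps]   # drop the sample leaving the window
        # Divide by the actual window size so the startup samples are
        # handled explicitly rather than silently zero-padded.
        out.append(acc / min(i + 1, taps))
    return out

# e.g. moving_average([0, 2, 4, 6], 2) yields [0.0, 1.0, 3.0, 5.0]
```

The running-accumulator form also mirrors how the same filter would be built in hardware, which makes it a natural golden model for checking fixed-point HDL output.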

I've not found a lot of useful open source tools for VHDL simulation or synthesis. I avoid Impact to the extent possible, and I do occasionally use the Vivado ILA... but the Vivado hardware manager does not play nicely with other tools using the JTAG port, and it usually causes more headaches than help if left running for any significant amount of time. I use the Adept Utility for Windows with Digilent boards whenever possible, as it's robust and does play nicely with other software using the JTAG. I do wish there were a Linux version. So far my pleas for one have been loudly ignored....




Both of those are new to me, but I will check them out. Here's something to keep in mind when using an HDL: it's not just the expression of logic in a particular FPGA's LUT or CLB resources that matters. For large or complicated designs, place and route to meet timing becomes critical, and I tend to rely on the FPGA vendor to know how to do that best. Xilinx and Altera provide their own solutions; Actel uses a version of Synplify (perhaps a good enough reason to play around with their devices...). How a simulator interprets your HDL and how a synthesis tool interprets your HDL are not always the same.

While I do a lot of development using FPGAs, they are but one way to accomplish a goal, and sometimes not the best choice. Having so many tools to become adept at and only so many hours in a day, I am usually captive to doing what I'm familiar with. There are so many places to explore beyond one specific technology.... With or without significant pain, the toolset offered by an FPGA vendor is generally the path of least resistance. I don't know of any affordable, vendor-agnostic synthesis/place-and-route tools that offer one learning curve and reliable results for someone like me.


To all,

I am not sure that portability and job security are related. Let me clarify this.

In my experience, only large corporations have the resources for every task of an elaborate development process: requirements specification, test definition, code development, testing, documentation, version control, etc. Small companies and R&D projects have to cut corners to meet deadlines, mostly in documentation and version control. They typically have one FPGA designer per project, and if he leaves, the company will lose the ability to upgrade or change the design for a long time, because he might have been the only one involved in it. This makes the designer valuable and guarantees a job at least for the duration of the project. Only a very limited number of electrical engineers know HDL well enough to use it.

In my understanding, portability of code means the ability to transfer it to a different platform with minimal modifications (rewriting all the constraints, at least). How does this help the company mitigate the loss of the designer?




I guess that we have remarkably different experiences and perspectives.

Quite a while ago I was part of a small startup that designed and built a 4D ultrasound cardiac diagnostic machine. It was a "clean room" effort using five engineers, a manager, and eventually a couple of contractors. No one had any medical imaging expertise. We started in the fall of one year and had images by the following summer. Except for a DSP board, we designed and built everything in house; we even designed our own sensors. While we were beginning clinical trials, one of the major vendors of medical imaging equipment sent some people to evaluate us for possible patent licensing. They were overheard in the bathroom commenting about how shocked they were to see actual working equipment. They figured that, since they were in the middle of a multi-hundred-million-dollar effort to do what we were doing, using hundreds of engineers across a few research sites, we would be presenting a PowerPoint show about what we intended to do. We didn't cut corners, and we didn't skimp on documentation; we just worked really long hours, worked efficiently, and worked smart.

If you have unrealistic goals with unrealistic deadlines you simply can't do good engineering. If you don't do good engineering you simply cannot create a good product. I'll pass on the whole topic of how the money people took over technology ( and indeed democracy )... this would require a number of thick books to do adequately.

I agree that there is an art ( heavily dependent on knowledge and experience ) in knowing how much documentation and verification and formal procedures are necessary to accomplish a task. I also know, with absolute confidence, that the basic fundamentals of good  engineering haven't changed at all in the past 35 years. The technology has. The complexity has. The cost structure allowing small companies to compete with wealthy entities has gotten a lot more unfavourable. The decision making has gone from the technically competent to the newly minted MBA guild. But stupid cheating is still stupid cheating. Lies are still lies. If a company starts off with lies about how much it will cost to execute a contract, underbids and wins it, then it's only a matter of time until it will be forced to suppress good engineering practice and fail... and it's always the engineer left holding the bag. I've seen it many many times.

I want to believe that it's still possible to create something new, useful, well designed and well built. I will never believe that if you look hard enough that there's a diamond crapping unicorn out there waiting to be snagged. The bankers and people who see our life savings as their own personal gambling fund believe this and make good money doing so... but their gain is a loss for everyone else.



In my last post I realize that I got pushed from the edge of the thread's topic of FPGA development tools into a different dimension.

I do feel compelled to say that the back and forth between me and @Notarobot is healthy and presents a totally different but vital concept. That concept is how important it is to have discourse; most importantly with people of different backgrounds, experiences, and perspectives than you. For those capable of growing, this is how one breaks out of the constraints and limited view created by those very things. Just one man's observation.

So, I guess that I fixed one wrong by doing it twice.... Now, about how to use those tools...


I was recently presented with a quick business turn problem (no time to design/build a board).  Looking over all of the boards and capabilities available to me, I was disappointed to discover I might need to switch vendors to find the I/O capabilities and size I was looking for.

Then I got to thinking about all of the work that I had done with Xilinx, and Xilinx's tools.  Sure, I've worked very hard to make my stuff tool independent, but some of the ICAPE2 things I've been able to do with Xilinx, or even the USB interface--imagine if I only had JTAG to use to talk to my FPGA, and that through a vendor's library that didn't necessarily support any of my applications.  Ouch.  I'd have to rethink my entire interface for interacting with the board.

Now, just think of how much worse it would be if I had proved my competence at digital design using boards and a proprietary toolset from one Vendor, only to get a job (in my case a contract) requiring me to use boards from a different vendor. 

The thing is, the guy who is going to hire you doesn't care whether you are used to using Lattice, Altera, or Xilinx. To the hiring guru, you are an FPGA designer; you should be able to use all of the above. Now, that said, do you want to be the one to delay delivery of a product to a customer because your digital design skills only worked with one vendor's tools?




You write: "The thing is, the guy who is going to hire you doesn't care whether you are used to using Lattice, Altera, or Xilinx. To the hiring guru, you are an FPGA designer; you should be able to use all of the above. Now, that said, do you want to be the one to delay delivery of a product to a customer because your digital design skills only worked with one vendor's tools?"

Boy, that's not been my experience. When I started out in my first job, companies hired engineers because, by making it through a curriculum that discarded two-thirds of those wanting an EE degree, you had proved (in theory) at least some level of intelligence and work ethic. Companies hired engineers expecting them to "learn the business" and find a place to contribute. A good EE could do analog, digital, or anything else needed to make money. My, how that changed in only a few years. Engineers became just like the components used to build products: replaceable. Companies used to train engineers to work to their standards; now, few do. They want someone who has already done, for another company, the exact task they need done. Engineers became so specialized that even if an interviewee had performed an almost identical task, but using, say, a different micro, even one from the same vendor they use, the prospect was a "bad fit". Forget knowledge and experience. Employers started wanting people who could be put into a chair tomorrow, finish the work in a few months, and be let go once successful with that task. Many companies keep a small staff of engineers who are good at keeping their jobs and not much else; the real work is basically taking other companies' efforts, using "temporary" employees and contractors. It's been a long time since I've run into a company that cared about my knowledge or experience. If they don't think that I'm the immediate fix to their current deadline problem, they aren't interested.

Here in the east, most employers that I've run into are Xilinx houses or Altera houses, though some try to play one off the other for a while in an attempt to help with silicon pricing. There's a small part of this that makes sense: most of the work is knowing the idiosyncrasies, bugs, flow, etc. of a particular vendor's tools. When I started engineering, most of the work was engineering; now, 30-40% is toolchain related. (Like how I just got us back on topic???) The same goes for software development. (And I'm off again...) I've interviewed with large companies that use one vendor, have 100+ FPGA engineers, have hired Xilinx FAEs and Xilinx engineers, and still waste a high percentage of development time struggling to make timing in a shippable product. I've run into companies that have abandoned FPGA devices because management can't understand how to do FPGA development. I've been cast aside for work because I hadn't used a particular FPGA vendor's tools for months, or because I hadn't used a particular device (do you realize how many companies can afford to use Stratix or Virtex? Hint: not a lot, so good luck getting that experience).

Still, your point about using a vendor-specific element like ICAPE2 is important to keep in mind. Block memory and PLLs are less of a problem. Transceivers are complicated enough to cause problems, and not all vendors' transceivers are what you may be led to believe they are. Straight HDL is completely portable from vendor to vendor, but a device is often chosen for price, special features, the amount of block RAM, etc., and those are vendor and device specific.
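The block-memory point deserves a concrete illustration: rather than instantiating a vendor primitive (RAMB36E1, altsyncram, and so on), the portable approach is to write a synchronous-read memory and let each vendor's tool infer its own block RAM. A minimal sketch (inference rules vary per tool, so check your synthesis guide before relying on this exact pattern):

```verilog
// Vendor-neutral single-port RAM, written for inference rather than
// primitive instantiation.  The registered (synchronous) read is what
// allows synthesis tools to map this onto block RAM.
module simple_bram #(
    parameter AW = 10,            // address width: 2^10 = 1024 words
    parameter DW = 32             // data width
) (
    input  wire          i_clk,
    input  wire          i_we,
    input  wire [AW-1:0] i_addr,
    input  wire [DW-1:0] i_data,
    output reg  [DW-1:0] o_data
);
    reg [DW-1:0] mem [0:(1<<AW)-1];

    always @(posedge i_clk) begin
        if (i_we)
            mem[i_addr] <= i_data;
        o_data <= mem[i_addr];    // read-first behavior on a write collision
    end
endmodule
```

Whether the tool maps this to block RAM or distributed RAM, and what happens on a simultaneous read and write to the same address, are exactly the per-vendor idiosyncrasies discussed above.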




The experience you described is R&D. The amount of requirements for that type of work is not comparable with production of a system in which the FPGA is just one piece of equipment, where every requirement must be defined and tested, and every requirement change must be documented and traceable. I assume you didn't have the "luck" to work inside the large-corp mechanism.

I wish recruiters were looking for gurus; the reality is way more pragmatic. I totally agree with @zygot.

Thank you for upgrading me to a prolific poster :D, feels great!



This topic is now archived and is closed to further replies.
