Arty A7: Is backbone constraint required?


jarvis

Question

I'm seeing mixed instructions across the various Arty A7 examples online and on this forum:

I have the 100 MHz sys_clk pin directly tied to the MIG's sys_clk_i port (nothing else trying to use this clock directly), then the MIG uses ui_clk for AXI/Microblaze stuff, and the MIG's ui_addn_clk goes to a clock wizard for eth_ref_clk.

When I build, Vivado complains about sub-optimal placement, indicating that these cells should be in the same clock region but are not: specifically sys_clock_IBUF_inst (IBUF.O) locked to IOB_X1Y76, and the MIG's plle2_i (PLLE2_ADV.CLKIN1) locked to PLLE2_ADV_X1Y0. (Was this a schematic design oversight?)

It says I can add the constraint "set_property CLOCK_DEDICATED_ROUTE BACKBONE [get_nets sys_clock_IBUF]" to work around this, but it warns that this is highly discouraged.

I just wanted to confirm that adding the constraint is indeed what I must do, or if there's a different/best practices solution?
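For context, here is how the tool-suggested workaround would look in my XDC file (the net name is the one from my critical warning; anyone else's may differ, so copy it from your own log):

```tcl
# Tool-suggested workaround: allow the clock to reach the PLL over the
# CMT backbone instead of failing on sub-optimal placement.
# The net name below comes from the critical warning in my build log.
set_property CLOCK_DEDICATED_ROUTE BACKBONE [get_nets sys_clock_IBUF]
```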


3 answers to this question

All FPGA devices have clocking regions, and limitations on their clocking infrastructure.

Intel devices have historically been more restrictive than AMD/Xilinx devices with regard to clocking options. That's why most Altera/Intel development boards supply clocks to more than one region, often using one external clock module and a clock buffer with more than one output. Digilent FPGA boards are designed to be cheap, so they generally have only one external clock source. The problem with DDR is that its signals require a Vccio well below what general-purpose IO signals need. It would have been nice if they had provided a separate clock for the DDR IO banks, but that would make the boards a bit more expensive. Is this a design shortcoming? One could argue against a lot of the design decisions made for Digilent FPGA boards, where a bit of extra cost would have made the board substantially more useful. That's for another discussion.

My sense (at least I don't remember running into this issue in designs with older tool versions) is that ISE and early versions of Vivado did not treat "sub-optimal clock module placement" as worthy of a bitgen error, but recent versions of Vivado do. So the only way to work around the board design limitation is the suggested constraint. A sub-optimal situation doesn't mean that you can't produce useful FPGA applications.

Designing an FPGA board that is optimized for one specific purpose allows for the possibility of optimal performance. Designing a general-purpose FPGA board, especially one that's cheap and designed to work with PMOD add-on boards, pretty much dispenses with the notion of optimal performance.

[edit]
I realize that I could have provided a better answer to your question.

If you really want to know how clocking works in 7 Series devices, you should read UG472, the 7 Series FPGAs Clocking Resources User Guide. Any idiosyncrasies of Spartan-7 devices should be covered in the device datasheet. The guide informs you about clocking regions, clock buffers, clock trees, etc., plus the rules for using a clock across regions. You can also learn about the CMT backbone there. It's somewhat complex and involves the clock-capable input pin assignments that board designers select.

I will note that the user experience with Vivado IP that uses an AXI bus may be quite different from that of someone using the same IP with a native interface. Likewise, the experience with a Vivado-managed design flow like IPI may differ from that of the HDL designer.

When you instantiate an MMCM or PLL in your design, you can drive the input clock with an MRCC pin, an SRCC pin, or a clock buffer. Using a pin directly restricts the MMCM placement. If you use one of the limited number of global clock buffers and instantiate a specific buffer explicitly (rather than relying on the tool to infer one), you might end up with a better MMCM placement. Generally, Digilent FPGA boards use Multi-Region Clock Capable (MRCC) pins for the external clock modules and oscillators in their designs. Even then there may be restrictions, documented in the AMD/Xilinx Clocking and SelectIO user guides, that can determine your design choices. Generally, using the CLOCK_DEDICATED_ROUTE BACKBONE constraint will not be a problem for how the bitstream works on hardware.

Edited by zygot
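To sketch what explicit buffer instantiation can look like, here is a minimal Verilog example; all module, instance, and parameter values are illustrative assumptions, not taken from the MIG or any Digilent reference design:

```verilog
// Minimal sketch (assumed names): take the board clock through an
// explicitly instantiated global buffer before driving a PLL, rather
// than letting the tool infer an IBUF-to-PLL connection that locks
// the PLL near one clock region.
module clk_infra (
    input  wire sys_clock,   // e.g. the 100 MHz oscillator on an MRCC pin
    output wire clk_out
);
    wire sys_clock_ibuf;
    wire sys_clock_bufg;
    wire clk_fb;

    IBUF ibuf_i (.I(sys_clock),      .O(sys_clock_ibuf));
    BUFG bufg_i (.I(sys_clock_ibuf), .O(sys_clock_bufg));

    // PLLE2_BASE: 100 MHz in, x10 to a 1000 MHz VCO, /10 back to 100 MHz.
    PLLE2_BASE #(
        .CLKIN1_PERIOD (10.0),
        .CLKFBOUT_MULT (10),
        .CLKOUT0_DIVIDE(10)
    ) plle2_i (
        .CLKIN1  (sys_clock_bufg),
        .CLKFBIN (clk_fb),
        .CLKFBOUT(clk_fb),
        .CLKOUT0 (clk_out),
        .RST     (1'b0),
        .PWRDWN  (1'b0)
    );
endmodule
```

Whether this actually improves placement depends on the device and design; the point is only that driving CLKIN1 from a BUFG gives the placer more freedom than a direct pin connection.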


Well, I'll play the devil's advocate here: if you put a DDR chip on a board, then obviously at least one application goal is to use the MIG/DDR memory controller, and if that's the highest-speed device on the PCB, I'd think extra care would be taken to make sure it follows all the rules. But I think I get your general gist: these boards were a quick and cheap design, and extra design care comes with extra development costs.

Is the cost of an extra oscillator really a valid point? Wouldn't the optimal fix be to just route the existing oscillator to a pin in the proper bank? Or is there some other downside to doing that?

I'm happy to accept that "it is what it is", but really my question was: What exactly is the official/accepted workaround here? Just include the BACKBONE constraint, or something else?

