
MMCM Configuration


hlittle

Question

Hi,

Can someone explain in simple terms how the Clocking Wizard chooses the suggested configuration to produce a given frequency out of the many possibilities? I understand the calculation and constraints, but my question is really, given two valid configurations that will produce the desired output frequency, what are the rules of thumb to choose one configuration over another?

How is the output jitter calculated? How can we minimize output jitter for a requested output frequency?

Thanks


9 answers to this question

Recommended Posts


PG065 provides some details about the Clocking Wizard. UG472, the 7 Series FPGAs Clocking Resources User Guide, also has some useful information. Obviously, the place to direct a question like "How is the output jitter calculated?" is the AMD/Xilinx Community site. I doubt that you'll get the answer that you are looking for, but it might be worth trying.

Jitter isn't a single specification; it's more of a family of related specifications. The Clocking Wizard deals with period jitter.

Obviously, it's possible for different sets of multiply and divide ratios to create the same output clock frequency from a particular input clock frequency. A lower VCO frequency would likely have lower power dissipation than a higher one. You also have the choice of MMCM or PLL.
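To make that concrete, the basic relationships are easy to write down (a quick Python sketch; the 600-1200 MHz VCO range in the comments is for an Artix-7 -1 MMCM and varies by device and speed grade):

# MMCM frequency relationships (7 Series naming):
#   f_vco = f_in / DIVCLK_DIVIDE * CLKFBOUT_MULT  (must stay inside the
#   device's VCO range, e.g. 600-1200 MHz for an Artix-7 -1)
#   f_out = f_vco / CLKOUT_DIVIDE
def mmcm_freqs(f_in, d, m, o):
    f_vco = f_in / d * m
    return f_vco, f_vco / o

# Two different settings, same 100 MHz output, very different VCO frequency:
print(mmcm_freqs(100.0, 1, 8, 8))    # (800.0, 100.0)
print(mmcm_freqs(100.0, 1, 12, 12))  # (1200.0, 100.0)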

If Xilinx has ever published an exhaustive description of how the input jitter filter works, I've not come across it. There are a lot of details about programmable logic that vendors might see no upside, and a lot of downside, to publishing. Engineers usually don't make this determination.

Anyone can read through the vast quantity of documentation, errata, and engineering notices to find information that isn't easy to run into. Your question would seem to be one that is best answered by the source rather than by random posters like me. If you work for a company that is an AMD/Xilinx customer, there's the possibility of getting information, through an NDA, that isn't going to be made available to the general public.

Edited by zygot


Hi @hlittle

If you start asking questions about jitter and how to optimize it, it suggests you're doing a design where this kind of thing is really quite critical. Can I ask why?

I have had some success in the past by measuring behavior, rather than trying to do exegesis from the (limited) documentation provided by Xilinx. I'm lucky enough to have a Keysight 53230A at my disposal that can measure jitter-related parameters in the picosecond range, and there you indeed see differences in behavior when testing multiple configuration options that produce the same frequency. There is no substitute for just looking at how the damn thing works in real life, but the equipment to do that is unfortunately quite expensive. If you do this for work and it's important, perhaps you can pitch the acquisition of such a device.

Personally, I do not rely on the clocking wizard in Vivado. Instead, I took a deep dive into the MMCM/PLL documentation provided by Xilinx to get a useful-enough mental model of how these things work under the hood, and I do my instantiation of MMCMs and PLLs explicitly in my VHDL code, setting all parameters by hand. I also wrote a few Python scripts that let me figure out different options for how to make clock frequencies given the resources available -- oftentimes you will need to make chains of multiple MMCMs/PLLs. As it turns out, most of the time you end up with merely a handful of serious options, and if low jitter is truly important (which it often isn't) I just measure to select the most appropriate one.
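A stripped-down sketch of what such a script can look like (not my actual code; the ranges and the 600-1200 MHz VCO window are the integer-mode Artix-7 -1 MMCM limits, so check the datasheet for your own part):

# Enumerate integer MMCM settings that hit a target output frequency.
# Sketch only: a real script would also cover fractional CLKFBOUT_MULT_F
# (steps of 0.125) and the fractional divider available on CLKOUT0.
def enumerate_configs(f_in, f_target, vco_min=600.0, vco_max=1200.0, tol=1e-9):
    hits = []
    for d in range(1, 107):            # DIVCLK_DIVIDE: 1..106
        for m in range(2, 65):         # CLKFBOUT_MULT: 2..64
            f_vco = f_in / d * m
            if not vco_min <= f_vco <= vco_max:
                continue
            for o in range(1, 129):    # CLKOUT_DIVIDE: 1..128
                if abs(f_vco / o - f_target) < tol:
                    hits.append((d, m, o, f_vco))
    return hits

for d, m, o, f_vco in enumerate_configs(100.0, 50.0):
    print(f"100.0/{d}*{m}/{o}  (VCO {f_vco} MHz)")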

 



Thanks for the info. I have read about the clocking resources, and I have written an app that can enumerate all the configurations that achieve a given frequency. The question is, given two configurations, which one is better (where I define "better" as lower jitter)? For example, is a higher VCO frequency better for output jitter? Are integer divisors better than fractional divisors? It is not clear to me from the docs.

The application is digital audio. When transmitting digital audio, it is desirable to minimize jitter on the signal. My thinking is that if I have two possible configurations to achieve a desired output frequency, then I might as well choose the configuration that provides the lower jitter.

In the clocking wizard, changing the jitter optimization between the "minimize output jitter" and "balanced" options causes the MMCM configuration to change, and sometimes even the output clock frequency.

So, as an example, suppose I need 98.5MHz. I found three configurations that can generate 98.5MHz from a 100MHz clock (in the Nexys A7):

Found 98.5     = 100.0 / 4 * 24.625 / 6.25 (615.625)   [309ps]
Found 98.5     = 100.0 / 5 * 49.25 / 10.0 (985.0)   [246ps]
Found 98.5     = 100.0 / 8 * 49.25 / 6.25 (615.625)   [514ps]

The VCO frequency is in parentheses, and the reported jitter is in square brackets. To get the jitter values, I entered each configuration into the clocking wizard manually.
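For anyone who wants to verify the arithmetic, each line is just f_in / D * M / O (a quick Python check; the jitter numbers come from the wizard, not from this math):

# Re-derive the VCO and output frequency for the three candidates above.
for d, m, o in [(4, 24.625, 6.25), (5, 49.25, 10.0), (8, 49.25, 6.25)]:
    f_vco = 100.0 / d * m
    print(f"D={d} M={m} O={o}: VCO {f_vco} MHz -> {f_vco / o} MHz")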

So, the different configurations have different jitters. Why does the second configuration have the lowest jitter? How is the jitter calculated?

The clocking wizard, when set to "balanced" jitter optimization, yields the second configuration. When set to "minimize output jitter", it wants to give me 98.52941MHz:

98.52941 = 100.0 / 1 * 8.375 / 8.5 (837.5)   [140ps]

The reported jitter is lower at 140ps. Is it lower because the initial divisor (i.e. 1) is lower than the 4, 5, or 8 above?

So maybe my question really boils down to how the clocking wizard calculates the jitter value. And I agree that's maybe a question for Xilinx.

In the meantime I was hoping someone might have a rule of thumb about which configuration would minimize the jitter.



Hi @hlittle

A bit of googling appears to show that the perception threshold of humans for jitter is on the order of a few hundred nanoseconds (ref: https://www.researchgate.net/publication/242508896_Detection_threshold_for_distortions_due_to_jitter_on_digital_audio), and the jitter reported by the clocking wizard is three orders of magnitude below that.

I know that jitter in the audiophile world is a big thing, but it could be that this is more of a persistent cult belief stemming from the early days of digital audio than something that's supported by proper testing with modern clocks and DACs. So while I think it is interesting to think about these things, and I have been involved in work where it actually does matter, it could be that the difference between 200 and 500 ps of jitter is irrelevant.

If I were to compare different options, I would not put a lot of value in the numbers output by the clocking wizard. The model they use may suck, and/or it may have unrealistic assumptions. We have no way of knowing, because it's not publicly documented, and the extent to which it was validated is also unknown. In cases like this I'd rather trust a measurement than a spec.

EDIT - one problem with the paper I linked to, but also with the jitter reported by the Clocking Wizard, is that they only focus on uncorrelated sample-to-sample jitter which, it turns out, is way below human perception. But jitter is timescale-dependent (see Allan variance), so it's effectively a spectrum of timescale vs. deviation; and the usefulness of reporting on / optimizing for only the shortest timescale (edge-to-edge) is probably somewhat misguided when thinking about audio perception. It may well be that human perception is sensitive to jitter on other timescales, but that would take a different experiment and a different clocking wizard to figure out.
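If you want to explore that yourself, the overlapping Allan deviation is straightforward to compute from phase (time-error) samples. A toy sketch with simulated white phase noise; real data would come from a timestamping counter like the 53230A:

import numpy as np

def allan_deviation(x, tau0, m):
    # Overlapping Allan deviation from phase-error samples x (in seconds),
    # taken every tau0 seconds, evaluated at averaging time tau = m * tau0.
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

# Toy data: 100k edges of a nominal 98.5 MHz clock, 200 ps rms white phase noise.
f0 = 98.5e6
x = np.random.normal(scale=200e-12, size=100_000)
for m in (1, 10, 100, 1000):
    print(f"tau = {m / f0:.3e} s: ADEV = {allan_deviation(x, 1.0 / f0, m):.3e}")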

Still, what remains is that the edge-to-edge jitter as reported by the clocking tool is a few orders of magnitude below what's perceptible, so it is probably safe to say that the numbers reported there indicate that the difference between the solutions can be ignored for the purpose of audio.

Edited by reddish


If you want to get down and dirty with controlling clocking, you can use the MMCM_DRP module. You can use this method from plain HDL, though the documentation suggests that you need a processor and an AXI bus.

I ran into this in XAPP888:

Filter Group

This group cannot be calculated and is based on lookup tables created from device characterization. There are effectively two tables, one for each bandwidth setting. The feedback divider setting (CLKFBOUT_MULT) acts as the index to the chosen table. There are three bandwidth settings allowable in the tools (High, Low, and Optimized), but in effect there are only two. High and Optimized use the same table, while the Low bandwidth setting uses a separate table. The filter group has an effect on the phase skew and the jitter filtering capability of the MMCM. The lookup table is located in the reference design within mmcm_drp_func.h.

The 7 Series devices are pretty complicated. Finding obscure details can be a lengthy journey without a good map. I suppose it boils down to how well one can speed-read through a lot of documents that are hard to find and notice the right reference.

I suppose that if you have access to the right test gear you can infer internal clock jitter performance from an output pin, but few of us do. Personally, the clocking wizard is one of the few pieces of FPGA device IP that I use consistently, regardless of vendor, though I rarely need to calculate clock jitter for the applications that I do. If you are using an advanced audio standard like AES3, I suppose that you do. Most people don't even bother to verify that the particular clock module on a board meets the default input clock jitter setting. I would suggest that the external clock source is where you need to start if you want to analyze jitter at some downstream point in a system. It can be interesting to change that default jitter value for clk_in and see what output clock options the wizard provides.

The reference above points out a reality that we don't usually consider. The end result of how your FPGA performs is not just a function of the device architecture and internal design, but also of how the tools manipulate that hardware to provide an outcome consistent with what the documentation says. For some device features this is more apparent than for others. I've noticed what appears to be more control in the tools over how the hardware performs in the UltraScale devices. This has two consequences for users. One is that it's harder for users to use certain features in a fine-grained way. Two, the vendor has more control over what the user can do with the device by following published documentation. The result is that the user is more dependent on tool IP that doesn't reveal source code. Maybe I'm the only person to see a trend here....

 

Edited by zygot

4 hours ago, reddish said:

If I were to compare different options, I would not put a lot of value in the numbers output by the clocking wizard. The model they use may suck, and/or it may have unrealistic assumptions.

Can you elaborate on this? Do you have empirical data that has driven this conclusion? I'm not trying to be snarky... sometimes device vendors supply bad documentation or not enough documentation. Sometimes devices in a particular flavor have bugs. I'm curious.


5 hours ago, reddish said:

A bit of googling appears to show that the perception threshold of humans for jitter is on the order of a few hundred nanoseconds

This statement reminded me of an experience that I had as an engineering student working as an intern at a power plant many years ago. One of the engineers was an audiophile. I remember him showing off his very expensive turntable and audio gear to me and elaborating on the nuanced details that he could pick out with the new gear as opposed to lesser gear. A few weeks later everyone at the plant was given a hearing exam as part of a company-wide directive. Turns out that the man was deaf outside of 0.5-12 kHz. Coal-fired power plants are places with a high level of constant background noise and occasional peaks well above that. He should have worn his hearing protection....

Human perception is a strange thing. Personally, I'd treat any attempts to characterize it empirically with some reservation.

Edited by zygot

3 hours ago, zygot said:

Can you elaborate on this? Do you have empirical data that has driven this conclusion? I'm not trying to be snarky... sometimes device vendors supply bad documentation or not enough documentation. Sometimes devices in a particular flavor have bugs. I'm curious.

Well, not about Xilinx or the clocking wizard in particular. It's general skepticism -- if people tell me a number without being open about how they obtained it, I tend to put a low amount of trust in it. Modelling is hard, some engineers are not highly competent or are tasked with developing stuff beyond their core competency, and so on. I currently work in a scientific environment with high-end equipment, and I see lies on datasheets all the time.


2 hours ago, reddish said:

I see lies on datasheets all the time.

Well, yeah! If you work as a design engineer** for any length of time and take datasheets, documentation, or tools at face value, you've either managed to lead a very cloistered existence or lost a few neural connections. Converter datasheets are the worst at trying to confuse potential customers into using a product.

I was just wondering if you knew something that I didn't about 7 Series clocking. The problem with using primitives is that the information about how to use them is very limited and usually insufficient. Also, primitives and unimacros in my experience are generally restrictive. I've recently been emulating Alice down the rabbit hole trying to use the DSP48Ex. My, oh my... it's not fun down here using any recent version of Vivado.

At least the MMCMs have a few options, depending on what you need to do with them. Whether or not the extra work of using the low-level MMCM_DRP will get you to where you want to go probably depends on a lot of factors. Different tool versions introduce different bugs, though no FPGA vendor is going to admit to it. I'd expect those things to affect primitives, macros, and synthesis more than programming hard block register settings... if you can figure out how to program them.

I think that if you have to budget worst-case clock jitter in a design, you probably have to use the MMCM_DRP. I didn't notice any mention of a PLL_DRP module. Start with the external clock source, try to find the optimal MMCM/PLL settings (or better yet, use an ideal clock source so that your logic doesn't need a derived clock), and verify the results. If you are going to do professional-quality work, you need to rent or own suitable test equipment for the final verification step.

** At some companies, design engineers just spit out 80% design concepts and never get to see what made the concept into a final product. That falls to the verification and test engineers, who are sometimes a lot smarter than the design engineers. Having to fix your own mistakes is vital to one's education.

Edited by zygot
