[Help] UART Protocol complete but basic explanation (And guides from newbie perspective)


latot

Question

Hi all, I'm trying to learn Verilog. I'm new to the FPGA world, I've got the very basics of Verilog down, and one very important thing I still need is how to make the FPGA communicate with a computer.

Who is this written for? Well, anyone interested in the difficulties newbies face, anyone who wants to write guides, and Digilent too.

I want people here to understand why it is hard for a newbie to learn UART, even before getting to the implementation. I want you to understand why I can't understand. And later, if I manage to show you that, I'd like you to help me and others learn UART in a better way and, with that, open up the FPGA world a bit more. I still don't understand UART very well.

Sorry if this text is too long, or even sounds too pretentious.

Personally, it has been quite hard to get how Verilog works. Verilog itself is not difficult, but it is hard to find documentation that is aware of its own "common sense": people often try to explain (with good intentions) but don't notice that a lot of the words, concepts, and logic states in their guide/project are never explained, so readers can't understand what they simply don't know yet. That said, I did find one Verilog doc that handles this very well.

I think it is the most complete one I have found; read it from the start to get the most out of it, especially for newbies like me:

https://verilogguide.readthedocs.io/en/latest/

OK, on to the topic. I have checked a lot of videos, long and short, GitHub code, and guides, and there is always a point where something is not explained and I end up confused. I would appreciate it if the answer is not just "read this one"; one reason I can't understand this very well is the same as with Verilog above: there is common sense and experience that a newbie has no way of knowing. You will understand this better later, I wrote it down below :)

I want to understand UART; I think that's essential for building more robust projects later, so the point is not just to test something and find that "it works".

I like Digilent because of the opportunity to learn FPGAs. I think it would be great to have each project in both languages; the repertoire of project types is big, but usually every project is in one language or the other: the blinking LED is in Verilog, the GPIO (and UART) in VHDL, which does not help with learning D:

The hardest part of UART is not the protocol itself; the FPGA controls things bit by bit. An FPGA does not need to come with a UART, but we can buy a Pmod and connect one (or the board can come with one integrated, like the one I have), and we know UART also has a protocol. Now some questions: who follows the protocol? How much of the protocol does the Pmod handle, and how much do we need to implement in the FPGA? Maybe the Pmod even uses the UART protocol toward the PC but some other protocol on the FPGA side! But why? Maybe whoever designed it found an easier way to get the data from the FPGA to the PC than the UART protocol.

Then there are the same questions for the UART on the PC side... Maybe this is silly to you, but for a newbie like me it is just a mist of confusion.

To understand UART it is not enough to know Verilog and the UART protocol; there is a mix of experience and common sense involved, things that are usually not written down.

I thought: well, what would a newbie explanation of the UART data flow look like, one that anyone can understand? I want to take advantage of my current newbie status to get a clear view of it.

My answer right now is: examples! What? There are lots of examples out there! Yes, they explain a lot of things, but remember: usually every example has sections that are not explained. Add to that what I wrote above, and it's not as if a newbie can fully understand just any example.

I think explaining the steps before the example is good, but on its own we will probably not be able to see the big picture.

Imagine an example like this: we want to send '11101010101' from the FPGA. What would the flow of information be? What would step 1 be? Step 2? Step 3? How is the data used and split? You may say there are a lot of guides that do that exercise, but... not completely. When you are forced to show it step by step with that exact data you can't skip anything; you need to show which bits are sent and why, and justify every bit of information!

FPGA  ---------- PC

'010010001010' -> (explain where this data comes from)

'1010100111010' -> (explain where this comes from)

The idea would be to explain how the data is sent to the PC, and where and how every rule is applied. A good example should be able to exercise all the rules of the protocol, with cases where we can see the differences between them. One proposal is to start with the minimum applicable rules (for example, skipping the parity bit), and it is also good to know when and how each rule works, how a PC knows whether there is a parity bit or not, and similarly for the other rules.

One advantage of something like this is that we don't need to know Verilog; we only need to know the UART workflow. This helps us understand how much of the UART protocol we need to implement in Verilog, and how much is handled by the UART Pmod. Then we can start thinking: how can I write this workflow in Verilog?

Then the same would be done for reading data from the PC.

FPGA  ---------- PC

<- '001010010101'  (explain how we can read the data)

Maybe the simplest part is the baud rate generator, just because every FPGA has its own clock frequency, and UART needs some specific baud rate to work and stay synchronized with the PC. The only weird thing is that apparently not just any baud rate works... odd, but building a generator within about ±5% error is pretty simple, and then you just connect the wire; still, it is not trivial for a newbie to put together.

I think I have written down everything I had in mind to explain, from my newbie perspective, why it is so hard to learn UART. I decided to write it like this to give feedback from someone who knows about algorithms but not FPGAs on why these things are so hard to understand; it would be great if this feedback were considered in Digilent's guides.

I would appreciate it if someone could help create examples that make this understandable for newbies :)

The bits I chose for the transmitter example are intentional: a number of bits that is neither 10 nor 18 (a full block of data), and that needs at least two packets :)

Thx!


11 answers to this question



Hi @latot

It is very understandable that you feel a bit lost in all this. There's a lot of information out there, in books and videos -- some of it is good, some of it not so much. And as a beginner you can't tell the difference yet.

From your post it is unfortunately not easy to precisely get where your understanding hits the rocks. So I will try to figure it out below.

Before that, you write that you think Verilog is "easy". That's surprising, because while the syntax may be pretty easy, the semantics of the language (i.e., what all the statements and constructs mean, precisely, and what happens when a Verilog program executes) are normally pretty hard and unintuitive for beginners.

Okay. The first thing we need to clear up is what you mean by UART, because if you mean something different with it than I do, it will be hard to talk about it.

From your post I get the feeling that you think "UART" is a protocol, i.e., an agreed-upon convention to send information between A and B over a bunch of wires. Well, that's normally not how the term is used. The name for the protocol is "the serial protocol", or perhaps the "RS-232 protocol". The term UART (Universal Asynchronous Receiver/Transmitter) is used for a device that knows how to handle the sending and receiving, i.e. a device that "speaks" the protocol. If A and B want to talk over a serial protocol, both of them need their own UART. You can actually buy dedicated "UART chips" to do that. Alternatively, microcontroller chips often provide one or more "UART peripherals", i.e., a small area of the chip is dedicated as a UART. And, of course, you can build your own small UART inside an FPGA.


The important thing to understand first is the serial protocol. This has been in use for well over 50 years now, and when it was originally devised (as "RS-232") it was very complex. An RS-232 connector had no less than 25 pins. It turned out that most of these pins were not used most of the time, so later on a simpler hardware standard was devised with just 9 pins between A and B. As it turns out, things can be made even simpler than that. For serial communication between A and B you really only need three wires:
 

          TX  --------> RX
(side A)  RX  <-------- TX   (side B)
          GND --------- GND

The ground is just a passive wire to make sure that side A and side B agree on what "0 volts" means. This is important, because the signals on the other two wires are generated and interpreted relative to this "GND" level. The wire needs to be present for things to work, but it doesn't otherwise play a role in the serial protocol.

Personally I think even the picture above is complicated, because it supports bi-directional flow of information (from A to B and from B to A). As it turns out, those two sides are essentially identical, and for the current discussion it helps to just focus on what's needed to get information from A to B:

(side A)  TX  --------> RX   (side B)
          GND --------- GND

This is really as simple as it gets: a serial link that allows A to send information to B. Remember that the GND line is just passive (but still important); the interesting stuff happens on the TX-->RX line. There, side A keeps a constant signal level if there's no information to send, but when it wants to send, its transmitting-side UART blurts out a little train of 0s and 1s onto the TX line; they travel to RX on the other side, where the receiving-side UART picks them up.

Before we turn to the little train of 0s and 1s that make up a packet of information, let's consider the hardware for a second. In the context of FPGAs, "side A" could be an FPGA and "side B" could be a PC. What do the cables actually look like?

Well, it used to be that all PCs came with a "serial port", which is one of those 9-pin connectors that I talked about previously. Nowadays it's no longer standard for a PC to have such a connector, but if you go shopping you can still buy one.

You can hook your FPGA up to a PC equipped with such a connector via one of the FPGA's PMOD connectors, by inserting this little beauty:

[Image: Digilent Pmod RS232 converter]

 

This is Digilent's PMOD-to-RS232-serial converter. If you hook this up to a PC equipped with a similar 9-pin port using a serial cable, you can communicate with it.

However, nowadays, most of the time you're using serial communications between an FPGA and a PC, you will use a USB cable. This makes matters quite a bit more complicated:

                            +-----------+                           +-------------+
          TX  --------> RX  | FTDI chip |      USB cable to PC      | USB chip    |
(FPGA)    RX  <-------- TX  | on FPGA   | <------------------------>| on PC       | ---- Virtual COM port
          GND --------- GND | board     |       (USB protocol)      | motherboard |
                            +-----------+                           +-------------+

Here's the dirty little secret about serial communications over USB: you only talk the serial protocol between the FPGA and the FTDI chip sitting on the FPGA board! The communication between the board and the PC is strictly USB. When the FPGA sends a byte, the FTDI chip's built-in UART accepts it, and at the next opportunity it gets it will send out a USB packet containing the byte to the PC. On the PC side, the software driver recognizes the packet as something that contains serial information, unpacks it, and offers it for reading on something that is called (on Windows) a "Virtual COM port". This is a software-only device that "acts like" a real, hardware COM (serial) port.

This links in to a question you also asked: how do both sides in serial communications know what parameters to use (like number of databits, parity or no parity, length of the stop bit, and baud rate)?

The answer is simply that the protocol doesn't help there -- it does not provide facilities for "link negotiation". This is in contrast to more modern protocols like USB and Ethernet, which do have facilities for that.

So lacking that feature, the burden is on the user and/or designer of the system to make sure both sides agree. Usually, the "peripheral" device (so not the PC) will just advertise the settings it supports ("I am a 9600 baud device with 8 databits, no parity, and 1 stop bit; if you want to talk to me you'd better configure those settings"), and on the PC side you have to comply.

This is also what you'll be doing with the FPGA: you will usually implement fixed serial parameters (a usual choice would be: 115200 baud, 8 databits, no parity, 1 stop bit), and on the PC side you will have to configure your serial terminal program to use those same settings.

Incidentally, it's interesting to consider what happens if you do that in your terminal program on the PC, in the scenario where you use a USB-cable. You tell the program you want "baudrate 115200; word format 8N1". The program forwards this to the "Virtual COM port" software driver. The driver forwards the request over the USB connection to the FPGA side, where it is received by the FTDI chip. Finally, the FTDI chip will configure its small UART to use those parameters for the small stretch of distance between itself and the FPGA. Given the complexity, it's pretty amazing that this all 'just works', most of the time.

Right, we now come to the real interesting part which is how to actually send the 0s and 1s to get some communication going.

But I'm a bit tired right now so I first want to hear back from you if you understand all that I wrote so far.

Another quite important question is this: are you comfortable enough with your FPGA and Vivado to do something much simpler than implementing a UART? For example, can you make a LED blink at a chosen rate (like 2 Hz or 5 Hz)?


 



 



Historically, because a UART is an asynchronous interface, you aren't sending bits serially relative to a "fixed" time period like a clock. That's why people refer to data speeds as baud rate instead of bps for UART interfaces. Bit time in a UART is measured in baud periods, not clock periods. This is why traditionally, RS-232 serial interfaces (long before USB came along) needed to account for a wide variation in what a baud period might mean for equipment on one end of the cable and on the other end of the cable. Different equipment used different clocks of varying frequency to create a "baud clock" that could be divided down to produce baud periods close to the standard rates, like 150, 300, 600, ... 9600. Early on, 9600 was pretty darn fast for a serial interface that could work over wires hundreds of feet long without sophisticated driver/receiver support. Speaking of clocks, even though you can buy cheap clock modules and oscillators with decent accuracy and stability, no two clock modules put out exactly the same frequency, regardless of what the manufacturer says it is nominally. This frequency changes with temperature and as the part ages. So, even today, there are issues with sending data from one piece of equipment to another. There are two general ways to handle this difference. One is called source synchronous, where a copy of the source clock is sent with the data to help the receiver recover the data correctly. The other is asynchronous and may or may not involve a reference clock being sent with the data. In this case the receiver has to oversample the data and control signals from the source to recover the data properly. If a reference clock is one of those signals, then it makes the work easier for the receiver. There are still applications where completely asynchronous communication is preferable to synchronous communication. I should note that even today, UART interfaces in micro-controllers and even high-end CPUs use a baud clock to produce a given standard baud period. The clocks that the baud clocks are derived from are just more accurate than they were in the early days of RS-232. Still, those baud clocks vary from device to device and are generally able to be modified. This means that even for standard baud rates the actual baud period is only within a certain tolerance.

BTW, designing and manufacturing clock modules and oscillators is a pretty fascinating topic to explore.

I also like the UART for FPGA applications. You don't have to use 7-bit or 8-bit data for asynchronous communications. Check out this project: https://forum.digilent.com/topic/20479-inter-board-data-transfer-project/

[edit] And now I notice that you are learning Verilog. Great! These days I think that Verilog is preferable to VHDL for those just starting out with HDL design, even though I'm a lot more comfortable using VHDL. Mostly, the two languages treat logic the same but use slightly different words for objects. Verilog has wires and registers, VHDL has std_logic_vector, etc. Verilog uses "C"-like syntax and VHDL is a derivative of Pascal/Ada-like languages. If you are comfortable with VHDL it's probably harder to learn Verilog because the syntax can be more terse. I'm mentioning all of this because the project mentioned above is written in VHDL. You can read the project documents to get the general idea of how the design works, but you can probably figure out what the VHDL is doing as well.

[edit2] Since RS-232 has been mentioned, and it should be, since modern USB UART devices still suffer from choices made back when RS-232 was a standard PC interface, I should mention one other thing. Early RS-232 wasn't a digital signal in terms of 1s and 0s. It was +V and -V. V started out at +/- 15V, then +/- 12V, then +/- 7V. When it was adopted by the PC, such voltages were supplied by a special interface chip that turned digital 1s and 0s into +V and -V signals. A problem with large voltage swings is that they are hard to do at higher baud rates, so the value of V started falling and the baud rate capabilities started climbing. This is all well and good if you are connecting two PCs together (most of the time) but not if you want to connect a PC to old equipment. So, while the protocol is the same, the electrical connection is a different animal compared to the USB UART.

[edit3] UART communication has a problem that's always existed. Sometimes the equipment receiving data isn't able to receive data for some reason. How can it tell the sender to stop talking? Well, there are two methods to do this. One uses hardware controls; CTS and RTS are used for this purpose. This means that you now need 4 wires instead of two for full-duplex communications. When modems became one of those pieces of equipment, even more control hardware wires were added. There's an alternative. Even though RS-232 generally used 7, 8 or 9-bit data, it only supported ASCII symbols, which are 7-bit words. This allows for transmitting control symbols like XON/XOFF as an alternative to extra wires. This is still part of modern PC serial UART communications even though you won't find a PC in the store with a 9-pin or 25-pin serial connector on it. So when you use a COM or TTY device on your computer with a terminal application there are settings for not only baud rate and the number of start/stop/parity bits, but also flow control. This can be hardware or XON/XOFF flow control and is still supported by USB UART devices.


16 hours ago, latot said:

there is common sense and experience that a newbie has no way of knowing.

I whole-heartedly agree with you that learning Verilog or VHDL without guidance can be very difficult. There are a few good texts, but these are generally expensive.

Before programmable logic companies started using Verilog, SystemVerilog or VHDL as a way to describe logic designs you had to use schematic capture (Xilinx) or a proprietary language like AHDL (Altera) to describe your design. Verilog and VHDL were around as languages, mostly for simulation of, well, anything. So, Verilog and VHDL had some unique concepts of time that you won't find in regular computer programming languages meant to be compiled into machine code and processed sequentially. So, for people who have some familiarity with a programming language like Pascal or C, there are concepts that can be hard to grasp. Only parts of Verilog and VHDL can be used for constructing logic by the synthesis part of a tool like Vivado. Also, concepts like when statements execute in sequential or concurrent order may not be obvious. I encourage beginners wanting to use an FPGA to get some familiarity with basic logic design. Not just understanding boolean logic, but understanding clocked versus combinatorial logic, and how combinatorial logic can become multi-level. Also understanding basic clocked logic concepts like setup and hold time, delay, etc. After all, what an HDL does, once the synthesis tool gets finished, is implement digital logic. A really good way to start is to learn how to use the logic simulator provided by your FPGA vendor (for Xilinx it's in Vivado for free) and try out some simple designs, even simpler than turning on an LED on a board, and see how the simulator understands your code. You can also synthesize your designs and run through implementation, and Vivado will show you a schematic of what it thinks you intended, if that helps.

I think that the Digilent Forums is a pretty good place to support individual experimentation and ask for help when confronted by a particular problem that you don't understand. Back in the day, before you could buy a cheap PC, people had similar problems learning how to put together a home-brew computer and program it. This resulted in small groups of people getting together as a community educational effort to learn together. Everyone is unique in how they learn new things.



Hi! I'm really moved by all the great help and answers here :)

I have already read all of them, but I think I'll need to re-read some parts. Great explanations @reddish and @zygot! Now I feel I know better what I'm dealing with :D

Given the history of serial and UART, it seems you really need to know that history in order to know how to implement it.

I first started with blinking an LED, then playing with the frequency and buttons, very basic stuff. I want to move to more complex projects, which is why I want to learn UART; well, the truth is I searched Google for how to connect an FPGA to a PC and the answer was UART. Maybe there is an easier or better way, but I think getting the experience will be great. Having a way to connect the FPGA to the PC seems like the minimum needed to test more complex projects. There is Ethernet too; I'll probably check it out, but later.

I'm still not used to using Vivado's simulator; I'll work on it, thanks for the advice.

I'll check out the project later, zygot, thanks for the reference! I want to understand the basics of UART first.

I think Verilog is not hard; let me explain. We can know how to play chess, or at least learn the rules, but knowing the rules does not make us good at chess. It is similar here: Verilog is the pieces, the rules, and the board, while our opponent would be... our goal and the nature of the hardware (different from software). As we learn to play better with the FPGA, we learn to use the pieces and the board better and take advantage of them within the rules. So as a first step, learning Verilog is not that hard, as long as we make the mental switch to "you are working with hardware and connections, not software" and have a place that explains clearly how it works. To understand Verilog well, the tutorial I linked above helps a lot :)

In my case, I probably can't call myself a newbie in programming, designing, modeling, and that type of thing; that is an advantage I have. From there, if anyone wanted to start learning FPGAs with Verilog without knowing algorithms, I think there would be even harder parts than the ones I wrote about. Maybe it would be easier to learn the basics in Python first, to learn algorithms and how to think about and design them, than to start directly with FPGA/Verilog; Python is a lot more flexible and less technical, and it stays easy even if you learn a lot of libraries.

I didn't mention it before: I'm using an Arty A7-100T.

Thx for the help!



Hi @latot

For most purposes, serial communication is indeed the way to go when connecting an FPGA to a PC. It is the easiest interface to implement by far. Only if you need higher bandwidth should you consider Ethernet or "real" USB (by which I mean direct USB communications rather than serial-over-USB, where all the complexities of USB are taken care of by a dedicated chip on the FPGA board).

For completeness I will continue where I left off with a description of the serial signal that goes from TX to RX. But before I do that, some words on things that @zygot mentioned:

  • The signal levels that travel over a 'real' RS-232 cable are defined by the standard, and pretty high, so they can travel over long distances. However, you will not normally deal with those RS-232 levels. If you use a PMOD-to-RS232-serial converter, you will just talk to it in terms of normal I/O voltages that your FPGA can generate and accept (eg 0V for 0 and 3.3V for 1, which is called "CMOS-level signaling"); the converter will take care of translating and scaling these voltages to a level that is compliant with the standard. In case you use serial-over-USB, the physical voltage levels that the standard prescribes will never be used; between the FPGA and the FTDI chip you just use CMOS-level signaling, and over the USB cable, voltages are dictated by what the USB standard prescribes.
     
  • For simplicity I omitted discussion of handshaking. This is effectively a mechanism for the receiver to tell the sender "slow down! I can't keep up with the data you're sending!". This pushback mechanism can be either implemented in hardware (having an extra line in each direction) or in software (the receiver sends special XON and XOFF characters to the sender to request halting or resuming data sends). However, when connecting an FPGA to a modern PC, both sides have ample speed and buffering to handle full-speed serial communications even at high baud rates, so handshaking is normally not needed. Many FPGA boards don't even connect the handshaking lines that the FTDI chip offers to the FPGA; when they do, just tie the outgoing one to the value that indicates "I'm happy to receive data!" and ignore the incoming one. That will work in almost all circumstances.

Right, now for the signalling. Let's start with the uni-directional transfer of a character from the FPGA to the PC. The relevant connections look like this:

(FPGA)  TX  --------> RX   (PC)
        GND --------- GND

Even if you are using serial-over-USB, this is the appropriate model; all the USB and "virtual COM port" business that I described in my previous post is essentially unimportant. Strictly speaking, in that case the picture above is not entirely right; what you're actually doing is this:

(FPGA)  TX  --------> RX   (FTDI chip on FPGA board that talks to the PC)
        GND --------- GND

No matter the precise connection (serial-over-USB or "real" serial), in this FPGA-to-PC communication scenario, the FPGA is the transmitter, and is therefore in the driver's seat. It can initiate communication to the PC (or FTDI chip). It does this by setting its digital "TX" output to values 0 and 1 at appropriate times.

One important case is when the FPGA has nothing to send. This is called the IDLE state; and in that case, the FPGA should just put the constant value '1' on its TX pin.

The other case is when the FPGA actually wants to send something. What happens then depends on the serial parameters of the link. To summarize them once again:

- The baudrate defines a baud period, which will be the length of a single bit. For example, at 9600 baud, a single baud period is 1/9600 seconds, or about 104 microseconds.
- The number of data bits is the number of bits in a single word to transfer. Nowadays, that's almost always eight -- we're sending data one byte at a time.
- The parity convention. It is possible to add a single bit to each word, that allows the receiver to check with some confidence if the bits were received correctly. Choices are Even parity (E), Odd parity (O), or no parity (N). The last option is by far the most common nowadays.
- The number of stop bits. This is the number of baud periods that the sender promises to wait after sending the last bit of the data, before starting to send a new word. Nowadays, this is mostly 1.

So the full specification of a single serial connection could be: a 9600 baud link, with 8N1 data words (meaning 8 data bits, no parity, 1 stop bit).

In most circumstances today, you would use the "8N1" convention for the data word. Baud rates on older hardware could be 1200, 2400, or 9600, for example; nowadays, a baud rate that is quite standard and well supported is 115200. But you can go beyond that: I have successfully pushed data out of an FPGA to the PC over a USB-to-serial link at 12000000 baud (12 MBaud)! This only works if your FTDI chip supports it and your OS can handle it (this worked fine on Linux, but it was a disaster on Windows -- anything above 115200 is risky there, as you may start to see data loss).
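
As a small illustration of what picking a baud rate ends up meaning inside the FPGA (this is my own sketch, not something from the posts or from Digilent's material), here is a minimal Verilog baud-tick generator, assuming a 100 MHz board clock and 115200 baud:

    // Hypothetical sketch: emit a one-clock-wide 'tick' once per baud period.
    // CLK_HZ and BAUD are assumptions (100 MHz board clock, 115200 baud).
    module baud_tick_gen #(
        parameter integer CLK_HZ  = 100_000_000,
        parameter integer BAUD    = 115200,
        parameter integer DIVISOR = CLK_HZ / BAUD   // ~868 for these values
    ) (
        input  wire clk,
        output reg  tick = 1'b0
    );
        reg [$clog2(DIVISOR)-1:0] count = 0;

        always @(posedge clk) begin
            if (count == DIVISOR - 1) begin
                count <= 0;
                tick  <= 1'b1;      // one pulse per baud period
            end else begin
                count <= count + 1;
                tick  <= 1'b0;
            end
        end
    endmodule

The rounding of the integer divisor (868 instead of the ideal 868.06) gives a baud rate error well under 0.01%, far inside the tolerance a UART link needs.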

Okay, so with the parameters established, let's see what happens precisely when the FPGA wants to send two ASCII 'A' characters ("AA"). A single A character is represented in ASCII by the value 65, which is 01000001 in binary. We will send those bits from least-significant to most-significant.

This is what it looks like on an 8N1 serial link, with the TX value shown from left to right as time increases:

11111111110100000101010000010111111111111111
----------==========----------==============
  IDLE      FIRST A  SECOND A   BACK TO IDLE

For a single A character as 8N1:

- First zero is a start bit.
- 8 data bits, least significant bit first
- Return to IDLE level for (at least) 1 period (1 stop bit)

Assuming 9600 baud, each of the 0s and 1s is about 104 microseconds long. We start with a possibly long stretch of '1' bits, indicating that we have nothing to send. Then we want to send the first A. We start by sending a single-period START bit, which tells the receiving side: "pay attention! data incoming!". After this, the 8 data bits follow. And after the last data bit, we go back to the IDLE level (1).

Because we're on an 8N1 link (with 1 stop bit), we now need to wait at least a single baud period before starting to send another character. We do that, and then proceed immediately with the second A: first a start bit, then the data bits, followed by at least a single IDLE baud period to honor the stop-bit setting. But in this case we have nothing else to send, so we just stay in idle.

And that's it. That's all there is to serial communications.


If you are going to practice this on your own FPGA, I propose two experiments before starting to work on your own UART:

  • First, simply tie the TX and RX pins together on your FPGA (meaning: drive the TX pin directly from what the FPGA sees on its RX pin). This creates what is called a loopback: anything that the PC sends, the FPGA will immediately send back to the PC. If you run such a design, and you open a terminal program on the PC, you should see characters you type being echoed back to you. The fun thing is that this works independently of the serial port settings on the PC; an exercise is to understand why that is the case.
  • Second, pick a usual baud rate (such as 9600 baud) and generate a signal on the FPGA's TX pin that toggles from 0 to 1 and back every baud period (so, for 9600 baud, that's once every 104 microseconds). If you set up your terminal program on the PC to 9600 baud, 8N1, you should see an infinite string of incoming 'U' characters at full speed. Given the explanations above, can you explain why?

These warm-up exercises are not only useful to sharpen your understanding but also to verify that all hardware connections are working and that your PC terminal program is set up correctly.
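
For what it's worth, here is roughly what those two warm-up designs could look like in Verilog. This is a sketch under my own assumptions (a 100 MHz board clock, 9600 baud, and pin names that you would map in your constraints file), not a reference design:

    // Experiment 1: loopback -- drive TX directly from RX.
    module uart_loopback (
        input  wire uart_rx,   // from the FTDI chip / PC
        output wire uart_tx    // back to the FTDI chip / PC
    );
        assign uart_tx = uart_rx;
    endmodule

    // Experiment 2: toggle TX every baud period; at 9600 8N1 the PC sees
    // the endless 0101... pattern as a stream of 'U' (0x55) characters.
    module uart_u_blaster (
        input  wire clk,              // 100 MHz board clock (assumption)
        output reg  uart_tx = 1'b1
    );
        localparam integer CLKS_PER_BAUD = 100_000_000 / 9600;  // ~10417
        reg [13:0] count = 0;

        always @(posedge clk) begin
            if (count == CLKS_PER_BAUD - 1) begin
                count   <= 0;
                uart_tx <= ~uart_tx;  // hold each level for one full baud period
            end else begin
                count <= count + 1;
            end
        end
    endmodule

These are two separate top-level designs; you would build and program one at a time.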

Once completed, you're ready to start working on your own FPGA-based UART in earnest. It is best to think of the "transmitting" and "receiving" sides as two completely independent devices; and I recommend you start by implementing a "transmitter" in the FPGA that is capable of sending data to the PC first, without worrying about a receiver.
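
A transmitter really is just the bit train shown earlier made mechanical: load a frame, then push out one bit per baud period. As a rough illustration (again my own minimal sketch, not a reference design), assuming a 'baud_tick' input that pulses once per baud period, for instance from a divider like the one sketched above:

    // Hypothetical 8N1 transmitter sketch: start bit, 8 data bits LSB first, stop bit.
    module uart_tx (
        input  wire       clk,
        input  wire       baud_tick,   // one-cycle pulse per baud period
        input  wire       send,        // pulse high (with data valid) to start a frame
        input  wire [7:0] data,
        output reg        tx   = 1'b1, // line idles high
        output reg        busy = 1'b0
    );
        reg [9:0] shifter   = 10'b11_1111_1111; // {stop, data[7:0], start}
        reg [3:0] bits_left = 4'd0;

        always @(posedge clk) begin
            if (send && !busy) begin
                shifter   <= {1'b1, data, 1'b0};   // stop bit, data byte, start bit
                bits_left <= 4'd10;
                busy      <= 1'b1;
            end else if (busy && baud_tick) begin
                tx        <= shifter[0];           // start bit first, then data LSB first
                shifter   <= {1'b1, shifter[9:1]}; // shift right, backfill with idle '1'
                bits_left <= bits_left - 4'd1;
                if (bits_left == 4'd1)
                    busy <= 1'b0;                  // stop bit is now on the line
            end
        end
    endmodule

Sending "AA" is then just two 'send' pulses: the second one can be issued as soon as 'busy' drops, and the next start bit goes out one baud period later, which is exactly the single stop bit the 8N1 setting promises.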

The receiver is a bit trickier. One thing you need to take care of is that, after detecting the start bit, you need to wait for 1.5 baud periods before sampling the first data bit, and sample data bits with 1 baud period intervals after that. The initial wait of 1.5 baud periods ensures that you're sampling the data bit in the middle of its period, giving you maximum robustness against clock mismatches and edge transition effects.

I hope this helps!
 

 

 

 

 



Yay! Things are taking shape. Because of some of the explanations I'd seen before, I hadn't grasped a very basic thing: UART sends data bit by bit, one bit every baud period.

All the start bits must be 0.

All the stop bits must be 1.

'U' has the binary code '01010101', so sending lots of alternating 0s and 1s will show us 'U's!

On 12/18/2022 at 6:49 AM, reddish said:

The receiver is a bit trickier. One thing you need to take care of is that, after detecting the start bit, you need to wait for 1.5 baud periods before sampling the first data bit, and sample data bits with 1 baud period intervals after that. The initial wait of 1.5 baud periods ensures that you're sampling the data bit in the middle of its period, giving you maximum robustness against clock mismatches and edge transition effects.

:O that is a nice trick! That also means the UART holds the signal for the whole baud period, independent of the CPU/FPGA clock; in that case, it would be good to check that this behavior is kept from the FPGA to the FTDI chip.

Modern CPUs do not have a stable clock; they vary, but I don't know whether the FPGA has a stable frequency or not. At least from the CPU perspective, the only way I can think of for it to work with UART is to have an extra/independent clock to sync the port on the PC.

If the FPGA does not have a stable clock..., that could cause several issues, thinking ahead to when I push it to work very hard :) (cooling would be a consideration too)

Maybe I would need an external clock too? Maybe it is because of the variations between the clocks (FPGA/PC) that we need the ±5%? I don't think I quite get how the UART keeps the sync over longer times; maybe over time it gets easier to lose a bit and delay all the data we are reading, breaking the start/stop bits for example. I don't know if UART has a way to tell us, 'hey! this packet is broken, send it again!'

Thx!



No, the timing isn't super critical. Your RX module should restart with the leading edge of the start bit so that any accumulated error only potentially affects the bits within the byte, not byte to byte. If you use a high enough frequency input clock you can simply count clock cycles to time everything; the error will not accumulate. Also, you want to use a metahardening block, in the same clock domain, on the RX pin input. On the TX side you will probably want a large enough FIFO so as to avoid having to handle the back pressure upstream.


2 hours ago, latot said:

At least from the CPU perspective, the only way I can think of for it to work with UART is to have an extra/independent clock to sync the port on the PC.

Well, perhaps a better way would be to create logic that can tolerate a baud period that's a bit different from what the receiver is expecting. At low standard frequencies, this isn't too hard. If you want to connect to the serial port of a uC, like the ZYNQ PS UART, or a Raspberry Pi BCMxxx at 10 Mbaud or higher, things become problematic if you don't take this approach. Usually, in the uController documentation there's detailed information about standard baud rate errors in their devices for a particular baud clock frequency.

2 hours ago, latot said:

I don't know if UART has a way to tell us, 'hey! this packet is broken, send it again!'

Unlike, say, Ethernet, which has a whole physical-layer infrastructure, there's nothing in the basic RS-232 protocol that's designed to handle things like bad words, fragmentation, or packet issues. There are no packets, just 7, 8 or 9-bit words. The UART does have a parity bit, if you care to use it. But since you can organize blocks of UART words into any kind of higher-level structure that you want, you can build in any kind of data correction scheme that is appropriate. UARTs are full-duplex, so they can transmit in both directions simultaneously. For completeness, in the past some uControllers supported 9-bit data words to implement an addressing scheme kind of like Ethernet, though not as sophisticated. If you are designing a UART it makes sense to tailor your design to fit a specific application. Do you want to connect to a PC? Do you want to connect to a remote uController in a harsh environment? You can simplify your design or make it as complicated as necessary depending on how you want to use it. That's the beauty of programmable logic. There are almost no limits to what you can do. In the project link provided above the goal was simply to implement a board-to-board interface using PMODs that may or may not have a clock-capable pin assigned to one of the header pins. In that application the word width can be 8, 22, or 32 bits, or just about anything you need.

1 hour ago, Richm said:

Your RX module should restart with the leading edge of the start bit so that any accumulated error only potentially affects the bits within the byte, not byte to byte.

Or, perhaps you might want to have logic in your UART that finds the center of a baud period and samples the state of the RxD signal at that time. There are a lot of UART implementations, based on different theories, that work. It depends on how much effort and resources your UART is worth. This generally depends on the application. Sometimes, it depends on interest.

[edit] In general, there are 2 reasons for doing anything. One is that your boss or professor has given you a time limited assignment to complete. The other is that you are curious with a lot of questions. This has a lot to do with how you approach any topic. Work can be fun and interesting if there are no time limits and you are more concerned about questions than thinking that you have answers. Personally, I've found that getting answers to questions just creates more questions...



Hi @latot

The timing can be a bit confusing....

At the transmitter side, note that the only restriction on the IDLE time between successive words is that it needs to be at least as long as the link's "number of stop-bits", times the baud period. There is NO requirement that the next word's start-bit is an integer number of baud periods after the end of the previous word; it can start at any time (as long as you honor the stop-bits requirement).

On the RX-side, it is useful to think of it like this. You sample the RX bit at a high frequency (eg 50 MHz), and as soon as you see that it goes down (indicating the beginning of a START bit), you start a stop-watch, and you sample the data bits at 1.5, 2.5, 3.5, ... times the baud period after the initial falling edge of the start bit. The nice thing about this is that if your clock is a bit slower or faster than the clock of the machine at the other end of the line, it doesn't matter too much -- you will drift a bit away from the "middle" of the data bits that you sample, but it's only 8 data bits, so you should still be safe.

This is, at least, the simplest scheme. Some more advanced UARTs sample data bits at multiple points in time, to increase resilience against noise. But under normal circumstances (eg when talking from your FPGA to your FTDI chip a few centimeters away, on the same PCB), that's overkill.
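
Here is a rough Verilog sketch of that simple "stop-watch" scheme, counting clock cycles from the falling edge of the start bit and sampling at 1.5, 2.5, ... baud periods. It assumes a 100 MHz clock, 115200 baud, and an 'rx' input that has already been registered into the clock domain (see the synchronizer note below); all names are illustrative, not from any particular reference design:

    // Hypothetical 8N1 receiver sketch -- not production-hardened.
    module uart_rx #(
        parameter integer CLKS_PER_BIT = 100_000_000 / 115200   // ~868 (assumption)
    ) (
        input  wire       clk,
        input  wire       rx,             // already synchronized to clk
        output reg  [7:0] data  = 8'd0,
        output reg        valid = 1'b0    // one-cycle pulse when a byte is complete
    );
        localparam IDLE = 1'b0, RECEIVE = 1'b1;
        reg        state   = IDLE;
        reg [15:0] timer   = 0;           // the "stop-watch", in clock cycles
        reg [3:0]  bit_idx = 0;           // 0..7 = data bits, 8 = stop bit

        always @(posedge clk) begin
            valid <= 1'b0;
            case (state)
                IDLE:
                    if (rx == 1'b0) begin        // line went low: start bit
                        state   <= RECEIVE;
                        timer   <= 0;
                        bit_idx <= 0;
                    end
                RECEIVE: begin
                    timer <= timer + 1;
                    // sample point n falls (n + 1.5) baud periods after the edge
                    if (timer == (CLKS_PER_BIT * (2 * bit_idx + 3)) / 2) begin
                        if (bit_idx < 8) begin
                            data <= {rx, data[7:1]};   // LSB arrives first
                            if (bit_idx == 7)
                                valid <= 1'b1;         // full byte visible next cycle
                            bit_idx <= bit_idx + 1;
                        end else begin
                            state <= IDLE;             // middle of the stop bit
                        end
                    end
                end
            endcase
        end
    endmodule

The 16-bit timer is wide enough for the roughly ten baud periods of one frame at this divisor; a slower baud rate would need a wider counter.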

About clock accuracy: most devices will work with a regular clock crystal, which will have a deviation from perfect in the order of 50 ppm (0.005 %). So that shouldn't be a big worry in most applications.

@Richm mentioned a very important idea, but perhaps you will not understand what he means by "metahardening". Here's the thing: the incoming RX bit that you sample has no relation to the clock that your own UART uses, so you have a small but finite chance that the RX signal is transitioning just around the time that your clock rises (and your own design samples its inputs). This is a source of deeply subtle issues, and sometimes manifests itself as behavior that you think is "impossible", because different parts of your design, at the very same clock cycle, have a different idea of the value of some external pin.

The way to guard against this is to ALWAYS /ALWAYS/ *ALWAYS* put external signals into a register (flip-flop) that /is/ synchronous to your clock domain, and then, inside your design, only ever use that registered version when you need to look at the value.

Even this is not enough in theory, because an effect called meta-stability can still happen if a transition of an input signal reaches a flip-flop at just the wrong time, and this effect can even propagate through multiple flip-flops if you're unlucky enough! Some designers will put all external signals through a small cascade of flip-flops to minimize the probability that they run into this. For your applications, using a single flip-flop stage should be sufficient. But using NO flip-flops, i.e. just using unregistered, asynchronous signals in your clocked design, is an invitation to small disasters.
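
To make that concrete, a two-stage synchronizer in Verilog could look like the sketch below (my own naming; the initial value is chosen as '1' because a serial line idles high, so the receiver does not see a spurious start bit right after configuration):

    // Hypothetical two-flip-flop synchronizer for the incoming RX pin.
    module rx_synchronizer (
        input  wire clk,       // the UART's own clock domain
        input  wire rx_pin,    // raw, asynchronous input from the pad
        output wire rx_sync    // safe to use inside the clk domain
    );
        reg [1:0] sync_ff = 2'b11;   // start at the idle level ('1')

        always @(posedge clk)
            sync_ff <= {sync_ff[0], rx_pin};   // two back-to-back registers

        assign rx_sync = sync_ff[1];
    endmodule

The rest of the design then only ever looks at rx_sync, never at rx_pin directly.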


Uh-oh, someone used the 'M' word.

Now we aren't talking about Verilog or VHDL or even UARTs. This is basic logic design. Meta-stability can't be avoided. It can't be corrected. But you can design logic that recovers from a meta-stability event quicker than normal clocked logic techniques might. There is no HDL keyword for dealing with meta-stability. There are no synthesis parameters for dealing with it. This is the responsibility of the designer. It's an advanced topic that probably doesn't belong in this discussion, and I guarantee that any comment about meta-stability will generate heated criticism from another reader.

When logic is clocked it always produces an output state with memory, that is, a register. This is the cause of meta-stability. If the input signal being sampled violates the setup or hold specifications of the logic architecture (there are a lot of different kinds of logic based on transistors) then there is the possibility of the input changing from a valid 0 state to a valid 1 state during the time when a clock edge is sampling the signal. The result can be an oscillation in the output state that can last for hundreds or thousands of clock cycles. Clocking a signal that can change state at any point in time relative to the clock edge receiving the signal can cause meta-stability. A simple, but usually good-enough, way to shorten the time that a clocked output is unknown is to use multiple registers. If you think about it, the leading edges of two clocks of different frequency will periodically pass one another as the constantly changing phase relationship of the two clocks changes with time. What this means is that the accuracy of either clock source has nothing to do with meta-stability. Even systems where all of the registers use the same clock can exhibit meta-stability if the delay of the combinatorial logic being clocked approaches or exceeds the clock period. This situation is more prevalent than most logic designers suspect.

Meta-stability events are insidious because, for some applications they might show up once a day, or once a year, and catching errors caused by meta-stability, unless your logic design detects such occurrences, is almost impossible. Signals that change state close to the clock frequency will encounter timing violations in a shorter time period than signals that change state once a day.

But this does bring up a possible point of confusion. What is synchronous and what is asynchronous? The two concepts can appear to overlap. But I can safely say that no one implements a UART in an FPGA only using non-clocked combinatorial logic. Personally, I think that this discussion is best left for a different thread.

It might be worth noting that depending on how you write your HDL, the synthesis tool provided by your FPGA vendor may infer a latch, or a state with memory, when you don't intend such a thing. This is also an advanced topic, but might be more relevant to any design implementation discussion. Both topics would seem, to me, to be suitable for a different thread. These concepts do support my contention that HDL constructs that can be synthesized should be considered more of a logic design effort than a computer language design effort. HDLs simply are a way to describe a logic design concept. All of this is pretty far afield from the gist of the original post. The ensuing discussion does illustrate how complicated even simple questions can become when people are trying to be correct, complete enough, and still describe things in words that don't require specialized training to understand.

My last word on this. If you are designing expensive equipment then the specification sheet for that product will include a Mean Time Between Failures (MTBF) number based on both simulation and empirical data from exhaustive testing. For most people visiting this website this isn't much of a concern. Most of us, for educational projects, simply aren't that concerned with such topics. Having said that, even for hobby projects there are good design practices that should be followed to avoid frustration with designs running on actual hardware. It is certainly a discussion for the future if you are just learning an HDL for the purpose of logic design and trying to figure out what basic UART serial communications involves.

On 12/17/2022 at 10:49 PM, latot said:

In my case, I probably can't call myself a newbie in programming, designing, modeling, and that type of thing; that is an advantage I have.

As I mentioned earlier, both Verilog and VHDL were originally developed for simulation and modeling purposes, not logic synthesis. This means that they embody concepts of time that are different from a computer language like C or Fortran. But it also means that they don't natively embody digital logic design concepts either. In theory you can simulate an airplane in VHDL, including the software, electronics, hydraulics, fluid dynamics, etc. I know that people have tried.

While standard libraries have been added to both Verilog and VHDL to support logic synthesis, it's important to note that they don't support all of the important concepts directly or in the same way. For instance, in VHDL, if you consider the std_logic type as a 1-bit signal, you can concatenate std_logic elements into a group of type std_logic_vector. A std_logic_vector doesn't represent either a signed or unsigned value, but there are libraries to handle those types as well. As you can see, things can get somewhat complicated as you try to represent, in an HDL, things taken for granted in computer languages. An important difference between Verilog and VHDL concerns types having memory. VHDL does not differentiate between signals of type std_logic_vector having memory (a latch or register) and those that have no memory. In Verilog, this is not the case. Signals are assigned type wire or reg. A wire has no memory, a reg does. As I mentioned earlier, when you use a clock to capture the state of a signal, you create an output with memory. This is the same in VHDL as it is in Verilog. Usually, VHDL is associated with strongly typed computer languages like Ada. But in this case it's Verilog that's more fussy about types capable of having memory. Where this really gets messy is that combinatorial logic, which is not clocked, can have memory. If you have a process in VHDL that is all combinational logic and creates a state with memory, the synthesis tool can identify this and infer a latch. Sometimes this changes your logic into something that you didn't intend. A problem with combinatorial logic is that the output is dependent on the delays of all of the various signals in your design as well as the boolean elements in it, and might require multiple levels of logic to implement. What this means is that the output, for some period of time after the inputs have stopped changing, does not agree with the logic statement(s) describing it. This is referred to as a glitch, or runt pulse, and represents an error that is simply the result of delays in signals going into and out of the boolean elements of the design until everything reaches a steady state. Clocked logic gets rid of the glitches as long as the total delay of the combinatorial logic is less than the clock period (assuming that the same edge of the clock is used throughout the design). In VHDL an inferred latch is a warning. I don't do enough Verilog design to state this for certain, but I'd expect the tools to throw an error instead of a warning about inferred latches for combinatorial logic if the type receiving the result is a wire instead of a reg.
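
As a small illustration of the inferred-latch point (my own example, not from the post above): in Verilog, a combinational always block that does not assign its output on every path describes storage, and synthesis will infer a latch, usually with a warning; covering every path removes the memory:

    // 'q' keeps its previous value when en == 0, so this describes a latch.
    module latchy (
        input  wire en,
        input  wire d,
        output reg  q
    );
        always @* begin
            if (en)
                q = d;      // no 'else': q must remember its old value
        end
    endmodule

    // A default assignment covers every path: pure combinational logic, no latch.
    module latch_free (
        input  wire en,
        input  wire d,
        output reg  q
    );
        always @* begin
            q = 1'b0;       // default value
            if (en)
                q = d;
        end
    endmodule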

So, when you say that Verilog makes sense to you at this point, is this something that you understood from your tutorials? Some treatments of using an HDL for logic design are pretty good and others, not so much.

It's not just syntax that's different between Verilog and VHDL. Some basic ideas are treated differently as well. This makes it hard to learn more than one HDL at the same time. But really, what's important are the subtle concepts that might not be obvious from reading text on how to write "code" in an HDL.   

