Finally we are moving away from wires and voltages and hard-core electrical engineering applications, although we still need to know quite a bit regarding computer chip architectures at this level.
While the primary focus of this section is the 8250 UART, there are really three computer chips that we will be working with here: the 8250 UART itself, the 8259 Programmable Interrupt Controller (PIC), and the 8086 CPU. Keep in mind that these are chip families, not simply the chip part number itself. Computer designs have evolved quite a bit over the years, and often all three chips are put onto the same piece of silicon, both because they are tied together so much and to reduce the overall cost of the equipment.
So when I say "8086", I also mean the successor chips, including the 80286, 80386, Pentium, and compatible chips made by manufacturers other than Intel. There are some subtle differences and things you need to worry about for serial data communication between the different chips, but in many cases you could in theory write software for the original IBM PC doing serial communication and it should run just fine on a modern computer you just bought that is running the latest version of Linux or Windows XP.
Modern operating systems handle most of the details that we will be covering here through low-level drivers, so this should be more of a quick understanding for how this works rather than something you might implement yourself, unless you are writing your own operating system.
For people who are designing small embedded computer devices, it does become quite a bit more important to understand the UART at this level. Just like the 8086, the 8250 UART has evolved quite a bit as well, e.g. into the 16550.
The differences really aren't as significant as the changes to CPU architecture, and the primary reason for updating the UART chip was to make it work with the considerably faster CPUs that are around right now. The original 8250 simply can't keep up with a Pentium chip.
Remember as well that this is trying to build a foundation for serial programming on the software side. While this can be useful for hardware design as well, quite a bit will be missing from the descriptions here to implement a full system. We should go back even further than the Intel 8086, to the earliest Intel CPUs and their successors. The newer CPUs have enhanced instructions for dealing with more data more efficiently, but the original instructions are still there.
When the 8086 was released, Intel had to devise a method for the CPU to communicate with external devices.
In the 8086, this meant that there were a total of sixteen (16) pins dedicated to addressing these devices. The exact details varied based on chip design and other factors too detailed for the current discussion, but the general theory is fairly straightforward. Since a port number is just a 16-bit binary code, it represents the potential to hook up 65,536 different devices to the CPU.
It gets a little more complicated than that, but still you can think of it from software like a small-town post-office that has a bank of PO boxes for its customers. The next set of pins represent the actual data being exchanged. You can think of this as the postcards being put into or removed from the PO boxes.
A separate pin signals whether the data is being sent to or from the CPU. This was a source of heartburn on those early systems, particularly when adding new equipment. The alternative approach, mapping devices into the memory address space, has some problems of its own, including the fact that it chews up a portion of potential memory that could be used for software instead. When you get down to actually using this in your software, sending or receiving data on port 9 takes a dedicated assembly language instruction. When programming in higher level languages, it gets a bit simpler.
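The assembly listing appears to have been lost from this copy; on the x86 the relevant opcodes are `out` (write a byte to a port) and `in` (read one), typically written as `out 9, al` and `in al, 9`. As a sketch of the concept only, here is a small Python model of a 16-bit port space; the dictionary stands in for the bus decoding, and the names `outb`/`inb` are illustrative, not a real API:

```python
# Toy model of x86 port-mapped I/O: a 16-bit port address space
# (65,536 possible ports), with outb/inb as write/read operations.
# Real port I/O requires privileged instructions or OS services;
# this only simulates the "bank of PO boxes" idea from the text.

PORT_SPACE_SIZE = 1 << 16  # 16 address pins -> 65,536 ports

ports = {}  # sparse map: port number -> last byte written

def outb(port, value):
    """Send one byte to a port (like `out 9, al`)."""
    assert 0 <= port < PORT_SPACE_SIZE and 0 <= value <= 0xFF
    ports[port] = value

def inb(port):
    """Read one byte from a port (like `in al, 9`)."""
    return ports.get(port, 0xFF)  # an unconnected port often reads 0xFF

outb(9, 0x41)   # the article's "port 9" example
print(inb(9))   # -> 65
```

In a higher-level language the same idea usually appears as a pair of library calls (for example `outb()`/`inb()` on Linux), which is what the text means by "it gets a bit simpler".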
And this really is a warning: sending data to arbitrary I/O ports just to see what happens is a genuinely bad idea.
At the minimum, it will crash the operating system and cause the computer to stop working. Worse yet, in some cases it can cause actual damage to the computer. This means that some chips inside the computer will no longer work, and those components would have to be replaced in order for the computer to work again. Damaged chips are an indication of lousy engineering on the part of the computer manufacturer, but unfortunately it does happen, and you should be aware of it.
Finally we are starting to write a little bit of software, but there is more to come. There are a few differences between the 8086 CPU and its successors when it comes to port I/O. With the higher bits of the port number being ignored on early systems, multiple port numbers became aliases for the same port.
In addition, besides simply sending a single character in or out, the 16-bit chips will let you send and receive 16 bits at once. The 32-bit chips will even let you send and receive 32 bits simultaneously.
We will not cover that topic here. The designers of the early PC bus got cheap and only decoded 10 of the address lines, which has implications for software designers having to work with legacy systems.
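As a sketch of this aliasing (assuming the classic 10-bit ISA decode; not every chipset behaves this way), the hardware effectively sees only the low 10 bits of the port number:

```python
# Legacy ISA systems decoded only the low 10 bits of the 16-bit
# port address, so ports that differ only in the upper 6 bits
# "alias" onto the same hardware register.

def isa_decode(port):
    """Model of 10-bit ISA address decoding."""
    return port & 0x3FF  # keep only the low 10 bits

print(isa_decode(0x03F8))  # a common serial-port base -> 1016
print(isa_decode(0x7BF8))  # decodes to the same port   -> 1016
```

This is why a driver that "took advantage" of aliasing on old hardware can misbehave on newer systems that decode all 16 address bits.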
There are other legacy issues that show up, but fortunately for the UART and serial communications in general this isn't a concern, unless you happen to have a device driver that "took advantage" of this aliasing situation.
This issue would generally only show up when you are using more than the typical 2 or 4 serial COM ports on a PC. The 8086 CPU and compatible chips have what is known as an interrupt line. Within the x86 family there are two kinds of interrupts: hardware interrupts and software interrupts. There are some interesting quirks that differ between the two kinds, but from a software perspective they are essentially the same thing.
The CPU allows for 256 interrupts, but the number available for equipment to perform a hardware interrupt is considerably restricted.
There are a total of fifteen different hardware interrupts on a PC. Before you think I don't know how to count or do math, we need to do a little bit of a history lesson here, which we will finish when we move on to the 8259 chip. At the time it was felt that this was sufficient for almost everything that would ever be put on a PC, but very soon it became apparent it wasn't nearly enough for everything that was being added. The point here is that if a device wants to notify the CPU that it has some data ready, it sends a signal that it wants to stop whatever software is currently running on the computer and instead run a special "little" program called an interrupt handler.
Once the interrupt handler is finished, the computer can go back to whatever it was doing before. If the interrupt handler is fast enough, you wouldn't even notice that the handler has even been used. In fact, if you are reading this text on a PC, in the time that it takes for you to read this sentence several interrupt handlers have already been used by your computer.
Every time that you use a keyboard or a mouse, or receive some data over the Internet, an interrupt handler has been used at some point in your computer to retrieve that information. We will be getting into the details of interrupt handlers in a little bit, but first I want to explain just what they are.
Interrupt handlers are a method of showing the CPU exactly what piece of software should be running when an interrupt is triggered. The advantage of going this route is that the CPU only has to do a simple look-up to find just where the handler is, and then transfers software execution to that point in RAM.
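The look-up idea can be sketched in Python; the dictionary plays the role of the interrupt vector table in RAM (a toy model, not how real handlers are installed, and the interrupt number here is only an example):

```python
# Toy interrupt vector table: a lookup from interrupt number to
# handler function, mirroring how the CPU jumps through a table
# in RAM and then returns to the interrupted program.

handlers = {}

def set_vector(num, handler):
    """Install a handler for interrupt `num` (like hooking a vector)."""
    handlers[num] = handler

def raise_interrupt(num):
    """Look up the handler, run it, then 'return' to the caller."""
    return handlers[num]()

set_vector(0x21, lambda: "dos services")
print(raise_interrupt(0x21))  # -> dos services
```

Replacing an entry in the table is exactly what the next paragraph means by changing where the CPU is "pointing" in RAM.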
This also allows you as a programmer to change where the CPU is "pointing" to in RAM, and instead of going to something in the operating system, you can customize the interrupt handler and put something else there yourself. How this is best done depends largely on your operating system. A simple operating system like MS-DOS actually encourages you to directly write your own interrupt handlers, particularly when you are working with external peripherals.
Other operating systems like Linux or MS-Windows use the approach of having a device driver that hooks into these interrupt handlers or service routines, and then the application software deals with the drivers rather than dealing directly with the equipment.
How a program actually does this is very dependent on the specific operating system you would be using. If you are instead trying to write your own operating system, you would have to write these interrupt handlers directly, and establish the protocol on how you access these handlers to send and retrieve data. Before we move on, I want to hit very briefly on software interrupts.
Software interrupts are invoked with the assembly instruction "int", as in "int 0x21". From the perspective of a software application, this is really just another way to call a subroutine, but with a twist. The "software" that is running in the interrupt handler doesn't have to be from the same application, or even made with the same compiler. Indeed, often these subroutines are written directly in assembly language.
Depending on the values of the registers, usually the AX register in this case, DOS can determine just what information you want to get, such as the current time, date, disk size, and just about everything that normally you would associate with DOS. Compilers often hide these details, because setting up these interrupt calls can be a little tricky. Now to really make a mess of things.
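As a toy model of this register-based dispatch (not real DOS code, though AH=0x2A and AH=0x2C do correspond to the classic DOS get-date and get-time services), a single interrupt handler can branch on a register value to pick a service:

```python
# Toy model of DOS-style software-interrupt dispatch: the value
# in the AH register selects which service the int 0x21 handler
# performs. The handler itself is hypothetical Python, standing
# in for assembly living somewhere else in memory.

def int21_handler(ah):
    """Dispatch on the AH 'register' like the DOS services handler."""
    services = {
        0x2A: "get date",
        0x2C: "get time",
    }
    return services.get(ah, "unknown service")

print(int21_handler(0x2C))  # -> get time
```

This branching is what compilers hide behind library calls like a language's date/time functions.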
The difference here is that software interrupts will only be invoked, or have their portion of software code running in the CPU, if they have been explicitly called through this assembly opcode.
Serial Programming/ UART Programming – Wikibooks, open books for an open world
The 8259 PIC is the "heart" of the whole process of doing hardware interrupts. External devices are directly connected to this chip, or in the case of the PC-AT compatibles (most likely what you are most familiar with for a modern PC) there are two of these devices connected together.
The purpose of these chips is to help "prioritize" the interrupt signals and organize them in some orderly fashion. There is no way to predict when a certain device is going to "request" an interrupt, so often multiple devices can be competing for attention from the CPU.
Generally speaking, the lower numbered IRQ gets priority.
There are exceptions to this as well, but let's keep things simple at the moment. When the original PC was built, there was only one of these interrupt controller chips on the motherboard. Since there was still only one pin on the CPU that could receive notification of an interrupt, it was decided to grab IRQ-2 from the original chip and use that to chain onto the second chip. The nice thing about going with this scheme was that software that planned on something using IRQ-2 would still be "notified" when that device was used, even though seven other devices were now "sharing" this interrupt.
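The resulting priority order can be sketched as follows; because the second (slave) controller hangs off IRQ-2, its lines IRQ-8 through IRQ-15 slot in between IRQ-1 and IRQ-3 (a simplification that ignores masking and rotation modes):

```python
# Toy priority resolver for the cascaded AT-style pair of 8259
# interrupt controllers. Lower IRQ number generally wins, but the
# slave controller is chained onto the master's IRQ-2, so all of
# IRQ-8..15 outrank IRQ-3..7.

PRIORITY = [0, 1, 8, 9, 10, 11, 12, 13, 14, 15, 3, 4, 5, 6, 7]

def highest_priority(pending):
    """Return the pending IRQ the CPU services first, or None."""
    for irq in PRIORITY:
        if irq in pending:
            return irq
    return None

print(highest_priority({4, 12}))  # -> 12 (slave lines beat IRQ-3..7)
print(highest_priority({1, 14}))  # -> 1
```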
This is mainly of concern when you are trying to sort out which device can take precedence over another, and how important it would be to be notified when a piece of equipment is trying to get your attention. If you are dealing with software running a specific computer configuration, this priority level is very important. Usually the software really doesn't care, but on some rare occasions you really need to know this fact.
We will visit this concept a little bit more when we get to the UART chip itself. For a typical PC computer system, each serial device has a primary port address associated with it. This primary port address is what we will use to directly communicate with the chip in our software.
Most of these are used to do the initial setup and configuration of the computer equipment by the Basic Input/Output System (BIOS) of the computer, and unless you are rewriting the BIOS from scratch, you really don't have to worry about this. Also, each computer is a little different in its behavior when you are dealing with equipment at this level, so this is something more for a computer manufacturer to worry about rather than something an application programmer should have to deal with, which is exactly why BIOS software is written at all.
I’m going to spend a little time here to explain the meaning of the word register. When you are working with equipment at this level, the electrical engineers who designed the equipment refer to registers that change the configuration of the equipment.
This can happen at several levels of abstraction, so I want to clear up some of the confusion. A register is simply a small piece of memory that is available for a device to directly manipulate. In a CPU like the 8086 or a Pentium, registers are the memory areas used to directly perform mathematical operations, like adding two numbers together.
These usually go by names like AX, SP, etc. There are very few registers on a typical CPU because access to these registers is encoded directly into the basic machine-level instructions.
When we are talking about device registers, keep in mind these are not the CPU registers, but instead memory areas on the devices themselves. How you deal with a device is based on how complex it is and what you are going to be doing.
In a real sense, they are registers, but keep in mind that often each of these devices can be considered a full computer in its own right, and all you are doing is establishing how it will be communicating with the main CPU.
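As a concrete illustration of device registers, the 8250/16550 family exposes its registers as offsets from the serial port's base address; the layout below is the well-known map, with some offsets shared between two registers and selected by the DLAB bit or by read-versus-write:

```python
# Sketch of the 8250/16550 register map, expressed as offsets
# from the UART's base port address. Offsets 0 and 1 double as
# the baud-rate divisor latch when DLAB=1 in the Line Control
# register; offset 2 differs between read (IIR) and write (FCR).

UART_REGISTERS = {
    0: "Receive/Transmit Buffer (Divisor Latch low if DLAB=1)",
    1: "Interrupt Enable (Divisor Latch high if DLAB=1)",
    2: "Interrupt Identification (read) / FIFO Control (write)",
    3: "Line Control",
    4: "Modem Control",
    5: "Line Status",
    6: "Modem Status",
    7: "Scratch",
}

def register_port(base, offset):
    """I/O port to use for a given UART register."""
    return base + offset

print(hex(register_port(0x3F8, 5)))  # Line Status of COM1 -> 0x3fd
```

Reading and writing these eight ports is, in essence, the entire programming interface of the chip, which is what the text means by configuring how the device communicates with the main CPU.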