What is "word" of a CPU

In my Operating Systems class the teacher used a term that left me a little confused: the word of a CPU (Central Processing Unit). He did not go into any explanation of the term, only said that it may have different sizes in bits.

Question

I would like to know what a word is and what relationship it has with the CPU.

Author: Comunidade, 2017-02-20

2 answers

Initial definition

A word is the natural unit of data of an architecture (processor).

Just as in human natural language we have the letter as the smallest unit, the syllable as the first grouping of that smallest unit, and then the word next up in size, in the computer we have the bit as the smallest unit, the byte as the smallest grouping (okay, it may not be quite so), and then the word. But while words in a language vary in size, in current computer architectures all words of a given processor have the same number of syllables (bytes), and since the syllables are also of fixed size, the same number of letters (bits).

When we speak of a word we are talking about data with a fixed size/length/width in bits, the size that that architecture works with best.

In general we are talking about the size of the processor's registers, at least the main ones. There may be other registers with different sizes for specific activities, such as floating-point calculation, vectors, encryption, etc.
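
If you have a C compiler at hand, one rough way to observe the word size is through pointer-sized types, which usually track the register width (plain int and long are less reliable indicators; 64-bit Windows, for example, keeps long at 32 bits). A minimal sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Pointer-sized types usually match the natural word size
           of the architecture the program was compiled for. */
        printf("sizeof(void *)    = %zu bytes\n", sizeof(void *));
        printf("sizeof(size_t)    = %zu bytes\n", sizeof(size_t));
        printf("sizeof(uintptr_t) = %zu bytes\n", sizeof(uintptr_t));
        return 0;
    }

On a typical 64-bit build this prints 8 for all three; on a 32-bit build, 4.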

Sizes

It can range from 1 bit (rare) to 512 bits (rare; it may go higher in the future). The most common size today is 64 bits. 32 bits is also quite common. On small devices 16 or 8 bits still have their place. Nothing prevents odd sizes; it does not have to be a power of 2, even though that is the most common.

It is common, but not mandatory, for the word to also determine the theoretical maximum memory addressing. If the largest possible address has 32 bits, it is better for the processor to have a 32-bit word register so that a pointer fits in a register and can be accessed simply and quickly. Older architectures and some very simple ones (embedded devices) may need more than one register to handle an address. An architecture that requires precise calculations may have registers larger than the largest possible address (e.g. a 64-bit word with 32-bit addressing).
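
The arithmetic behind that relationship is simple: with n-bit addresses, at most 2^n bytes can be addressed. A minimal sketch, using pointer width as a stand-in for address width (real CPUs often implement fewer address bits than the pointer holds):

    #include <stdio.h>

    int main(void) {
        /* With n-bit addresses, at most 2^n bytes are addressable. */
        unsigned bits = (unsigned)(sizeof(void *) * 8);
        if (bits >= 64) {
            /* 1ULL << 64 would be undefined, so state the limit directly. */
            printf("%u-bit pointers: 2^64 bytes (16 EiB) theoretical limit\n", bits);
        } else {
            unsigned long long limit = 1ULL << bits;
            printf("%u-bit pointers: %llu bytes (%llu GiB) addressable\n",
                   bits, limit, limit >> 30);
        }
        return 0;
    }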

In general this is the size with which the processor works best on numbers. A smaller number can sometimes be just as efficient, but there are cases where extra work is spent on alignment. A larger number will need more than one register, is more complicated for the processor to handle, is slower, and often loses the atomicity of the operation.
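
A small C illustration of that last point; the comments assume a 32-bit target:

    #include <stdint.h>

    /* On a 32-bit machine this 64-bit addition does not fit in one
       register: the compiler emits an add for the low half and an
       add-with-carry for the high half, so a concurrent reader can
       observe a half-updated ("torn") value: atomicity is lost. */
    uint64_t add_wide(uint64_t a, uint64_t b) {
        return a + b;
    }

    /* The same operation at the native word size is a single
       instruction on a single register. */
    uint32_t add_native(uint32_t a, uint32_t b) {
        return a + b;
    }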

There are architectures that use the word as the unit of measure for data transfers; again, that is no coincidence, since it can simplify some operations.

Another point is that the instruction size tends to be the word size, at least in RISC architectures. This was more true in the past; today the instruction tends to be smaller, at least in architectures with large words.

Memory allocations often occupy multiples of the size of a word.
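
This is easy to observe in C; a minimal sketch, assuming a typical 64-bit ABI (exact numbers vary by platform):

    #include <stdio.h>
    #include <stdlib.h>

    struct example {
        char c;     /* 1 byte of payload...                        */
        long value; /* ...but the word-sized member forces padding */
    };

    int main(void) {
        /* Struct sizes are rounded up so arrays stay aligned; on a
           typical 64-bit ABI this prints 16, not 9. */
        printf("sizeof(struct example) = %zu\n", sizeof(struct example));

        /* malloc also returns suitably aligned blocks, so even a
           1-byte request occupies word-aligned space. */
        void *p = malloc(1);
        printf("malloc(1) address: %p\n", p);
        free(p);
        return 0;
    }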

There are architectures whose word size has varied. Intel x86, for example, started with 16 bits, then went to 32 bits, and now, at 64 bits, it can handle all 3 word sizes, respectively called WORD, DWORD and QWORD.
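
Those names map directly onto fixed-width C types; a sketch of the convention used by x86 assemblers and the Windows API (the typedefs here are illustrative, not a real header):

    #include <stdint.h>

    typedef uint16_t WORD;  /* 16 bits: the original 8086 word */
    typedef uint32_t DWORD; /* 32 bits: "double word"          */
    typedef uint64_t QWORD; /* 64 bits: "quad word"            */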

In the past the word tried to match the size of a character, but this no longer makes sense.

Table of several known architectures.

 17
Author: Maniero, 2020-06-11 14:45:34

Processors respond to commands from the program (or, by extension, the programmer) through machine language, in the form of binary numbers representing, for example, 0 = 0 volts and 1 = 5 volts. This language is nothing more than the interpretation of a "table" of instructions in which each instruction ("opcode") has a task to perform inside the processor.

These "opcodes" or instructions are stored in program memory (ROM or RAM) and the processor will read, decode and run sequentially one by one.

The entire sequence of events inside the microprocessor chip, from the moment the system is powered on, is controlled by the clock, which sends pulses to electronic components arranged in such a way as to constitute a complex state machine. Each 0 and 1 stored electronically in program memory initializes and drives this state machine, determining the next state.

It usually takes several clock cycles for the system to fully settle (or stabilize), depending on the type of "instruction" it was fed.

The number of instructions the system designer wants will determine the minimum number of bits (zeros and ones) needed to encode the complete set of those instructions. So with 1 bit we have only 2 possible states, or instructions. With 2 bits, 4 instructions (00, 01, 10, 11). With 4 bits, 16 instructions, and so on.
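
The pattern is simply 2^n distinct instructions for n bits; a minimal sketch (the 2^64 value is written out because shifting a 64-bit integer by 64 is undefined in C):

    #include <stdio.h>

    int main(void) {
        /* Number of distinct opcodes that n bits can encode: 2^n. */
        for (unsigned n = 1; n <= 64; n *= 2) {
            if (n < 64)
                printf("%2u bits -> %llu possible instructions\n",
                       n, 1ULL << n);
            else
                printf("%2u bits -> 18446744073709551616 possible instructions\n",
                       n);
        }
        return 0;
    }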

This number of bits is the word of the processor.

But does that mean that with 64 bits more than 18,000,000,000,000,000,000 instructions are possible?

Yes, but to better understand why the word got that big, let's move on...

The handling of each instruction is usually done in two steps: fetch, where the instruction is transferred from memory to the instruction decoder circuit; and the execution proper. See instruction cycle.
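
A toy fetch-decode-execute loop in C may make the split concrete; the 4-bit opcode encoding below is entirely hypothetical:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy machine: each 1-byte instruction holds a 4-bit opcode in
       the high nibble and a 4-bit operand in the low nibble. */
    enum { OP_LOAD = 0x1, OP_ADD = 0x2, OP_PRINT = 0x3, OP_HALT = 0xF };

    int main(void) {
        uint8_t program[] = { 0x15, 0x23, 0x30, 0xF0 }; /* LOAD 5; ADD 3; PRINT; HALT */
        uint8_t pc = 0;   /* program counter */
        uint8_t acc = 0;  /* accumulator    */

        for (;;) {
            uint8_t instr = program[pc++];   /* fetch   */
            uint8_t opcode = instr >> 4;     /* decode  */
            uint8_t operand = instr & 0x0F;

            switch (opcode) {                /* execute */
            case OP_LOAD:  acc = operand;             break;
            case OP_ADD:   acc += operand;            break;
            case OP_PRINT: printf("acc = %u\n", acc); break;
            case OP_HALT:  return 0;
            default:       return 1;         /* unknown opcode */
            }
        }
    }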

Taking the 8-bit 8085 microprocessor as an example: the fastest instructions, usually only one byte long, execute in four clock cycles; the slowest, those in which the processor needs to fetch two more bytes of data from memory, in up to 16 cycles. In all, this processor has 74 instructions, and its clock reached a maximum of 5 MHz.
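
A quick back-of-the-envelope check of those numbers, assuming the 5 MHz maximum:

    #include <stdio.h>

    int main(void) {
        /* At 5 MHz, one clock cycle = 1 / 5,000,000 s = 0.2 us. */
        double cycle_us = 1.0 / 5.0;
        printf("fastest (4 cycles):  %.1f us\n", 4 * cycle_us);  /* 0.8 us */
        printf("slowest (16 cycles): %.1f us\n", 16 * cycle_us); /* 3.2 us */
        return 0;
    }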

As we can see, the old processors were not very efficient in terms of instruction processing time. Higher performance can be achieved: by increasing the clock frequency, which runs into physical (electrical and magnetic) limitations of the buses (interconnections); by increasing the number of external bits, which is also limited by physical space; by reducing the number of cycles needed to execute each instruction, which is currently done by overlapping the instruction fetch cycles with the decoding ones and/or by the use of cache memory; by the parallel execution of instructions, or multiprocessing; or, finally, by increasing the number of internally processed bits, that is, giving the ALU (arithmetic logic unit), the registers and the accumulator(s) greater capacity: 16 bits, 32 bits, 64 bits...

Reviewing the history of microprocessors, the first one, Intel's 4004, had a 4-bit word. Instructions were divided into two "nibbles", that is, 4 bits or half a byte each: the first was the "opcode", the second the modifier. Two more "nibbles" could compose the address or the data of a longer instruction. See the PDF manual for this chip: 4004 datasheet. Although it had an instruction set comparable to that of an 8-bit processor, it could only perform calculations directly on no more than 4 bits (it was designed for a calculator!).

Nowadays processors no longer decode instructions only by means of hard-wired logic, but through microprograms, and they use far more advanced and complex architectures.

Inside each" opcode " is embedded much more information than those old instructions. In addition, the processor, by the way, each of the various processors is able to manipulate and perform calculations with numbers with much more digits and decimal places, in favor of a higher efficiency.

 1
Author: Paulo de Tarso, 2018-05-06 15:01:55