Using "bytes" and "bits"

What is the use of bytes and bits?

I'm reading some C and C++ programming books that talk about bytes and I wanted to know what these terms are used for.

For example, the table below talks about bytes:

Type    Description                          Size            Range
char    character                            1 byte          -128 to 127 or 0 to 255
int     integer                              2 or 4 bytes    -32768 to 32767 or -2147483648 to 2147483647
float   floating point                       4 bytes         -1.7E38 to 1.7E38 (6-digit precision)
double  floating point with 2x precision     8 bytes         -1.7E38 to 1.7E38 (16-digit precision)
void    empty                                0 bytes
Author: Maniero, 2019-01-31

1 answer

A bit (BInary digiT) is the smallest unit we find in the abstraction of computing. Computers work with pulses (every computer in real use since the end of World War II has been electric) that have a state of on or off. In modern computers this is given by a higher or lower voltage. The logic of the computer is binary, which makes it fast and accurate. Unlike the decimal system we use (based on our fingers), binary has only two states, false and true, and everything is composed from them.

So we have base 2, and we represent all possible digits with 0 and 1. The next number, which would be 2 in decimal, is 10 in binary, because representing a third value requires an extra digit. The next one is 11, and after that comes 100, which is the same as 4 in decimal.
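
To see that counting in practice, here is a minimal C++ sketch (just an illustration; std::bitset is only used to print the binary form):

    #include <bitset>
    #include <iostream>

    int main() {
        // Count from 0 to 8 and show each value in decimal and in binary (4 binary digits)
        for (int i = 0; i <= 8; ++i) {
            std::cout << i << " in decimal = " << std::bitset<4>(i) << " in binary\n";
        }
    }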

When you group 8 of these digits you have one byte (it is so in all current architectures, though another size could have been chosen). It was defined like this because it is a round number (a programmer thinks in binary, and round numbers there are 2, 4, 8, 16, 32, etc.) that meets the main needs well. One of the reasons is that it can map the characters we need to use, such as the ASCII table, which has 128 different characters. That takes 7 digits to represent everything (2 raised to 7 gives 128), rounded up to 8.
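
A quick sketch of this (an illustration only; CHAR_BIT comes from <climits> and is 8 on any current architecture):

    #include <climits>
    #include <iostream>

    int main() {
        std::cout << "bits in one byte: " << CHAR_BIT << "\n";  // 8 on current architectures
        char c = 'A';                                           // ASCII code 65 fits in a single byte
        std::cout << "'A' is stored as the number " << static_cast<int>(c) << "\n";
    }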

There is also the name nibble for 4 bits, but no real data type has only that many bits; the term is used because some things can be represented with just 4 bits, which allows a kind of compression by keeping two values in the same byte (BinarY TErm), just as we can keep 8 different boolean values in 1 byte. In the past there were architectures whose bytes did not have 8 bits. A sketch of that packing idea is shown below.
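
This is just an illustration, with flag positions I picked arbitrarily: 8 boolean values kept in a single byte using bitwise operations:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t flags = 0;            // one byte, room for 8 boolean flags

        flags |= 1u << 3;                  // turn flag number 3 on
        flags |= 1u << 7;                  // turn flag number 7 on
        flags &= ~(1u << 3);               // turn flag number 3 off again

        bool flag7 = (flags >> 7) & 1u;    // read flag number 7
        std::cout << std::boolalpha << flag7 << "\n";  // prints true
    }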

Hence everything we do is composed of bytes, and some data types are composed of a specific number of bytes: an integer typically has 4 bytes, an extended character in UTF-16 format has 2 bytes, a date usually has 8 bytes, a pixel can have 1, 2, 3 or 4 bytes (the last two being the most common today), and so on. In a 4-byte integer we have 32 bits, so 2 raised to 32, a little over 4 billion, is the maximum number of different values that this type can represent.
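
These sizes depend on the compiler and platform, so take the values in the comments as the typical ones; a minimal C++ sketch to check them:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        // sizeof reports the size in bytes; comments show typical values
        std::cout << sizeof(char)   << "\n";   // 1 (by definition)
        std::cout << sizeof(int)    << "\n";   // 4
        std::cout << sizeof(float)  << "\n";   // 4
        std::cout << sizeof(double) << "\n";   // 8

        // A 32-bit unsigned integer holds 2^32 different values, a bit over 4 billion
        std::cout << std::numeric_limits<std::uint32_t>::max() << "\n";  // 4294967295
    }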

A floating point type has a more complex encoding, but it also has 32 or 64 bits, so it can represent only about 4 billion or about 18 quintillion different numbers. It may seem like more because of the way it is calculated: the range goes much further, but it skips many numbers along the way.
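
A small sketch of that "skipping" (the values are ones I picked for illustration; adding 1 to 100000000 changes the 9th significant digit, which a 32-bit float cannot keep):

    #include <iomanip>
    #include <iostream>

    int main() {
        float big  = 100000000.0f;          // representable exactly in a float
        float next = big + 1.0f;            // the +1 is lost in the rounding

        std::cout << std::fixed << std::setprecision(0);
        std::cout << big  << "\n";          // 100000000
        std::cout << next << "\n";          // still 100000000: the type "skips" numbers here
        std::cout << (big == next) << "\n"; // 1 (true)
    }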

Processors are able to handle certain kinds of data better when they come in these formats and byte sizes.

The minimum size we can store, handle or transport is 1 byte. In practice, depending on what you are doing, the minimum can be a word, or even something bigger; in some cases (a memory page or a disk block, for example) the minimum is 4 KB. So a type that only needs 1 bit (because it is boolean) will take up at least 1 byte.
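
A one-line check of that (just an illustration; sizeof reports the size in bytes):

    #include <iostream>

    int main() {
        // A bool carries only 1 bit of information, yet it occupies a whole byte
        std::cout << sizeof(bool) << "\n";   // prints 1 on typical implementations
    }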

A curiosity is that writing KB for this is wrong: KB would be 1000 bytes. But since we need round numbers in binary, a kilo of bytes for us is 1024 bytes, and that should correctly be written KiB. Of course, everyone understands KB as 1024 and not 1000. But then the person buys an HDD, it comes measured in real KB, and the person thinks the manufacturer is cheating them.

Another curiosity: 1 KiB is 1024 bytes, 1 KB is 1000 bytes and 1 Kb is 1000 bits, and of course 1 Kib is 1024 bits. People mix these up all the time.
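
A sketch of the difference in numbers (the 500 GB disk is just an example I picked):

    #include <iostream>

    int main() {
        // Decimal vs. binary "kilo"
        std::cout << "1 KB  = 1000 bytes, 1 KiB = 1024 bytes\n";
        std::cout << "1 Kb  = 1000 bits,  1 Kib = 1024 bits\n";

        // Why a "500 GB" disk shows up as roughly 465 GiB in the operating system
        double disk_bytes = 500.0 * 1000 * 1000 * 1000;      // manufacturer: powers of 1000
        double gib = disk_bytes / (1024.0 * 1024 * 1024);    // OS: powers of 1024
        std::cout << "500 GB = " << gib << " GiB\n";         // about 465.66
    }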

The subject is simple but would fill a chapter of a book if I were to talk about everything, and I imagine you just wanted a summary. The site is full of extra information, as I have already shown in some links.

Bytes are used to measure the space occupied by data, and a byte will always be a set of 8 bits (states of true or false). It is an abstract term created to give meaning to the minimum unit of value that we actually deal with on the computer.

Author: Maniero, 2020-08-25 16:28:01