What are Widening and Narrowing conversions?

I've heard these terms in the context of type conversions in .NET, but I don't know whether they apply to other platforms and/or languages.

I have no idea what these terms mean, so what is a narrowing conversion and what is a widening conversion? What are the differences between them and where do I use each one?

Author: CypherPotato, 2019-08-03

1 answer

Widening

It means taking a value and treating it as something broader; that is, you take a value of smaller magnitude and put it in a type that allows a larger magnitude, so there is never data loss. Some people think a conversion always happens in these cases, but that is not true: it happens in some cases, not in all of them. The most important point is that this kind of operation is safe.

It is not complicated to understand. If you have a number stored in 2 bytes and a memory location prepared to hold a number occupying 4 bytes, the 2 bytes are put there and the remaining 2 bytes are filled with non-significant zeros so the value is not misrepresented (the exact layout depends on the architecture). Something like this (using decimal digits to make it easier to visualize, although in memory it is all binary): imagine you have room for 8 digits somewhere and the original number has only 4:

1234
00001234
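
The same thing can be observed in actual bytes. A minimal C# sketch using BitConverter; the output shown assumes a little-endian architecture, which is the common case:

    short twoBytes = 1234;
    int fourBytes = twoBytes;   // widening: same value, wider slot

    // 1234 is 0x04D2; the int version just gains two non-significant zero bytes
    Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(twoBytes)));  // D2-04
    Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(fourBytes))); // D2-04-00-00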

A very common example is putting an int value into a variable or slot of an expression that expects a long; the value will be treated as a long normally, without problems.

It is common for languages to allow this implicitly (some do not). But that does not mean it never causes problems: there is no loss, but it can produce unexpected results in some more specific situations, for example when going from float to double or from decimal to double, since these types have precision issues. In addition, in some cases a widening operation can even lead to an overflow, while in others it cannot.
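
A minimal C# sketch of both situations (the safe int-to-long widening and the float-to-double surprise); the values are only illustrative:

    int small = 1234;
    long wide = small;      // implicit widening: always safe, nothing is lost

    float f = 0.1f;         // 0.1 has no exact binary representation in float
    double d = f;           // widening to double keeps the float's rounding error
    Console.WriteLine(d);   // prints something like 0.10000000149011612, not 0.1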

Narrowing

It is the opposite: trying to put a value of greater magnitude where only one of lesser magnitude fits. Obviously, in many cases there will be data loss, whether of precision or of accuracy, in some cases with large differences, since there is no room to store all the necessary information. There are cases where the value can be accommodated in the smaller type without major problems. Again, it does not mean there is always a conversion, but without one the chance of error is greater.

Here it is a little more complicated to understand, so I will use the previous example to show what happens. Now you have the original number with the same 4 digits, but only space for 2 of them in memory. What happens?

1234
12

You noticed the loss, right? It gets terrible; the resulting value makes no sense at all. A typical example is trying to put an int value into a short, which is half the size in memory and therefore can represent much smaller values than an int can.
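
A minimal C# sketch of that loss (70,000 is just an illustrative value that does not fit in 16 bits):

    int original = 70_000;
    short narrowed = (short)original;   // only the low 16 bits survive
    Console.WriteLine(narrowed);        // 4464 -- nothing like the original value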

Another example that people do not realize, and which makes its use dangerous, is trying to put an unsigned value into a signed type. Although they have the same number of bits, one of those bits is used for the sign, so the maximum value the signed type allows is lower. There are cases of a loop becoming infinite because of this: the value grows until, at some point, it "out of nowhere" becomes negative, and then it never reaches the condition that would end the loop, and you did not even see the value change. This is one of the reasons people say to avoid unsigned types as much as possible.
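
A minimal C# sketch of that sign surprise (3,000,000,000 is just an illustrative value that fits in uint but not in int):

    uint big = 3_000_000_000;               // fits comfortably in uint
    int signedView = unchecked((int)big);   // same 32 bits, but one of them is now the sign bit
    Console.WriteLine(signedView);          // -1294967296: the value "out of nowhere" became negative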

It is also common for an arithmetic operation to produce a value that overflows the type normally used, and a narrowing can then happen implicitly. For example, multiplying 1 billion by 1,000 does not fit in an integer, even though 1 billion originally fit; and if everything around the expression expected an int, some loss may occur, because there was an implicit widening in the multiplication and then a narrowing to fit the result back into the space it originally had.
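
In C#, specifically, multiplying two ints is itself done in int, so the overflow happens before any widening; a minimal sketch (the numbers are only illustrative):

    int billion = 1_000_000_000;
    long wrong = billion * 1000;        // the multiplication overflows in int first, then the garbage is widened
    long right = (long)billion * 1000;  // widening one operand first makes the multiplication happen in long
    Console.WriteLine(wrong);           // -727379968
    Console.WriteLine(right);           // 1000000000000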

No completely sane language should let this happen implicitly (but almost all of them allow it one way or another; some change the representation of the number so it fits, but .NET does not). It is complicated for languages to prevent this in all circumstances, and doing so would make everything very inefficient. C# has a way to turn checking on in some circumstances (it does not solve every problem). Even when done explicitly, one must be careful, because there can be loss if you are not sure what you are doing, something like (short)1234. But there are cases where there is no problem; in the example below the narrowing goes from 8 digits to 4 and there is no loss:

00001234
1234
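
The checking mentioned above is C#'s checked context; a minimal sketch of the difference it makes (the values are only illustrative):

    int fits = 1_234;
    int doesNotFit = 70_000;

    short ok = (short)fits;                     // 1234 fits in 16 bits: no loss
    short wrapped = (short)doesNotFit;          // default (unchecked): silently wraps to 4464
    short guarded = checked((short)doesNotFit); // checked: throws OverflowException instead of losing data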

Of course, the subject is more complex; it involves other types whose values can be interpreted as other types, with or without loss, depending on whether the target type is broader or narrower, but the basic idea is this.

Hierarchy

This can also happen with hierarchies. If an object derives from another, it is common for it to have new fields that the base did not have (not that common, but reasonably so); if you try to put the derived type in a place that was expecting the base type, it does not fit, so something will be lost. It might not cause a big problem right away, because your code will not be able to access anything of the derived type anyway, but if you later try to take that object as the derived type, the data specific to the derived type will not be there. This is called slicing.

Luckily .NET does not let you do this: something like that could only happen with reference types, and with those there is always room in memory for the whole derived object (thank indirection for that), while value types have no inheritance, so there is no way for this phenomenon to happen (in C/C++ it can). So I will not go into more detail.

Then it starts to get confusing, because when you convert from a derived type to a base type there is loss, if not of data then of behavior (some methods that exist in the derived type may not exist in the base type, which is very common), yet this is called widening, since the type is more general.

And when you convert the base type to a derived type it is called narrowing, since it becomes a more specific type, but this is a case where it gains capability instead of losing it.

But I understand the motivation: widening always works (at least if it compiles), after all the original type is always compatible with its base; the opposite does not always work, since the object needs to be compatible and it may not be, so it needs to be explicit. In that respect, narrowing is consistent.
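
A minimal C# sketch of the two directions (Animal and Dog are hypothetical types used only for illustration):

    Animal a = new Dog();        // "widening"/upcast: always compiles and always works
    // a.Bark();                 // the Dog-specific behavior is not visible through Animal

    Dog d = (Dog)a;              // "narrowing"/downcast: must be explicit and can fail
    d.Bark();                    // fine here, because the object really is a Dog

    Animal other = new Animal();
    // Dog oops = (Dog)other;    // would throw InvalidCastException at runtime

    class Animal { }
    class Dog : Animal
    {
        public void Bark() { }
    }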

I think in these cases they should simply be called upcasting and downcasting, respectively.

Author: Maniero, 2020-06-11 14:45:34