Binary coded decimal (BCD) is a way of representing decimal numbers in binary: each decimal digit gets its own group of 4 bits (a nibble). So a number that looks like
0000 0000 0000 0000 (16 bits, or 2 bytes) holds four decimal digits, where each set of 4 bits represents one decimal place: the first 4 bits represent the thousands, the next 4 bits the hundreds, and so on.
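The digit-per-nibble packing above can be sketched in Python. This is just an illustrative helper (the function name `to_bcd` is my own), not part of any standard library:

```python
def to_bcd(n: int) -> int:
    """Pack each decimal digit of n into its own 4-bit nibble."""
    result = 0
    shift = 0
    while True:
        result |= (n % 10) << shift  # lowest decimal digit -> lowest nibble
        n //= 10
        shift += 4
        if n == 0:
            return result

# 1234 becomes the nibbles 0001 0010 0011 0100
print(f"{to_bcd(1234):016b}")
```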
This is not a literal conversion between binary and decimal; it is more like a decimal system written with binary digits. The main drawback is that it makes arithmetic harder: plain binary addition on BCD values can produce invalid digit codes and overflows that need an extra correction step.
So if we have, for example, 0001 0001 0001 0001 in BCD, it means 1111 in decimal. But if we read the same bit pattern 0001000100010001 as a plain binary number, it is 4369. So one is not the same as the other!! Do not mix BCD with binary.
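The two readings of the same bit pattern can be checked directly. A small sketch (the helper name `bcd_value` is my own):

```python
def bcd_value(bits: int) -> int:
    """Interpret bits as packed BCD: each nibble is one decimal digit."""
    value = 0
    factor = 1
    while bits:
        digit = bits & 0xF
        assert digit <= 9, "not a valid BCD nibble"
        value += digit * factor
        bits >>= 4
        factor *= 10
    return value

pattern = 0b0001000100010001
print(bcd_value(pattern))  # 1111  (BCD reading)
print(pattern)             # 4369  (plain binary reading)
```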
Now let us talk about an example of addition, say 9 + 7, i.e. 1001 + 0111 in BCD.
Maybe you noticed this leads to an issue: plain binary addition of 1001 and 0111 gives 1 0000 (sixteen), which is not a valid BCD digit, since each nibble may only hold 0 through 9. Whenever a nibble's sum exceeds 9 (or produces a carry), this can be solved by adding 6 (0110), which skips the six unused codes 1010 through 1111 and pushes the correct carry into the next nibble.
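The +6 correction can be sketched as a nibble-by-nibble adder. Again, `bcd_add` is just an illustrative name of mine:

```python
def bcd_add(a: int, b: int) -> int:
    """Add two packed-BCD numbers, applying the +6 fix-up per nibble."""
    result = 0
    carry = 0
    shift = 0
    while a or b or carry:
        s = (a & 0xF) + (b & 0xF) + carry
        if s > 9:
            s += 6          # skip the six invalid codes 1010..1111
        carry = s >> 4
        result |= (s & 0xF) << shift
        a >>= 4
        b >>= 4
        shift += 4
    return result

# 1001 + 0111 (9 + 7) gives 0001 0110, i.e. BCD 16
print(f"{bcd_add(0b1001, 0b0111):08b}")
```

This mirrors what hardware decimal-adjust logic does: binary add first, then correct any nibble that overflowed the decimal range.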
So why use BCD instead of plain binary?
- Easy conversion between decimal and binary: each decimal digit maps directly to one 4-bit group, which is handy for things like driving digit displays.