
Decimals

As we saw with floating point, binary floats are fast but not always accurate: many decimal fractions have no exact binary representation, so rounding errors creep in. This is exactly why many programming languages also offer a Decimal data type.

How do decimals work?

The language runtime takes full control of the Decimal data type. Each number is stored as an integer value plus a scale (the number of digits after the decimal point). Before an operation, the runtime rescales the operands so both integers sit at the same scale, performs the actual arithmetic on those integers, and finally applies the common scale to the result.
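The scale-align-then-add steps above can be sketched in a few lines. This is a minimal toy model, not any real library's implementation; the `Dec` class and `add` function are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Dec:
    """Toy decimal: represents value * 10**(-scale). Illustrative only."""
    value: int  # unscaled integer digits
    scale: int  # digits after the decimal point

def add(a: Dec, b: Dec) -> Dec:
    # Rescale the operand with the smaller scale up to the larger one,
    # so both integers are at the same level (same power of ten).
    target = max(a.scale, b.scale)
    va = a.value * 10 ** (target - a.scale)
    vb = b.value * 10 ** (target - b.scale)
    # Integer arithmetic is exact, so no binary rounding occurs.
    return Dec(va + vb, target)

result = add(Dec(105, 1), Dec(1234, 4))  # 10.5 + 0.1234
print(result)  # Dec(value=106234, scale=4), i.e. 10.6234
```

Note that only the integers are ever added; the division by the scale's power of ten is deferred until the number is displayed.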

This is exactly why Decimal is accurate and is used in cases such as account balances, where accuracy matters. The trade-off is that it's slower than floating point.
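Python's standard `decimal` module makes the accuracy difference easy to see. A minimal comparison, assuming a balance tracked in tenths:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so repeated additions drift.
balance_float = 0.1 + 0.1 + 0.1
print(balance_float == 0.3)  # False

# Decimal stores base-10 digits, so the same sum is exact.
balance_dec = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(balance_dec == Decimal("0.3"))  # True
```

Passing the values as strings matters: `Decimal(0.1)` would capture the float's binary rounding error, while `Decimal("0.1")` stores the exact base-10 digits.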

$$
\begin{aligned}
&10.5 + 0.1234 \\
&V_A = 105,\ \text{scale} = 1 \\
&V_B = 1234,\ \text{scale} = 4 \\
&V_A(\text{new}) = 105 \times 10 \times 10 \times 10 = 105{,}000 \\
&105{,}000 + 1{,}234 = 106{,}234 \\
&\text{Final Result} = \frac{106{,}234}{10^4} = \mathbf{10.6234}
\end{aligned}
$$
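The worked example can be checked against Python's stdlib `decimal` module, which exposes the same value-plus-scale representation via `as_tuple()` (the exponent is the negated scale):

```python
from decimal import Decimal

a = Decimal("10.5")    # digits (1, 0, 5), exponent -1  -> value 105, scale 1
b = Decimal("0.1234")  # digits (1, 2, 3, 4), exponent -4 -> value 1234, scale 4
print(a.as_tuple())    # DecimalTuple(sign=0, digits=(1, 0, 5), exponent=-1)
print(a + b)           # 10.6234
```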