I've noticed that most programming languages, like C and Python, primarily use float and double types for representing non-integer numbers. This often leads to floating-point precision errors, especially with 4-byte single-precision types. I believe there's no real downside to a primitive type restricted to rational numbers, since any rational number can be expressed as a pair of integers (numerator and denominator), and a rational built from two 4-byte integers would be the same size as a double in C. This approach could avoid the precision errors inherent in floating-point types. Additionally, floats and doubles are encoded in a binary scientific-notation-like format (sign, exponent, mantissa), which complicates bitwise manipulation. I'm curious why industries that prioritize precision, like NASA, don't leverage rational types more frequently.
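The kind of error I mean is easy to reproduce; for example, in Python:

```python
from fractions import Fraction

# Classic binary floating-point rounding error: neither 0.1 nor 0.2
# has an exact binary representation, so the sum misses 0.3.
print(0.1 + 0.2 == 0.3)  # False

# The same sum as a pair-of-integers rational type is exact.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```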
5 Answers
There are a couple of reasons for this. First off, rational data types are generally not supported natively in hardware. Floating-point and integer types usually meet the needs of most applications. Libraries can handle the few instances that need rational types, and expecting everyone to adopt specialized libraries adds unnecessary complexity for most programmers.
Right, and I've seen cases where the standard library doesn't include a rational type at all, which makes using one in languages like C or C++ even trickier.
Ultimately, rational data types do have their applications, but their growth in mainstream programming is limited. The balance between performance and precision favors floats, particularly for general computing needs. You can find libraries that implement rational types in languages like Python if you need them, but they aren't likely to become standard as a default in most languages anytime soon.
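As a concrete example, Python's standard-library `fractions` module implements exactly this numerator/denominator pairing:

```python
from fractions import Fraction

# Summing 0.1 ten times with floats drifts away from 1.0...
print(sum([0.1] * 10) == 1.0)  # False

# ...while the same computation with a rational type is exact.
print(sum([Fraction(1, 10)] * 10) == 1)  # True
```

Because it's a library type rather than a primitive, you opt in only where exactness matters.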
Exactly, and finding the best trade-off between precision and performance is key for most software development. Rational types are more niche.
Right, and if a rational type does become necessary, it's typically for very specialized tasks, not something most software needs to deal with daily.
Speed is another major factor. Often, precision errors in floating-point calculations just aren't a concern for many applications. If you’re working with measurements that can vary significantly, you may not need absolute precision. Developers also favor quick results, and nobody wants to deal with delays, especially with large datasets.
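If you want to see the overhead for yourself, here's a rough (machine-dependent) sketch using Python's `fractions` and `timeit` modules; the absolute numbers don't matter, only the ratio:

```python
from fractions import Fraction
import timeit

# Sum 100,000 terms each way. Float addition maps to a hardware
# instruction; each Fraction addition does bignum arithmetic plus a gcd.
float_time = timeit.timeit(lambda: sum(0.001 for _ in range(100_000)), number=1)
frac_time = timeit.timeit(lambda: sum(Fraction(1, 1000) for _ in range(100_000)), number=1)
print(f"float: {float_time:.4f}s  Fraction: {frac_time:.4f}s")
```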
Exactly, managing fractions can lead to a lot of overhead. Plus, in many fields, like engineering, the precision offered by floats is generally adequate.
Totally! Faster computations are often prioritized when the precision loss is negligible. Performance is key in most software.
Interestingly, IEEE 754 floating-point numbers are themselves rational numbers. Both formats (floating-point and rational) can approximately represent a range of rational numbers, but floating-point is designed to maintain consistent performance across a vast scale of values. The benefits of floating-point operations often outweigh the issues with precision errors, especially in the scope of modern applications.
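You can inspect this directly in Python: `float.as_integer_ratio` returns the exact rational that a given double stores:

```python
# A finite IEEE 754 double is sign * mantissa * 2**exponent, so every
# finite float is exactly some integer over a power of two.
print((0.5).as_integer_ratio())  # (1, 2)

num, den = (0.1).as_integer_ratio()
print(den & (den - 1) == 0)  # True: the denominator is a power of two
print(num / den == 0.1)      # True: the ratio reproduces the stored value exactly
```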
That's a good point! And as you said, the range that floats provide is crucial for many use cases that require handling very small or very large numbers effectively.
True, but having to reduce your rationals to lowest terms after every calculation is added complexity that most developers would prefer to avoid.
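That reduction amounts to a gcd after every operation. A minimal sketch of what a rational add has to do under the hood (the `rational_add` helper here is hypothetical, not any particular library's API):

```python
from math import gcd

def rational_add(a_num, a_den, b_num, b_den):
    """Add a_num/a_den + b_num/b_den, reducing the result to lowest terms."""
    num = a_num * b_den + b_num * a_den
    den = a_den * b_den
    g = gcd(num, den)
    return num // g, den // g

print(rational_add(1, 6, 1, 3))  # (1, 2): 1/6 + 1/3 = 9/18, reduced to 1/2
```

Skip the gcd and the numerator and denominator grow without bound; do it every time and you pay that cost on every single operation.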
Believe it or not, NASA uses a mix of numeric representations suited to specific tasks. Historically they've relied on floating-point numbers, which provide the required precision for trajectory calculations, tolerating minor errors that are corrected through real-time course adjustments.
Yeah, they handle approximate values all the time, where minor inaccuracies are simply part of the equation. Getting things correct within a set tolerance usually works.
Exactly, and given the distances involved in space missions, their margin for error is much more about the equipment's reliability than strict numeric precision.

Exactly, and as the denominator gets larger in rational representations, speed can suffer, plus they can consume more memory.
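A quick illustration of that growth, iterating the logistic map x → r·x·(1−x) with Python's exact `Fraction` type:

```python
from fractions import Fraction

x = Fraction(1, 3)
r = Fraction(7, 2)
for _ in range(8):
    # Each multiplication roughly squares the denominator,
    # so its size in bits roughly doubles per iteration.
    x = r * x * (1 - x)

print(x.denominator.bit_length())  # hundreds of bits after just 8 steps
```

A double would give an approximate answer in a handful of nanoseconds per step; the exact rational answer gets exponentially more expensive to store and compute with.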