I'm experiencing an issue when dividing decimal values: the returned result is very close to the intended value but not exactly correct. For example, in the attached screenshot, the first 2 rows return values ending in trailing .9s when they should have clean answers (like the 3rd row).
In my research I'm realizing this may be an issue with floating-point math in general. However, there are usually solutions to issues like this: using a decimal type instead of a float type for true decimal values (which, for my purposes, these numbers are). Is there a way to define these numbers with a decimal data type so these division calculations come out correctly? If not, is there another clean solution aside from rounding every division calculation going forward?
For added context, there are cases where divisions are an intermediate step in larger calculations, so I'd strongly prefer not to round mid-process if it can be avoided.
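For anyone unfamiliar with the behavior I'm describing, it's easy to reproduce outside the tool. A minimal Python sketch (assuming Python purely for illustration; the same effect appears in any language using binary floats, and Python's standard `decimal` module shows the base-10 alternative I'm asking about):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.3 or 0.1 exactly,
# so the division returns a value merely close to 3.
float_result = 0.3 / 0.1
print(float_result)  # e.g. 2.9999999999999996, not 3

# A decimal type stores the values in base 10, so the same
# division comes out exact.
decimal_result = Decimal("0.3") / Decimal("0.1")
print(decimal_result)  # 3
```

Note that the Decimal values are constructed from strings; constructing them from float literals (e.g. `Decimal(0.3)`) would carry the binary rounding error into the decimal value.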