Division by 0 must always be a CPU Exception
In my previous blog about binary division, I stated that the first rule of division is to raise an exception if the divisor equals 0. That is different from returning a result: the CPU will no longer be executing normally after such an operation:
Quotient = dividend / 0
Modulus = dividend % 0
A binary division normally returns two results: a quotient and a remainder. If the absolute (positive) value of the divisor is greater than the absolute (positive) value of the dividend, the quotient is always 0 and the remainder equals the signed (original) dividend.
The quotient is negative if the signs of the dividend and the divisor differ. This can be checked with a bitwise XOR (Exclusive OR):
Is Negative Quotient = (Dividend AND signbit) XOR (Divisor AND signbit)
The modulus (remainder) is negative if the dividend was negative.
The division itself is done on the absolute value of the dividend and the absolute value of the divisor.
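Putting those rules together, here is a minimal C sketch of signed division built on top of the absolute values. The function name signed_divide and the divresult_t struct are purely illustrative, and the sketch assumes the divisor is not 0 and the dividend is not the most negative representable value (whose absolute value would not fit):

#include <stdint.h>
#include <stdlib.h>   /* llabs() */

typedef struct { int64_t quotient; int64_t remainder; } divresult_t;

divresult_t signed_divide(int64_t dividend, int64_t divisor)
{
    /* Quotient is negative when exactly one operand is negative (XOR of the sign bits). */
    int negative_quotient  = (dividend < 0) ^ (divisor < 0);
    /* Remainder takes the sign of the dividend. */
    int negative_remainder = (dividend < 0);

    /* The division itself works on the absolute values. */
    uint64_t q = (uint64_t)llabs(dividend) / (uint64_t)llabs(divisor);
    uint64_t r = (uint64_t)llabs(dividend) % (uint64_t)llabs(divisor);

    divresult_t result;
    result.quotient  = negative_quotient  ? -(int64_t)q : (int64_t)q;
    result.remainder = negative_remainder ? -(int64_t)r : (int64_t)r;
    return result;
}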
So those are all fine. However, dividing by 0 is not. It simply must not happen. Here is why:
In order to find out what dividing by 0 means, one must first try different approaches:
If multiplication is a series of additions, then division can be seen as a series of subtractions (not always literally true). If we repeatedly subtract the divisor from the dividend until we reach 0 (or something smaller than the divisor), the number of subtractions yields the quotient. If we try that with a divisor of 0, we keep subtracting forever without ever reaching 0, so the quotient can be seen as infinite (∞), because the subtraction goes on an infinite number of times.
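To illustrate that view, here is a small C sketch of division by repeated subtraction. It handles non-negative integers only, and real hardware dividers do not actually work this way:

#include <stdint.h>

uint64_t divide_by_subtraction(uint64_t dividend, uint64_t divisor, uint64_t *remainder_out)
{
    uint64_t quotient  = 0;
    uint64_t remainder = dividend;
    /* If divisor were 0, the loop condition would stay true forever (we would
     * keep subtracting nothing) - exactly the "infinite quotient" problem
     * described above. The explicit check avoids hanging the sketch. */
    while (divisor != 0 && remainder >= divisor) {
        remainder -= divisor;
        quotient++;
    }
    if (remainder_out)
        *remainder_out = remainder;
    return quotient;
}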
However, there are more ways to look at division, such as making the divisor smaller and smaller until it approaches 0, and looking at the results:
1 / 1 = 1
1 / 0.1 = 10
1 / 0.01 = 100
1 / 0.001 = 1000
...
1 / 0 = ∞
So a division by 0 equals ∞ (infinity) then? Well, let's try another dividend:
2 / 1 = 2
2 / 0.1 = 20
2 / 0.01 = 200
2 / 0.001 = 2000
...
2 / 0 = ∞
So 1 / 0 equals ∞ and 2 / 0 equals ∞. If both equal the same thing, then 1 / 0 = 2 / 0, which would mean 1 equals 2. 1 = 2... That's an absolute fail! Let's try this:
1 / -1 = -1
1 / -0.1 = -10
1 / -0.01 = -100
1 / -0.001 = -1000
...
1 / 0 = -∞
Now division by 0 equals -∞. There is no single number that makes sense for all of these results!
The answer to division by 0 is undefined. Computers must always work with defined values, so the correct thing to do is have the CPU raise a "Divide By Zero" exception. From there, an exception handler (not us) will hopefully know what to do; otherwise the computer has to be turned off and on again to get the CPU out of that state.
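As a concrete illustration, on x86-64 Linux the CPU's divide error exception (#DE) is caught by the kernel and delivered to the offending program as the SIGFPE signal. A minimal sketch, assuming a POSIX system:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Returning from a SIGFPE handler after an integer divide-by-zero is
 * undefined behaviour, so the handler just reports the event and exits. */
static void on_fpe(int sig)
{
    (void)sig;
    static const char msg[] = "caught SIGFPE: divide by zero\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(int argc, char **argv)
{
    (void)argv;
    signal(SIGFPE, on_fpe);
    int divisor = argc - 1;           /* 0 when the program is run with no arguments */
    printf("%d\n", 10 / divisor);     /* divisor == 0 raises #DE, delivered as SIGFPE */
    return 0;
}

Run with no arguments, the division traps and the handler's message is printed instead of a result; run with any argument, it prints a normal quotient.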
Everyone makes mistakes, and programmers will sometimes divide by 0, even when they didn't plan to. Such a mistake is among the most costly. When a floating-point value runs out of precision and underflows, it will more than likely become 0.0, and using it as a divisor triggers a "Divide By Zero" exception (or, with the default masked floating-point settings, silently produces infinity, which is often just as bad).
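The boring but effective defence is to check the divisor before dividing. A minimal sketch; the name safe_divide and the 1e-12 threshold are arbitrary example choices, not a standard API:

#include <math.h>
#include <stdbool.h>

/* Refuses to divide when the divisor is 0, or so close to 0 that the
 * result would be meaningless. */
bool safe_divide(double dividend, double divisor, double *result)
{
    if (fabs(divisor) < 1e-12)
        return false;                 /* let the caller decide what to do instead */
    *result = dividend / divisor;
    return true;
}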
During an exception, the CPU executes handler code that tries to fix the original problem that caused the exception. That code must not cause an exception itself. If the CPU cannot deliver the divide error properly (for example, its handler faults while being invoked), a "Double Fault" exception occurs. If handling the double fault fails as well, the CPU triple-faults and shuts down or resets. (That means exactly what it says. If that CPU is in charge of a critical operation, such as a nuclear facility or worse, that's truly the end.)
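For reference, the exceptions involved are distinct vectors on x86; a short summary (the numbering comes from the Intel/AMD architecture manuals, shown here as a C enum purely for illustration):

/* x86 exception vectors relevant to the scenario above. */
enum x86_exception_vector {
    X86_DIVIDE_ERROR = 0,   /* #DE: raised on integer division by 0 */
    X86_DOUBLE_FAULT = 8    /* #DF: raised when the CPU fails to deliver a prior
                               exception; if handling the double fault fails too,
                               the CPU triple-faults and resets (no vector exists) */
};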