I recently watched a video that discussed the formulas used for damage falloff in games, and while the math seems reasonable, I feel like it might be inefficient. This concerns only the calculations within the falloff range, since outside that range the damage is fixed.
The formulas presented were:
n = (PlayerDistance - NearFalloff) ÷ (MaxFalloff - NearFalloff)
Then, using that, the damage dealt would be calculated as:
DmgDealt = n * MinDmg + (1.00 - n) * MaxDmg
While this makes sense mathematically, why not simplify it by introducing a variable FalloffDmg = MaxDmg - MinDmg? The minimum damage value stays easy to adjust, but the difference only needs to be calculated once. The formula then becomes: DmgDealt = MaxDmg - (n * FalloffDmg).
Even if the falloff isn't linear, I think the challenge would be in calculating n, not in the damage dealt formula.
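To make the comparison concrete, here is a small sketch (with made-up damage values) showing that the two forms are algebraically identical, since n·MinDmg + (1 − n)·MaxDmg = MaxDmg − n·(MaxDmg − MinDmg):

```python
def damage_original(n, min_dmg, max_dmg):
    # Formula from the video: interpolate from MaxDmg down to MinDmg.
    return n * min_dmg + (1.0 - n) * max_dmg

def damage_simplified(n, min_dmg, max_dmg):
    # Proposed form: FalloffDmg could be precomputed once per weapon.
    falloff_dmg = max_dmg - min_dmg
    return max_dmg - n * falloff_dmg

# Hypothetical values: MaxDmg = 100, MinDmg = 20.
for n in (0.0, 0.25, 0.5, 0.75, 1.0):
    a = damage_original(n, 20.0, 100.0)
    b = damage_simplified(n, 20.0, 100.0)
    assert abs(a - b) < 1e-9  # both forms agree across the whole range
```

So the rewrite saves one multiplication per call (two multiplies and one add become one multiply and one subtract, with the difference hoisted out), but it cannot change the result.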
Thanks for your thoughts!
1 Answer
Honestly, the compiler or JIT (Just-In-Time compiler) will optimize your code anyway, so just write it in whatever way is easiest for you to understand. If you find it runs too slowly later, you can always revisit it then.
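For example, a readability-first version might look like this minimal sketch (parameter names are mine, and it assumes the linear falloff from the question, with damage clamped to the fixed values outside the falloff band):

```python
def damage_at_distance(distance, near_falloff, max_falloff, min_dmg, max_dmg):
    # Outside the falloff band, damage is fixed (as the question notes).
    if distance <= near_falloff:
        return max_dmg
    if distance >= max_falloff:
        return min_dmg
    # Inside the band: linear interpolation from MaxDmg down to MinDmg.
    n = (distance - near_falloff) / (max_falloff - near_falloff)
    return n * min_dmg + (1.0 - n) * max_dmg
```

Written this way, the formula reads exactly like the video's version; an optimizing compiler (or JIT) is free to rearrange the arithmetic into the subtraction form itself, since the two are equivalent.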

I still have a lot to learn about compiling! I get that variables are treated differently in the compiled version, but does the compiler just compress the functions rather than rewrite them? In other words, does the original formula still end up with more steps than mine?