Every rational proper fraction has an eventually repeating decimal (which may terminate in zeros as a special case).
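(A quick way to see why: long division by the denominator can only pass through finitely many remainders, so some remainder must eventually recur, and from that point the digits repeat. Here is a minimal Python sketch of that, mine and purely for illustration; the helper name decimal_expansion is made up.)

```python
# Minimal sketch (mine, for illustration): extract the repeating part of p/q by
# doing long division and watching for the first remainder that comes back.
def decimal_expansion(p, q, base=10):
    """Return (non-repeating digits, repeating digits) of the proper fraction p/q."""
    digits, seen = [], {}            # seen: remainder -> position where it first appeared
    r = p % q
    while r and r not in seen:
        seen[r] = len(digits)
        r *= base
        digits.append(r // q)
        r %= q
    if r == 0:
        return digits, []            # terminating: the expansion "repeats" zeros
    start = seen[r]
    return digits[:start], digits[start:]

print(decimal_expansion(1, 7))       # ([], [1, 4, 2, 8, 5, 7])   i.e. 0.(142857) repeating
print(decimal_expansion(1, 8))       # ([1, 2, 5], [])            i.e. 0.125000...
print(decimal_expansion(7, 12))      # ([5, 8], [3])              i.e. 0.58333...
```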
So do I have two proper fractions here? The first has n decimal digits, with the (n+1)st digit equal to x, followed by an infinite string of 9’s. The second has the same first n digits, but its (n+1)st digit is y = x + 1 instead of x, followed by an infinite string of 0’s. Why are they not adjacent rational proper fractions? So I asked ChatGPT, and here’s what it told me:
ChatGPT:
“They’re the same number—not two different rationals—so they can’t be ‘adjacent.’”
“… This is the general version of 0.\overline{9} [a nine with a line over it] = 1. (If x = 9, the carry just propagates left, possibly into the integer part, and the equality still holds.)
Separately, even if they were different, the rational numbers are dense: between any two distinct rationals lie infinitely many rationals, so the notion of ‘adjacent’ rationals doesn’t exist.”
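For the record, exact rational arithmetic agrees. Here’s a quick check in Python, mine rather than ChatGPT’s, with the sample digits chosen arbitrarily:

```python
from fractions import Fraction

# Illustration values (mine): n = 2 leading digits "12", then x = 4,
# so we compare 0.124999... with 0.125000...
n, lead, x = 2, 12, 4

# 0.12 4 999... : digits 124 in the first n+1 places, plus the tail of 9's,
# whose geometric series sums to 9/10^(n+2) * 1/(1 - 1/10) = 1/10^(n+1)
first = Fraction(lead * 10 + x, 10**(n + 1)) + Fraction(9, 10**(n + 2)) * Fraction(10, 9)

# 0.12 5 000... : just the terminating decimal with (n+1)st digit x + 1
second = Fraction(lead * 10 + x + 1, 10**(n + 1))

print(first, second, first == second)   # 1/8 1/8 True: one rational, two spellings
```

The tail of 9’s is worth exactly 1/10^(n+1), one unit in the (n+1)st place, so the two spellings name the same rational.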
I think: “I know! I know! Which is why I brought it up.” So I ask, “When does ‘close enough for government work’ become a viable mathematical concept?”
ChatGPT:
“ha—love the line. In math, “close enough for government work” shows up in very specific, well-policed ways. It’s never “meh, that’s fine,” but rather “we can make the error as small as we like,” or “the error is below a clearly stated tolerance.” A quick map:
- Since Archimedes: approximations with provable bounds (e.g., π between 3 10/71 and 3 1/7). That’s the original, respectable “close enough.”
- Cauchy’s ε–δ (1800s): the formal version of “as close as you want.” Statements like “for every ε>0 there exists…” aren’t hand-wavy; they quantify closeness.
- “Almost everywhere” (1900s): in measure theory we accept properties that hold except on a set of measure zero—precisely the sense in which we say “ignorable,” not sloppy.
- Asymptotics: f(n) = g(n) + o(1), O(·), etc.—we control the discrepancy as n grows.
- Numerical analysis (modern): we pick a tolerance and prove we meet it.
- Forward error: |x_comp − x_true|.
- Backward error: how much you’d have to perturb the data so your computed answer is exact.
- Condition number κ: how input errors amplify in the output; tells you what a sensible tolerance is.
- IEEE-754 floating point: everything is rounded to the nearest representable number; we talk in ULPs (units in the last place). You stop an iteration when the residual (e.g., ∥Ax−b∥) is below a target tied to κ and machine epsilon.
- Applied/experimental work: significant figures, error bars, and confidence intervals—again, quantified “close enough.”
So: the viable mathematical concept is not the slogan but the discipline of bounding error. If you can (i) state a tolerance, (ii) show your method’s error is ≤ that tolerance, and (iii) argue the problem’s conditioning makes that tolerance meaningful, you’re doing mathematics—not hand-waving.
Two handy rules of thumb:
- If your data are uncertain at scale σ, demanding answers tighter than ~κ σ is fake precision.
- When comparing decimals, remember identities like 0.\overline{9} = 1: “closeness” isn’t needed when equality is exact.”
Yeah, yeah, yeah!
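Postscript from me, not ChatGPT: here’s what that error-bounding vocabulary looks like in actual code. This is a minimal sketch; the 2×2 matrix, noise scale σ, and random seed are made-up illustration values. It prints a forward error, a residual, a condition number κ, machine epsilon (one ULP at 1.0), and then checks the ~κσ “fake precision” rule of thumb.

```python
import numpy as np

# Made-up illustration values: a mildly ill-conditioned 2x2 system.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

x_comp = np.linalg.solve(A, b)

forward_err = np.linalg.norm(x_comp - x_true)   # |x_comp - x_true|
residual    = np.linalg.norm(A @ x_comp - b)    # small residual => x_comp solves a nearby system
kappa       = np.linalg.cond(A)                 # condition number (2-norm)
eps         = np.finfo(float).eps               # machine epsilon; also one ULP at 1.0

print(f"forward error : {forward_err:.2e}")
print(f"residual      : {residual:.2e}")
print(f"kappa         : {kappa:.2e}")
print(f"kappa * eps   : {kappa * eps:.2e}  (roughly the best accuracy we can hope for)")

# Rule-of-thumb check: if the data b are only known to scale sigma,
# the answer is only good to about kappa * sigma.
sigma = 1e-6
rng = np.random.default_rng(0)
b_noisy = b + sigma * rng.standard_normal(b.shape)
x_noisy = np.linalg.solve(A, b_noisy)
print(f"shift from sigma-sized noise: {np.linalg.norm(x_noisy - x_comp):.2e}"
      f"  vs kappa * sigma = {kappa * sigma:.2e}")
```

The noise-induced shift should land within an order of magnitude of κσ, which is exactly the point of that last rule of thumb.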