## Another math illiteracy moment

*August 15, 2009*

*Posted by lumixedia in General, history of mathematics, math education, number theory.*

Tags: math illiteracy, number theory


I was recently informed that the Goldbach conjecture is popularly known in China as the “1+1=2” conjecture. As in, “every positive even number can be written as the sum of two primes. For example, 1+1=2.” [Edit–I was told this by a Chinese person who might nevertheless not be representative of how this nickname is understood–see comments.]

When I mentioned that this nickname is not in fact accurate, the person who so informed me got rather annoyed with my pointless pedantry. Why shouldn’t 1 be prime? Why not define a “prime” to be a positive integer with at most two distinct divisors, rather than a positive integer with exactly two distinct divisors? Clearly the “1+1=2” conjecture sounds way cooler than the “2+2=4” conjecture to a layman, and we are talking about popular mathematics here, so why not?
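To see exactly where the two candidate definitions part ways, here is a quick Python sketch (my own illustration, using brute-force divisor counting): they agree everywhere except at 1.

```python
def divisors(n):
    """All positive divisors of n, by brute force."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime_standard(n):
    """Standard definition: exactly two distinct positive divisors."""
    return len(divisors(n)) == 2

def is_prime_relaxed(n):
    """Relaxed definition: at most two distinct positive divisors (admits 1)."""
    return len(divisors(n)) <= 2

# The two definitions disagree only at n = 1.
disagreements = [n for n in range(1, 100) if is_prime_standard(n) != is_prime_relaxed(n)]
print(disagreements)  # [1]
```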

Okay, I guess it might not be immediately obvious why current notation is preferable. Maybe. From a certain perspective. It is also admittedly true, according to Wikipedia, that 1 was indeed widely considered to be prime by mathematicians up to a few hundred years ago. Fine. So let’s temporarily redefine “prime” to mean a positive integer with at most two distinct divisors, and see if it’s acceptable today.

Well, first of all, the conjecture that started this discussion is going to completely change for even numbers of the form 1+p where p is a prime, as in it will become trivial where (as far as I know) it previously was not. If we want to preserve the original Goldbach conjecture, we should state it this way: every even number greater than 2 can be written as the sum of two primes which are not 1.
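Here is a small Python sketch of that triviality (my illustration, not from the original discussion): with 1 admitted as a prime, an even number of the form 1 + p decomposes immediately, while the standard statement still requires two honest primes.

```python
def is_prime(n):
    """Standard primality: exactly two distinct positive divisors."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_pairs(n, allow_one=False):
    """All ways to write even n as p + q, p <= q, under either definition."""
    lo = 1 if allow_one else 2
    ok = lambda m: (allow_one and m == 1) or is_prime(m)
    return [(p, n - p) for p in range(lo, n // 2 + 1) if ok(p) and ok(n - p)]

# 4 = 1 + 3 is a trivially valid decomposition under the relaxed definition...
print(goldbach_pairs(4, allow_one=True))  # [(1, 3), (2, 2)]
# ...but the standard conjecture still needs 2 + 2.
print(goldbach_pairs(4))                  # [(2, 2)]
```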

That’s not too bad, I guess. We can live with that. And the Goldbach conjecture is not (as far as I know) *fundamental* in the study of number theory, so adding the four words “which are not 1” is no big deal.

Let’s look at something *fundamental*, then. Let’s look at the *Fundamental* Theorem of Arithmetic. It is no longer possible to say this: every integer greater than 1 has a unique prime factorization. We should now say this: every positive integer has a unique prime factorization up to powers of 1.
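The loss of uniqueness is easy to exhibit concretely. A short Python illustration (mine, as a sketch): once 1 counts as a prime, a single number acquires infinitely many distinct "prime factorizations" differing only in the power of 1.

```python
from math import prod

# Three "prime factorizations" of 12 under the relaxed definition,
# written as lists of (prime, exponent) pairs.
factorizations = [
    [(2, 2), (3, 1)],
    [(1, 1), (2, 2), (3, 1)],
    [(1, 5), (2, 2), (3, 1)],
]

for f in factorizations:
    assert prod(p**e for p, e in f) == 12  # each multiplies out to 12

# They are genuinely different factorizations, not reorderings of one another.
print(len({tuple(f) for f in factorizations}))  # 3
```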

Oh, well. It’s only adding three words (five if you remove the “greater than 1” in the original and consider the prime factorization of 1 to be the empty product). Besides, as the person who inspired this post retorted, who cares about unique prime factorization? Only head-in-the-clouds number theorists, that’s who.

I can’t technically disagree with that. Yeah, only people interested in math care about unique prime factorization. But considering that “prime” is a math term, you’d think people interested in math should be the only ones whose opinions matter anyway. I’m getting off track, though. The question here is why we care.

The answer is obvious. At the same time, it’s probably sufficiently deep and multifaceted that I won’t actually give it to the satisfaction of anyone who reads this post, but that’s fine. I look forward to much better explanations in the comments.

The first point is that “unique” is much, much better, just conceptually, than “unique up to powers of 1”. I mean, seriously, do you really want to consider $1^n \cdot 2^2 \cdot 3$ (for whatever $n$ you like) to be an acceptable prime factorization of 12? It would completely defeat the purpose of the prime factorization of a number as a decomposition into, in a sense, simplest possible parts. The moment you start trying to compute basic number-theoretic functions, such as the number of positive divisors or the sum of all positive divisors of a given integer, you want to exclude the possibly present power of 1 in the integer’s prime factorization. And that’s only the beginning of what prime factorization is used for.
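To make that computational point concrete, here is a hedged Python sketch (my own): the standard divisor-count formula $d(p_1^{a_1}\cdots p_k^{a_k}) = (a_1+1)\cdots(a_k+1)$ silently assumes no factor of 1 is present, and a stray power of 1 inflates the answer.

```python
from math import prod

def divisor_count_from_factorization(factorization):
    """d(n) = product of (exponent + 1); valid only for primes > 1."""
    return prod(e + 1 for _, e in factorization)

def divisor_count_direct(n):
    """Count divisors by brute force, as ground truth."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# 12 = 2^2 * 3 has 6 divisors, and the formula agrees:
print(divisor_count_from_factorization([(2, 2), (3, 1)]))  # 6
print(divisor_count_direct(12))                            # 6

# Feed it the "equally valid" factorization 1^5 * 2^2 * 3 and it breaks:
print(divisor_count_from_factorization([(1, 5), (2, 2), (3, 1)]))  # 36
```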

Maybe I’m being unfair here. The Fundamental Theorem of Arithmetic can be easily rescued by restating it like this: every integer greater than 1 has a factorization into primes greater than 1. This might be followed by a sentence like “we define this factorization to be the integer’s *basic* factorization”, or whatever. Now we just search-and-replace “prime factorization” by “basic factorization” in any number theory works that mention the former, and we’re okay.

That’s not all we have to do, though. This is a consequence of the Fundamental Theorem of Arithmetic: if $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$ is the prime factorization of $n$, then the number of positive divisors of $n$ is $(a_1+1)(a_2+1)\cdots(a_k+1)$.

To be precise, we should restate it like this: if $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$ where the $p_i$ are distinct primes *greater than 1*, then the number of positive divisors of $n$ is $(a_1+1)(a_2+1)\cdots(a_k+1)$.

In general, every time Wikipedia’s list of arithmetic functions deals implicitly or explicitly with either a fixed prime or a set of primes, we have to add the qualifier “greater than 1”. (Or, I suppose, we could instead opt to break all the additivity and multiplicativity properties. I’d rather not, though.) In specific cases we can probably include 1 or restate things more efficiently, but still it gets to the point where we’d really be better off coming up with a separate term for all primes other than 1. Let’s do that. Let’s define an “oink” to be a number with exactly two distinct factors—that is, any prime greater than 1.

Fast forward a reasonable amount of time in which this becomes accepted notation. Schoolchildren and any non-mathematical adults become hopelessly confused over the difference between a prime and an oink. People who attempt to explain that 1 is not an oink are yelled at for being pointlessly pedantic. Mathematicians have stopped using the term “prime” altogether since “oink” makes more sense in every context you might care to name. Eventually prime drops out of the mathematical vocabulary, oink takes its place, and we have this entire conversation all over again. And again. And again.

Moral of the post: mathematical pedantry usually exists for a really, really good reason.

Being Chinese myself, I think your explanation of “1+1 = 2” is wrong. Many people call the Goldbach conjecture the “1+1” conjecture, where each “1” stands for one prime. (I haven’t actually seen people call it the “1+1=2” conjecture.) Chen’s theorem is often called “1+2”.

In UFDs (e.g. polynomial rings over the integers or a field), it doesn’t generally make sense to talk about primes “greater” than one. There may also be numerous units, though in $\mathbb{Z}$ the only ones are $\pm 1$. So talking about “unique factorization” becomes very clumsy if we take units as primes. For instance, then $(x+1)(x-1)$ and $(2x+2)\left(\tfrac{1}{2}x - \tfrac{1}{2}\right)$ would be two different prime factorizations in $\mathbb{Q}[x]$.

I think it is also conventional in general rings to exclude the unit ideal as a prime ideal.

Ah. You beat me to it. I was just about to mention this. I like the Gaussian integer case: $\mathbb{Z}[i]$. This is a UFD, but $5 = (2+i)(2-i)$.

What we can do is call two elements “associates” if they are the same up to a unit multiple. The UFD statement is then that prime factorization is unique up to associates (in the sense that no matter which associates you pick, the number of primes in the factorization and the powers of the primes are the same). The units in the Gaussian integers are $\pm 1$ and $\pm i$, and 5 is not a prime.
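A quick numerical check of the Gaussian-integer example (a sketch of mine; Python’s built-in complex numbers suffice since the entries are integers):

```python
# In the Gaussian integers Z[i], 5 factors nontrivially:
p = complex(2, 1)   # 2 + i
q = complex(2, -1)  # 2 - i
print(p * q)        # (5+0j): so 5 = (2+i)(2-i), and 5 is not prime in Z[i]

# The associates of 2 + i are its multiples by the units 1, -1, i, -i:
units = [1, -1, 1j, -1j]
print([p * u for u in units])  # [(2+1j), (-2-1j), (-1+2j), (1-2j)]
```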

@soarerz–that’s a relief. My grandfather who just came from China referred to the conjecture casually as “1+1=2” and expected me to understand what he was talking about (he was not the person who then started arguing with me about it, though). That’s how I got my information. Maybe he just misunderstood the name himself/it’s a regional thing/I don’t know.

Thanks for the expanded context, Akhil! I’m slightly confused about your example–how does *not* considering units to be prime prevent (x+1)(x-1) and (2x+2)(0.5x-0.5) from being distinct prime factorizations?

Well, I guess they are different still, but you have to pick one prime in each “equivalence class” of primes (two primes being equivalent if they differ by a unit). Let the set of such chosen elements be $S$.

Then if $x \neq 0$ belongs to this ring, there is a *unique* expression

$x = u \prod_{p \in S} p^{a_p}$, where almost all $a_p = 0$, and $u$ is a unit.

(This is the definition of a UFD. Hilbertthm90 above posted an example besides $\mathbb{Z}$ and polynomial rings.)

As has already been mentioned, there are good reasons not to consider units primes from an abstract point of view. Another is the implication that maximal ideals are prime ideals together with the usual convention that maximal ideals must be proper. We want the quotient of a commutative ring by a maximal ideal to be a field, but the quotient by the entire ring is the trivial ring, which is again by convention not a field because the field axioms require that 0 is distinct from 1.

So the question becomes, why is this a reasonable axiom? Well, any ring in which 0 = 1 must in fact be the trivial ring, and in the trivial ring 0 is invertible. This messes up a lot of other nice theorems we want to be true of fields, so it’s just a bad idea.

But just so I get to play devil’s advocate: one way to state unique prime factorization is to decompose the multiplicative group of $\mathbb{Q}$ as $\mathbb{Q}^\times \cong \{\pm 1\} \times \bigoplus_p \mathbb{Z}$. In other words, there is a sense in which $-1$ is an “extra prime,” and at least one mathematician I’ve read considers it the prime associated to the real valuation, i.e. an “infinite prime”.

“But just so I get to play devil’s advocate: one way to state unique prime factorization is to decompose the multiplicative group of $\mathbb{Q}$ as $\mathbb{Q}^\times \cong \{\pm 1\} \times \bigoplus_p \mathbb{Z}$. In other words, there is a sense in which $-1$ is an ‘extra prime,’ and at least one mathematician I’ve read considers it the prime associated to the real valuation, i.e. an ‘infinite prime’.”

Interesting, but how does it generalize to other number fields to complete the analogy?