Hacker News

1 is not prime.


I agree; modern definitions exclude 1 because otherwise we lose unique factorization. It's interesting to note [1] that this viewpoint solidified only in the last century.

[1] https://mathenchant.wordpress.com/2025/04/21/is-1-prime-and-...


No, 1 is excluded for reasons closely related to, but not conceptually identical with, the one you mention.

The "intuitive" argument that 1 is prime is that, as with prime numbers, you can't produce it by multiplying some other numbers. That's true!

But where the primes are numbers that are the product of just one factor, 1 is the product of zero factors, a very different status. The argument over whether 1 should be called a "prime number" is almost exactly analogous to the argument over whether 0 should be called a positive integer.†

It's more broadly analogous to the argument over whether 0 should be called a "number", but that argument was resolved differently. "Number" was redefined to include negatives, making 0 a more natural inclusion. If you similarly redefine "prime number" to include non-integral fractions (how?), it might make more sense to call 1 prime.

† Note that there is no Fundamental Theorem of Addition stating that the division of a sum into addends is unique. It isn't, but 0 is the empty sum anyway.


“ But where the primes are numbers that are the product of just one factor, 1 is the product of zero factors, a very different status.”

What do you mean?

The factors of 3 are 3 and 1. The factors of 1 are 1?


6 is the product of the members of the set {2, 3}.

3 is the product of the members of {3}.

1 is the product of the members of the empty set.
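The three cases above can be checked directly in Python: `math.prod` takes `start=1` by default, so the product over the empty set is the multiplicative identity.

```python
from math import prod

# Product over a set of factors; the empty set
# yields the multiplicative identity, 1.
assert prod({2, 3}) == 6
assert prod({3}) == 3
assert prod(set()) == 1
```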


3 is also the product of the members of {3, 1}, of the multiset {3, 1, 1}, etc.

We’re excluding the unit (i.e., the multiplicative identity, 1) when defining these factor sets because including it destroys unique factorization.

That 1 is the unit is also why it’s the value of the product over the empty set: we want the product over a union of disjoint sets to equal the product of the products over each set. But we don’t exclude it from the primes for that reason.
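A quick check of that union property, using `math.prod`: taking one of the two disjoint sets to be empty forces the empty product to be 1.

```python
from math import prod

# For disjoint A and B, prod(A | B) == prod(A) * prod(B).
# With B = the empty set, this forces prod(set()) == 1.
A, B = {2, 5}, {3, 7}
assert prod(A | B) == prod(A) * prod(B)
assert prod(A | set()) == prod(A) * prod(set())
```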


What.

Oh! So it’s like Python’s `reduce(mul, s, 1)`, such that an empty `s` still gets you 1. Alright, that makes sense.
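A runnable version of that sketch (note that `functools.reduce` takes the initial value as a third positional argument, not a keyword):

```python
from functools import reduce
from operator import mul

# With initial value 1, reducing an empty iterable
# returns 1, matching the empty-product convention.
assert reduce(mul, [2, 3], 1) == 6
assert reduce(mul, [], 1) == 1
```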


No, you're wrong. The factors of 3 are 3. 1 has no factors.


To be clear, you are talking about "prime factors". 3 and 1 are both "factors" of 3, but 1 is not a prime factor.


"1 is the product of zero [prime] factors"

This seems to be circular since it assumes that 1 is not prime


darn, I just dated myself back to 1914...

