Examples of common false beliefs in mathematics

  • The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

    Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are

    (i) a bounded entire function is constant;
    (ii) $\sin z$ is a bounded function;
    (iii) $\sin z$ is defined and analytic everywhere on $\mathbb{C}$;
    (iv) $\sin z$ is not a constant function.

    Obviously, it is (ii) that is false. I think probably many people visualize the extension of $\sin z$ to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.

    A second example is the statement that an open dense subset $U$ of $\mathbb{R}$ must be the whole of $\mathbb{R}$. The "proof" of this statement is that every point $x$ is arbitrarily close to a point $u$ in $U$, so when you put a small neighbourhood about $u$ it must contain $x$.

    Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.

    I have to say this is proving to be one of the more useful CW big-list questions on the site...

    The answers below are truly informative. Big thanks for your question. I have always loved your posts here on MO and on your WordPress blog.

    Wouldn't it be great to compile all the nice examples (and some of the most relevant discussion/comments) presented below into a little write-up? That would make for a highly educative and entertaining read.

    It's a thought -- I might consider it.

    Most examples are fantastic, especially for those preparing for qualifying/comprehensive exams.

    In addition to common false beliefs, I find something somewhat amusingly alleged to be a common false belief: Some time around 2003 or 2004, when Wikipedia was less developed than it later became, its article about the product rule asserted that the derivative of a product of two functions is different from what "most people think" it is. Then it said, "Most people think that $(fg)' = f'g'$."

    It's almost surely time for this to be closed. Flagging for moderator attention.

    I would vote to close at this point if I didn't have superpowers. It is a great question, but perhaps 17 months is long enough.

    Sorry for being late. Two common false beliefs: 1. Any ring epimorphism is surjective. 2. Suppose we are given a short exact sequence $X' \to X \to X''$ in an abelian category $A$. If a full subcategory $B$ of $A$ contains $X'$ and $X$, but not $X''$, then $X' \to X$ does not have a cokernel in $B$. (Wrong for $A = \mathbb{Z}\text{-Mod}$, $B$ the full subcategory of free $\mathbb{Z}$-modules, and $(X' \to X \to X'') = (\mathbb{Z} \xrightarrow{\,2\,} \mathbb{Z} \to \mathbb{Z}/2)$.)

    I vote not to close

    @Matthias: the epimorphism thing might stem not so much from a false belief as from unfortunate terminology. For many people, the **definition** of epimorphism **is** surjective homomorphism. Presumably this definition predates the category-theoretic one by many decades.

    @Thierry: As far as I know, "epimorphism" is Bourbaki terminology. I think Weil insisted on not mixing Greek and Latin at this point. So yes, you're right, since Bourbaki's point of view is "sets with structure", the definition via surjectivity is the original one.

    Dear @Matthias, what was the proposed mixture of Greek and Latin ?

    @GeorgesElencwajg I think the point is that surjective homomorphism would be such a mix (the former being 'Latin' and the latter 'Greek', at least in an etymological sense).

    @quid: yes, that's a possibility. I know that long ago some purists objected to *television* for the same reason.

    @Georges Elencwajg: if I recall correctly, someone suggested "unimorphism" (Latin/Greek-mixture), but Weil insisted on "monomorphism".

    This is such a wonderful question!

    Over $200$ false beliefs so far… maybe there are even more true beliefs, but they are certainly not as popular!

    One typical mistake in matrix algebras: positive matrices must have positive entries. (However, for example, $\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$ is positive as well, since this matrix is self-adjoint and has non-negative eigenvalues.)
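    A quick numerical sanity check of that example (a minimal numpy sketch; "positive" here means self-adjoint with non-negative spectrum):

    ```python
    import numpy as np

    # Matrix from the comment above: it has negative entries,
    # yet it is positive (real symmetric with non-negative eigenvalues).
    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])

    assert np.allclose(A, A.T)      # self-adjoint (real symmetric)
    print(np.linalg.eigvalsh(A))    # [0. 2.] -- all eigenvalues >= 0
    ```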

    Students thinking that the field $\mathbb{F}_4$ is the ring $\mathbb{Z}/4\mathbb{Z}$...

    People think that in a complete lattice $T$, if $M\subset T$, then $\inf M\leq \sup M$.
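    The catch is the empty subfamily: in a complete lattice $\inf \varnothing$ is the top element and $\sup \varnothing$ is the bottom element, so the inequality fails for $M = \varnothing$. A minimal sketch in the power-set lattice (plain Python; the helper names are just for illustration):

    ```python
    from functools import reduce

    # Complete lattice of subsets of S ordered by inclusion:
    # inf = intersection, sup = union, top = S, bottom = empty set.
    S = frozenset({1, 2, 3})

    def lat_inf(family):
        # the inf of the empty family is the top element S
        return reduce(lambda a, b: a & b, family, S)

    def lat_sup(family):
        # the sup of the empty family is the bottom element
        return reduce(lambda a, b: a | b, family, frozenset())

    M = []                 # the empty subfamily
    print(lat_inf(M))      # frozenset({1, 2, 3})  (top)
    print(lat_sup(M))      # frozenset()           (bottom)
    # So inf M > sup M, and the believed inequality fails for M = {}.
    ```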

    I'm voting to close this question as off-topic because enough false beliefs already

    wow, this will soon reach 666 votes... a nice score for a question about false beliefs

    I'm voting to close this question as off-topic because the most downvoted answers are the best ones. After all, a large number of downvotes means that the mathematics community holds to the misconception as well.

    I'm voting to close this question as off-topic because I think this question has outlived its usefulness.

    Nice (counter-)examples in arithmetic geometry are given here: https://mathoverflow.net/questions/91546. Typically, two elliptic curves over a number field $K$ can have the same $L$-function without being isogenous (the implication "same $L$-function $\Rightarrow$ isogenous" holds when $K = \Bbb Q$, but fails for $K = \Bbb Q(i)$).

  • For vector spaces, $\dim (U + V) = \dim U + \dim V - \dim (U \cap V)$, so $$ \dim(U +V + W) = \dim U + \dim V + \dim W - \dim (U \cap V) - \dim (U \cap W) - \dim (V \cap W) + \dim(U \cap V \cap W), $$ right?

    getting bad flashbacks about this one... good example, though

    Wait, that isn't true?

    Take three distinct lines through the origin in $\mathbb{R}^2$ as $U$, $V$, $W$. All pairwise (and triple) intersections have dimension $0$. The LHS is $2$, the RHS is $3$. The problem is that $(U+V)\cap W \neq U\cap W + V\cap W$.

    Take 3 lines in $\mathbb{R}^2$...
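    A quick numerical check of the counterexample above (a minimal numpy sketch; dimensions are computed as ranks of spanning sets, and the intersection dimensions are the ones worked out by hand in the comments):

    ```python
    import numpy as np

    def dim_of_sum(*subspaces):
        # dimension of U + V + ..., each subspace given by a list of spanning vectors
        vectors = [v for sub in subspaces for v in sub]
        return np.linalg.matrix_rank(np.column_stack(vectors))

    # three distinct lines through the origin in R^2
    U = [np.array([1.0, 0.0])]
    V = [np.array([0.0, 1.0])]
    W = [np.array([1.0, 1.0])]

    lhs = dim_of_sum(U, V, W)            # dim(U + V + W) = 2
    # all pairwise and triple intersections are {0}, so the "formula" gives
    rhs = 1 + 1 + 1 - 0 - 0 - 0 + 0      # = 3
    print(lhs, rhs)                      # 2 3
    ```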

    This is perhaps a shameful comment for math overflow, but: ROFL (in the best possible sense) :-) excellent answer!

    This is actually true for the Euler characteristic.

    100 upvotes! The first "Great Answer" badge! (Besides Anton's fluke from the moderator election.)

    Just out of interest, is there a (true!) formula for the dimension of $U+V+W$ if one knows only the dimensions which appear in the false formula above?

    @Mark: Given three distinct lines $U,V,W$ through the origin, you can compute the RHS but not the LHS.

    Is this related to Stein's Example?

    Pity one cannot add an answer to one's favorites list.

    Just last week I made this mistake in a "proof". Clearly I should check this list more often.

    Since $\dim(U+V+W)=\dim(U+V)+\dim(W)-\dim((U+V)\cap W)$, the equality above holds iff $\dim((U+V)\cap W)= \dim(U\cap W)+ \dim(V\cap W)- \dim(U\cap V \cap W)$, i.e. iff $\dim((U+V)\cap W)=\dim ((U\cap W)+(V\cap W))$, i.e. (in finite dimension) iff $(U+V)\cap W=(U\cap W)+(V\cap W)$.

    @Tilman: Only a remark not related to the topic: The identity $$\dim (U + V) = \dim U + \dim V - \dim (U \cap V)$$ is valid only for finite dimensional spaces, but if one writes it as follows $$\dim (U + V) + \dim (U \cap V)= \dim U + \dim V$$ it is valid for all vector spaces.

    The statement is true iff there exists a linearly independent set whose intersection with each of the subspaces gives a basis of that subspace.

    For three subspaces the theorem fails, but in a highly controlled way, as pointed out in the comments. But for 4 subspaces it fails wildly. This is because three subspaces correspond to representations of a $D_4$ quiver, whose underlying graph is a Dynkin diagram, while 4 subspaces give a quiver whose underlying graph is not a Dynkin diagram.

    @NoahSnyder what? How is being or not being a Dynkin diagram relevant to this question? My comment is not meant to be aggressive but comes from sheer ignorance. Do you have a reference where the relevance of Dynkin diagrams is explained?

    There’s a whole theory of representations of quivers. A quiver is an oriented graph and a representation of it is a vector space for each vertex and a map for each edge. Each of these questions translates into representations of a certain quiver. The general theory then tells you when you can get classifications. See Gabriel’s theorem: https://en.m.wikipedia.org/wiki/Quiver_(mathematics)

  • Everyone knows that for any two square matrices $A$ and $B$ (with coefficients in a commutative ring) that $$\operatorname{tr}(AB) = \operatorname{tr}(BA).$$

    I once thought that this implied (via induction) that the trace of a product of any finite number of matrices was independent of the order they are multiplied.
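    For concreteness, a small numerical sketch (numpy, with random matrices) of what actually holds: the trace of a product is invariant under cyclic permutations, but generically not under other reorderings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

    t = np.trace
    print(np.isclose(t(A @ B @ C), t(B @ C @ A)))   # True: cyclic shift
    print(np.isclose(t(A @ B @ C), t(C @ A @ B)))   # True: cyclic shift
    print(np.isclose(t(A @ B @ C), t(A @ C @ B)))   # False (generically): a swap
    ```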

    Indeed. I never thought much about this before, but clearly this only implies the trace of a product is invariant under *cyclic* permutations. I bet there is some fact from the representation theory of the symmetric group lurking here, but am too lazy to think about it...

    In fact $\operatorname{Tr}(AB)=\operatorname{Tr}(BA)$ holds also for non-square matrices $A,B$ for which both $AB$ and $BA$ are defined. Now for determinants, $\det(AB)=\det(BA)$ holds for *square* matrices, but of course *not* for non-square matrices (consider the case where $A$ is a column vector and $B$ a row vector).

    @Nate: If you want high-powered generalities, the most general situation I know where one can prove this statement is in a ribbon category. These have a graphical calculus where tr(ABC...) corresponds to a closed loop on which A, B, C... sit as labels in order, which clearly shows that the only invariance one should expect is under cyclic permutation. See, for example, the beginning of Turaev's "Quantum Invariants of Knots and 3-Manifolds."

    Also, in Penrose's diagrammatic notation, composition AB is represented by a line from the top of B to the bottom of A, and the trace of A is a line from the top of A to the bottom of A.

    @Marcos: using Penrose's diagrammatic notation for things with only two indices is a bit of an overkill. It also doesn't show that generically the only invariant we expect is from cyclic permutations, since sometimes weird tangles of lines in the diagram can be unraveled...

    @Harry, if you think about what happens when you split a product $abcdefgh$ in the middle and interchange the two halfs, you'll see where Nate is going...

    @unknown: nonetheless, the characteristic polynomials of $AB$ and $BA$ are the same up to a power of $\lambda$ (where $A$ is $m \times n$ and $B$ is $n \times m$), which generalizes both properties

    @Victor Protsak: Nice! BTW, one way to get what you say is from $\det(I_m+AB)=\det(I_n+BA)$, which funnily doesn't hold for the trace in the case of non-square matrices (there is a difference of $m-n$).

    If $M$ is a matrix permuting coordinates, then $\operatorname{tr}(M)$ is the number of fixed points of the corresponding permutation!

    $AB$ and $BA$ share the same invertible part: http://www.artofproblemsolving.com/Forum/viewtopic.php?f=349&t=112209

    In fact, the result applies to the eigenvalues: the eigenvalues (non-zero eigenvalues if you allow non-square matrices) are invariant under cyclic permutations. That is sometimes very useful.

    Yes, but losing that property is a small price for being able to say "cyclicity of the trace".

    @QiaochuYuan could you provide easily accessible (explicit) material for 'I know where one can prove this statement is in a ribbon category. These have a graphical calculus where tr(ABC...) corresponds to a closed loop on which A, B, C... sit as labels in order, which clearly shows that the only invariance one should expect is under cyclic permutation'?

    @QiaochuYuan does it have anything to do with ribbon graphs?

    @NateEldredge There are some nontrivial applications of the fact you mentioned (invariance of the trace under cyclic permutations). There is an operator-theoretic proof of the Gauss–Bonnet theorem in the book Noncommutative Geometry by Alain Connes. In that proof the invariance of the trace is used.

    Me too! And using this incorrectly, I got stuck on a geometry homework problem for 5 hours (I just couldn't see where I was mistaken!)

  • Many students believe that 1 plus the product of the first $n$ primes is always a prime number. They have misunderstood the contradiction in Euclid's proof that there are infinitely many primes. (By the way, $2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1$ is not prime and there are many other such examples.)
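    A quick computational check of that parenthetical counterexample (a minimal sympy sketch):

    ```python
    from math import prod
    from sympy import primerange, isprime, factorint

    primes = list(primerange(2, 14))     # [2, 3, 5, 7, 11, 13]
    n = prod(primes) + 1                 # 30031
    print(isprime(n), factorint(n))      # False {59: 1, 509: 1}
    ```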

    Much later edit: As pointed out elsewhere in this thread, Euclid's proof is not by contradiction; that is another widespread false belief.

    Much much later edit: Euclid's proof is not not by contradiction. This is another very widespread false belief. It depends on personal opinion and interpretation what a proof by contradiction is and whether Euclid's proof belongs to this category. In fact, if the derivation of an absurdity or the contradiction of an assumption is a proof by contradiction, then Euclid's proof is a proof by contradiction. Euclid says (Elements Book 9 Proposition 20): The very thing (is) absurd. Thus, G is not the same as one of A, B, C. And it was assumed (to be) prime.


    Nb. The above edits were not added by the OP of this answer.

    Edit on 24 July 2017: Euclid's proof was not by contradiction, but contains a small lemma in the middle of it that is proved by contradiction. The proof shows that if $S$ is any finite set of primes (not assumed to be the set of all primes) then the prime factors of $1+\prod S$ are not in $S$, so there is at least one more prime than those in $S.$ The proof that $\prod$ and $1+\prod$ have no common factors is the part that is by contradiction. All of this is shown in the following paper: M. Hardy and C. Woodgold, "Prime simplicity", Mathematical Intelligencer 31 (2009), 44–52.

    When I was 11 y.o. I was screamed at by a teacher and thrown out of class for pointing this out when he claimed the false belief stated above (it wasn't class material, but the teacher wanted to show he was smart). I found the counterexample later at home. I didn't let the matter drop either... I knew I was right and he was wrong, and I really had a major falling-out with that math teacher and the school; I flunked math that year.

    @Daniel: Sorry to hear that. When my daughter Meena was the same age (11), her teacher asserted that 0.999... was not equal to 1. Meena supplied one or two proofs that they were equal, but her teacher would not budge. Maybe this is another example of a common false belief.

    @Daniel: I've heard a worse story. A college instructor claimed in Number Theory class that there are only finitely many primes. When confronted by a student, her reply was: "If you think there are infinitely many, write them all down". She was on tenure track, but need I add, didn't get tenure.

    @Ravi More like an example of the fact that most schoolteachers in today's world -- even at good schools, let alone the pathetic joke that most mainstream grade schools in America are -- don't really know math.

    @Andrew: It's an apocryphal story, so it may be a common false belief *among the schoolteachers*

    If you wanted to convince someone that this isn't true, wouldn't an easier example be $2\cdot 3\cdot 5\cdot 7-1 = 11\cdot 19$? Sure, it has $-1$ instead of $+1$, but that doesn't matter, does it?

    To Daniel Moskovich: our class had some serious disagreements with our biology teacher over simple probability problems, disguised as genetics.

    This false belief leads to a proof of the Twin Prime conjecture: For every $n$, $(p_1 p_2 \cdots p_n -1, p_1 p_2 \cdots p_n +1)$ are twin primes, right?

    Daniel, at about the same age, I was asked to leave class for claiming that pi is not 22/7. The math teacher said that 3.14 is an approximation, and that while some people falsely believe that pi=3.14, the true answer is 22/7. Years later an Israeli newspaper published a story about a person who can memorize the first 2000 digits of pi, and the article contained the first 200 digits. A week later the newspaper published a correction: "Some of our readers pointed out that pi=22/7". Then the "corrected" (periodic) 200 digits were included. Memorizing digits of pi is a whole different matter if pi=22/7.

    Ravi, you yourself are laboring under a false belief about Euclid's proof. As I have pointed out elsewhere on MathOverflow and in an article in the _Mathematical Intelligencer_, Euclid's proof was not by contradiction. What the students you refer to have is not a misunderstanding of Euclid's proof, but a misunderstanding of a variation on Euclid's proof, which is not as good as the proof that Euclid actually wrote.

    @Gil, After showing a colleague the integral $\int_0^1 \frac{(1-x)^4}{1+x^2} dx = 22/7 - \pi$ he assigned its calculation as an exercise. One student carried it out correctly up to $22/7-\pi$, then concluded $=0$. When asked, he truly believed, from his high school training, that $\pi = 22/7$.

    I had the "$\pi=22/7$" false belief at age 14 (I don't know when I lost it, exactly), and I know exactly how it happened. I had long known that $\pi$ is about $3.14$, of course, and then in my math textbook I encountered a phrase along the lines of "using $\pi=22/7$, this expression simplifies to...". I remember the satisfaction at finally learning what "approximately equal" meant. It's unclear if this was a false belief successfully reproducing, or if it was spontaneous generation.

    Wait, that integral comes out to $10/3-\pi$...

    @Harry Altman, the integrand should be $\frac{x^4 \cdot (1 - x^4)}{1 + x^2}$.

    Having thought I understood Euclid's proof, I was puzzled to learn that it fails for the ring of power series in one variable over a field. The teacher pointed out how crucial it is to check in the argument whether 1 + abcd....ef is a unit, something usually glossed over. Actually when I recalled Euclid's argument, he exclaimed "Yes, but Euclid was intelligent!"

    I don't think I'd ever heard of the approximation 22/7 for $\pi$ until I moved to the US. Probably the people who designed the school curriculum in France decided that people would get confused over this, and I guess they were right! I've read horror stories like this lone English major sticking up for the truth in a room full of math teachers who seemed to believe that $\pi$ was rational. And anyway, you might as well use 3.14 as an approximation.

    Both Robert Bruner and muad have this wrong: The integral is $$ \int_0^1 \frac{x^4(1-x)^4}{1+x^2}\,dx = \frac{22}{7}-\pi. $$
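    For the record, a symbolic check of the corrected integral (a minimal sympy sketch):

    ```python
    from sympy import symbols, integrate, Rational, pi, simplify

    x = symbols('x')
    val = integrate(x**4 * (1 - x)**4 / (1 + x**2), (x, 0, 1))
    print(val)                                        # 22/7 - pi (up to printing order)
    print(simplify(val - (Rational(22, 7) - pi)))     # 0
    ```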

    Does anybody else remember Archimedes Plutonium (or whatever his name is now) from USENET? He was convinced that Euclid's proof was wrong and that his trivial modification was the first correct proof. This example came up in discussion, when people suggested that ironically AP's proof was wrong (although actually it was also correct).

    I was going to roll back, because it would seem the edits should be comments (especially as they are not by the OP, even if this is a CW answer).

    @RaviBoppana Actually 0.999... is not equal to 1 for the very reason that the first number is an infinite sequence of the integer 9 and the second one is a sequence of one integer: 1. They are 2 different sequences. But in arithmetic, in the decimal model, for the sake of coherence, the infinite development 0.999... is identified with 1. Thus, the professor of your daughter was not really wrong; the sentence "0.999... is not equal to 1" is not precise enough (it's a trap) and contains an innuendo not familiar to people outside the strict community of self-called "mathematicians" ;-)

    @Patrick 0.999... is not a sequence, it's a number (if you don't get this, you haven't understood decimal notation). The expression $a_0.a_1a_2a_3...$ (where the $a_i$ for $i>0$ are digits, i.e. numbers between 0 and 9) denotes the real number $a_0+\sum_{n\in\mathbb N}\frac{a_n}{10^n}$. It is possible to check that, if $a_0=1$ and $a_i=0$ for $i>0$, we get the same real number that we get when $a_0=0$ and $a_i=9$ for $i>0$. I fail to see how this is "a trap", it's clear to me that the teacher that we're talking about was wrong (and, being a teacher, should have known better)

    @DavidFernandezBreton An infinite sequence defines a unique number, as you write correctly. The map $\Phi : (a_n)_{n=1}^\infty \mapsto \lim_{N\to \infty} \sum_{n=1}^N a_n 10^{-n}$ associates a **sequence**, for example 9,9,9 ad infinitum, to a **number** (limit of a converging sequence). Two different sequences can define the same number. Your notation 0,999... is just a way you refer to the infinite sequence of 9. Now, (999...) and (1) have just the same value by $\Phi$. Therefore, the decimal numbers are the set of infinite sequences after identification by $\Phi$. That's all I said.

    I understand that, but I disagree about what denotes what. There are many ways of denoting sequences (for example, an infinite sequence of 9s can be denoted $(9,9,9,\ldots)$ or $\langle 9,9,9,\ldots\rangle$ or even just $9,9,9,\ldots$), but a number written in decimal expansion (as in $0.999\cdots$) denotes the number, not the sequence (that is, once you write the sequence without commas and with a decimal point, you're already referring to the number, i.e. to the image of the sequence under what you call $\Phi$).

    Hence, $\langle 1,0,0,0,\ldots\rangle\neq\langle 0,9,9,9,\ldots\rangle$ but $1.000\cdots=0.999\cdots$.

    @Patrick: Be careful not to conflate "number" with "numeral". Standard notations don't distinguish which is meant when you write a string of characters: you have to infer from context. (and usually, "number" is what is meant)

    @Hurkyl Hi there, I use the word "number" in the meaning defined by Dedekind, that is, a cut in the rational numbers. (Here for example http://www.amazon.com/Essays-Theory-Numbers-Dover-Mathematics/dp/0486210103)

    @DavidFernandezBreton: It's kind of late, but I think what Patrick meant is something along the lines of The Treachery of Images.

    Well, certainly a string of symbols is not a number, but it can stand for a number (or in general, for a mathematical object) and so it makes sense to enquire whether two different strings of symbols actually represent the same number (or mathematical object), in just the same way that it makes sense to say that "David FernandezBreton" and "the only current postdoc in logic at UofM" are actually the same person. That's the reason we use the "=" sign in Mathematics.

    @becko : M. Hardy and C. Woodgold, "Prime simplicity", _Mathematical Intelligencer_ 31 (2009), 44–52.

    Can someone post a counter-example of the original assertion? I don't get it.

    The original post already gave a counterexample. Namely 1 plus the product of the primes up to 13 is not prime. It's equal to 30,031, which factors into 59 times 509.

  • The closure of the open ball of radius $r$ in a metric space is the closed ball of radius $r$ in that metric space.

    In a somewhat related spirit: the boundary of a subset of (say) Euclidean space has empty interior, and furthermore has Lebesgue measure zero. (This false belief is closely related to Gowers' example of the belief that there are no non-trivial open dense sets.)

    More generally, point set topology and measure theory abound with all sorts of false beliefs that only tend to be expunged once one plays with the canonical counterexamples (Cantor sets, bullet-riddled squares, space-filling curves, the long line, $\sin\left(\dfrac{1}{x}\right)$ and its variants, etc.).

    I remember being assigned as an exercise to find a counterexample to the first statement, but I can't remember where. Rudin?

    What about a space with 2 points a distance 1 apart, and the open/closed ball having radius 1? I don't remember seeing this before, though.
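    A tiny sketch of that two-point example (plain Python; the space is $\{p,q\}$ with $d(p,q)=1$): the open ball of radius $1$ around $p$ is $\{p\}$, which is already closed, while the closed ball of radius $1$ is the whole space.

    ```python
    # two-point metric space {p, q} with d(p, q) = 1
    points = ['p', 'q']
    d = lambda a, b: 0 if a == b else 1

    open_ball   = lambda c, r: {x for x in points if d(c, x) <  r}
    closed_ball = lambda c, r: {x for x in points if d(c, x) <= r}

    B = open_ball('p', 1)
    # 'q' is at distance 1 from every point of B, so it is not a limit point of B;
    # hence B is closed and equals its own closure.
    print(B, closed_ball('p', 1))   # {'p'} vs {'p', 'q'}
    ```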

    @Terry Really good examples. We can count on you to do anything but waste our time with a post, Terry. I hope you keep finding the time to post here and lend your support!

    These seem to be more "interesting mistakes" than "false beliefs", especially the last part.

    Peter: actually the simplest counterexample is the open/closed ball of radius $0$, empty set vs a singleton.

    (True statement) A subset of $\mathbb{R}^n$ is Peano-Jordan measurable if and only if its boundary is Peano-Jordan measurable with measure zero.

    In response to Qiaochu's comment, I'm surprised nobody ever mentioned that a canonical counterexample to the first claim is given by the $p$-adics: there every ball is clopen, and if the "closed radius" of the ball is $p^{-n}$, the "open radius" is $p^{-n+1}$. This is because the image of the distance function is discrete (except at distance 0).
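    A numerical illustration of that $p$-adic phenomenon, restricted to the integers $0,\dots,p^3-1$ with $p=3$ (a minimal Python sketch): the "closed" ball of radius $1/p$ around $0$ coincides with the "open" ball of radius $1$.

    ```python
    p = 3

    def vp(n):                   # p-adic valuation of a nonzero integer
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v

    def d(x, y):                 # p-adic distance
        return 0.0 if x == y else p ** (-vp(x - y))

    points = range(p ** 3)
    closed = {x for x in points if d(x, 0) <= 1 / p}
    open_  = {x for x in points if d(x, 0) <  1}
    print(closed == open_)       # True: the two balls are the same set
    ```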

    "The closure of the open ball of radius $r$ in a metric space, is the closed ball of radius $r$ in that metric space".It seems to me that this **ought** to be true. Since it's not, I'm led to ponder whether we have the right definitions of "metric", or "closure". How we can we develop any intuition in a situation where things that seem "obviously true" are actually false.

    @bubba: at most you may complain about the *terminology*, certainly not about the axioms of metric spaces, and it would be foolish changing them in order to make that statement true (besides, *closed ball* and *closure of the ball* are well distinct expressions, so there is no ambiguity).

    It is interesting that although this seems plausible at first sight, after being told it is false, it takes only a moment to think of a counterexample. I guess the key is that we tend to assume wrongly that an open ball is nonempty, or even has lots of points in every direction.

    @QiaochuYuan Erwin Kreyszig's Introductory Functional Analysis with Applications does that, and the intended counterexample there is a discrete space. Later on he also notes that the statement is true in normed spaces.

    @PietroMajer Most elementary textbooks usually define balls to be of radius $r>0$.

    @bubba To a certain extent you’re right (with respect to terminology). The classic example of this is that a set can be both “closed” and “open”.

  • Here's my list of false beliefs ;-):

    • If $U$ is a subspace of a Banach space $V$, then $U$ is a direct summand of $V$.
    • If $M/L, L/K$ are normal field extensions, then the same is true for $M/K$.
    • Submodules/groups/algebras of finitely generated modules/groups/algebras are finitely generated.
    • The Krull dimension of a subring is at most the Krull dimension of the ring.
    • The Krull dimension of a noetherian domain is finite.
    • If $A \otimes B = 0$, then either $A=0$ or $B=0$.
    • If $f$ is a smooth function with $df=0$, then $f$ is constant.
    • If $X,Y$ are sets such that $P(X), P(Y)$ are equipotent, then $X,Y$ are equipotent.
    • Every short exact sequence of the form $0 \to A \to A \oplus B \to B \to 0$ splits.
    • $R[[x,y]] = R[[x]][[y]]$ as topological rings.
    • $R[x]^* = R^*$, even if $R$ is not a domain.
    • Every presheaf on a site has an associated sheaf. (Hint: the index category of the usual colimit has to be essentially small!)
    • (Co)limits may be computed in full subcategories. For example, $Spec(\prod_i R_i) = \coprod_i Spec(R_i)$ as schemes because $Spec$ is an antiequivalence.
    • Every finite CW-complex is compact, thus every CW-complex is locally compact.
    • The smash product of pointed spaces is associative (this is even false for CW complexes when you don't use the compactly-generated product!), products commute with quotients, and so on: Topologists often assume that everything behaves well, but sometimes it does not.

    +1: you had me at "Here's my list of false beliefs".

    I'm sure you'll have me kicking myself in a moment... but how does a short exact sequence of the form $0 \to A \to A \oplus B \to B \to 0$ fail to split? In any Abelian (indeed, additive is enough) category, since $A \oplus B$ is a biproduct, there's a paired map $(0,1_B)\colon B \to A \oplus B$, and a copaired map $[1_A,0]\colon A \oplus B \to A$, which split each half of the sequence... don't they? Or were you thinking of a context for this example that's wider than Abelian categories?

    $A \to A \oplus B$ does not have to be the inclusion; likewise $A \oplus B \to B$ does not have to be the projection. Thus the error here is: two chain complexes which are isomorphic "pointwise" don't have to be isomorphic. This occurs sometimes.

    I once made this very mistake, and it invalidates one of the main theorems of a published article I once quoted. A good reason in my opinion to specify what are the arrows when writing a sequence or a diagram: they are usually what you think they are, but hey, let's check.

    Ooh, very nice --- a classic "check your implicit assumptions" example. Good point!

    Is the one on Spec false or true-but-not-because-of-the-obvious-thing?

    The left side is quasi-compact, but the right side is quasi-compact only when $R_i = 0$ for almost all $i$. The difference can be made precise if the $R_i$ are fields: then $Spec(\prod_i R_i)$ is the Stone–Čech compactification of the discrete space $\coprod_i Spec(R_i)$.

    Your fifth example reminds me of an even more plausible false belief I once held: if $A \otimes A = 0$, then $A = 0$.

    @Reid Barton: Could you please provide a counterexample?

    @Regenbogen: Take the abelian group $\mathbb{Q}/\mathbb{Z}$.

    The point about presheaves and associated sheaves is one of those unimportant size issues that can be rectified by using universes and is a technical point that depends on a specific choice of set-theoretic formalism (For this reason, I suspect that Grothendieck ignores this issue in SGA4). I don't know if it really warrants inclusion on this list, since the rest of the list is so good.

    @Harry: No, it's a real problem because often you want to stay in the fixed universe; otherwise mathematics becomes pathological. For example, you could claim that every continuous functor has a left adjoint, since the solution set condition is satisfied if we make the universe large enough. But then we are not talking anymore about the same categories and functors between them!

    @Martin: No, that doesn't really matter as long as we keep track of relative size differences. There's no pathology there.

    I may be stupid, but what is a non-constant smooth function with df = 0 everywhere?

    $f$ is just locally constant ;-)

    I saw a doctoral thesis defence where one of the reviewers, a prominent one, claimed that "if $f$ is a smooth function with $df=0$, then $f$ is constant" is true, and hence that the work had a serious flaw. The $f$ in this work was a topological invariant with $df=0$, but there were clearly several different topological charges, not only one (and this was shown in the work)....

    I would like to know more about $\mathcal{P}(X)$ equipotent to $\mathcal{P}(Y)$ not implying $X$ and $Y$ being equipotent. Is there no proof with the axiom of choice? It seems the gen. continuum hypothesis should imply it. Can you point me to some reference?

    The generalised continuum hypothesis implies this statement, while Martin's Axiom plus the negation of the continuum hypothesis provides a counterexample ($|\mathcal{P}(\aleph_1)|=|\mathcal{P}(\aleph_0)|$), hence this misbelief is in fact independent of ZFC.

    Olivier, you might want to check out Easton's theorem in forcing: http://en.wikipedia.org/wiki/Easton%27s_theorem

    Amazingly enough, the splitting belief IS true if you add the innocuous-looking condition that $A$ and $B$ are finitely generated modules over a commutative Noetherian ring. (Theorem 1 from T. Miyata, Note on direct summands of modules, J. Math. Kyoto Univ. 7 (1967) 65-69)

    @Harry: Very late addendum, but for an explanation of why sheafifying over large sites really is problematic even when you assume universes, see Waterhouse (1975) - _Basically bounded functors and flat sheaves_. The point is that the result of sheafification _depends_ on the choice of universe when you use universes to construct them, i.e. it is no longer intrinsic.

    Hey, I currently share a half of these allegedly false beliefs!

    As a positive result, if $0\to A\to A\oplus B\to B\to 0$ is an exact sequence of finitely generated modules over a commutative Noetherian ring, then the exact sequence does split.

    I think the "topologists assume" sentence in the last bullet is unfair; it implies topologists are making mistakes. Certainly competent topologists are not making such rookie mistakes, and are well aware of the standard counterexamples.

    Wait a sec, what would be a counterexample for "The Krull dimension of a subring is at most the Krull dimension of the ring"?

    @Michael: $\mathbb{Z} \subseteq \mathbb{Q}$

    @MartinBrandenburg: Oh! I was thinking in terms of function rings of algebraic varieties.

    @Martin: the statement "The Krull dimension of a noetherian domain is finite." is my false belief today :). Isn't this implied by *Krull's principal ideal theorem*? I mean, if $R$ is a noetherian ring, the height of every maximal ideal is finite, and $\dim R$ is the sup of these heights.

    @mohan do you have a reference? isnt this contradiction with Martin's statement?

    @user1 I do not have a reference, but it was mentioned with reference here earlier by others (e. g. Graham Leuschke) too. The proof, while not trivial, can be worked out and I would be happy to post one somewhere (how?) if you so desire.

    @user1 but there could be maximal ideals of many different heights!

    Could you give an example for two sets whose power sets are equipotent whereas they are not equipotent?

    @FawzyHegab: It depends on the choice of the model of set theory whether this is true or not.

    The fact that the inclusion $\mathbb{Z} \subset \mathbb{Q}$ does not preserve dimension can be expressed by saying that $\mathbb{Q}$ is zero-dimensional, but not hereditarily zero dimensional. These are studied in the book Zero-Dimensional Commutative Rings, edited by David Dobbs.

    @MartinBrandenburg: I don't understand the locally-constant hint. Would you mind giving an actual counterexample?

    @Mehrdad take a space consisting of two points, and a function that is 0 on one point and 1 on the other.

  • I don't know if this is common or not, but I spent a very long time believing that a group $G$ with a normal subgroup $N$ is always a semidirect product of $N$ and $G/N$. I don't think I was ever shown an example in a class where this isn't true.

    Argh! Me too! What *is* a good example?

    Umm, $\mathbb{Z}/4\mathbb{Z}$ contains $\mathbb{Z}/2\mathbb{Z}$?
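    To spell that counterexample out: $\mathbb{Z}/4\mathbb{Z}$ has the normal subgroup $N=\{0,2\}\cong\mathbb{Z}/2\mathbb{Z}$ with quotient $\mathbb{Z}/2\mathbb{Z}$, but since $\operatorname{Aut}(\mathbb{Z}/2\mathbb{Z})$ is trivial, the only semidirect product of those two groups is the direct product $\mathbb{Z}/2 \times \mathbb{Z}/2$, which has no element of order $4$. A minimal plain-Python check of the element orders:

    ```python
    from itertools import product

    def order(g, op, identity):
        # order of g in a finite group with operation op
        n, x = 1, g
        while x != identity:
            x, n = op(x, g), n + 1
        return n

    # Z/4Z under addition mod 4: has an element of order 4
    print(max(order(g, lambda a, b: (a + b) % 4, 0) for g in range(4)))    # 4

    # Z/2 x Z/2 (the only semidirect product of Z/2 by Z/2): all orders <= 2
    klein = list(product(range(2), repeat=2))
    add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
    print(max(order(g, add, (0, 0)) for g in klein))                       # 2
    ```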

    It is a sad state of things, but my impression is that most people coming out of the standard introductory course on groups have more or less the same belief :(

    This suggests that we do a terrible job of talking about semi-direct products, no?

    Schur--Zassenhaus says that this *is* true if $N$ and $G/N$ have coprime orders, so there is some intrinsic pressure in the subject towards this. Coupled with the fact that it is true for the first non-trivial non-abelian example ($A_3$ inside $S_3$), it's easy to see how this misconception arises.

    I remember being confused by this too. It became much clearer when I was formally taught about short exact sequences. Then you can see exactly the obstruction to such a decomposition.

    It took me a long time to realize that was false as well... Still being an undergrad, I often catch myself trying to use that "theorem".

    Maybe the easiest way to see this, as suggested by Kevin's example, is to think of abelian groups and ask whether every subgroup is a direct factor. I.e., this has little to do with genuine semidirect products, and more to do, as Fabrizio observed, with splitting maps.

    I tripped up on this one for a VERY long time too! Given experiences cited here, I would strengthen Kevin's comment and say it proves :) we do a terrible job explaining semidirect products. And I second the comment about short exact sequences.

    This is false, but something very nice and related to it is true: the functor $F : G \text{-Grp} \rightarrow G \uparrow \text{Grp}$, sending $\phi : G \rightarrow \text{Aut}(H)$ to $G \rightarrow H \rtimes_{\phi} G$, $g \mapsto (1, g)$, is a left adjoint. So the exact sequences $0 \rightarrow N \rightarrow G \rightarrow G / N \rightarrow 0$ which come from semi-direct products are the "free" ones in a sense.

    Or another condition for an exact sequence $0 \rightarrow N \rightarrow G \rightarrow H \rightarrow 0$ to be isomorphic to one of the form $0 \rightarrow N \rightarrow N \rtimes H \rightarrow H \rightarrow 0$ is if $G \rightarrow H$ has a section.

  • These are actually metamathematical (false) beliefs that many intelligent people have while they are learning mathematics, but usually abandon when their mistake is pointed out -- and I am almost certain to draw fire from those who haven't for saying so. Here they are, together with the reasons for them:

    The results must be stated in complete and utter generality.

    Easy examples are left as an exercise to the reader.

    It is more important to be correct than to be understood.

    (Applicable to talks as well as papers.)

    Reasons: 1. Von Neumann is in the audience. 2. This is just a generalization of Lemma 1.2.3 in volume X of Bourbaki. 3. The results are impressive and speak for themselves.

    IMO "It is more important to be correct than to be understood" is not a false belief.

    I definitely agree with the OP that "It is more important to be correct than to be understood" is false - in the context of giving mathematical talks. Or perhaps, it's fairer to say that being understood is more important than being 100% correct. Talks are about the listener, not about the speaker.

    @GregMartin: When you are giving a talk, sure. When you are giving a lecture, maybe, but you should give an indication of where you are imprecise. When you are writing a paper, most definitely not.

    @Michael: Victor and you are both right: you should be correct $\mathbf{and}$ understandable, so that the audience can understand that you are correct.

    I'm not sure I like the usage of 'metamathematical' here, because the word can have a precise and formal meaning. I can't think of anything else credible, though. Somewhere between mathematical and pedagogical?

    It depends what you mean by 'correct'. If a proof has an error which can easily be avoided, that's not too problematic: these show up often enough in published papers. An irreparable proof of a (true) statement is much worse. An (irreparable) proof of a false statement is even worse still.

  • a student, this afternoon: "this set is open, hence it is not closed: this is why [...]"

    The terminology *is* rather unfortunate.

    Yikes, that student needs a sit-down about the facts of life in topology.

    Either that or topologists need a sit-down about the facts of life in life, where they are told how unfortunate their notation is...

    Munkres is fond of saying "sets are not doors."

    On the other hand, one can say "open the door" and "close the door" in reference to a door that is slightly ajar.

    So are you saying sets are closed, open, clopen, or ajar? ;)

    Some students are a rich source of false beliefs. Try asking whether the product of two odd functions on $\mathbb{R}$ is odd or even.

    On my office door I once put "clopen the door"

    I like that "sets are not doors". I can say that I have thought too fast, made this assumption, and ended up proving something that couldn't possibly be true ><

    When is a set not a set?

    Actually, topologists have studied spaces where every set is open or closed (or both, of course), and they're called "Door spaces"....

    The mere existence of the adjective "half-open", as in "the half-open interval [1,2)", is a fairly good antidote to this, even if the notion of half-openness _per se_ does not extend particularly well beyond the interval case.

    I think we need more detail to be fair. Was this afternoon's student perhaps contemplating a nonempty proper subset of $\mathbb{R}$?

    @NateEldredge Sorry, I am just an undergrad student reading this out of interest. I want to make sure I know where the mistake is here: An open set is $(2, 4)$, a closed set is $[2, 4]$, but the student failed to take into account sets such as $(2, 4]$ and $[2, 4)$, which are neither open nor closed?

    @Ovi No, that is not right. The student said: The set is open, hence not closed." This is wrong because there are sets which are open *and* closed, not because there are sets that are neither. For instance, in $\Bbb R$ (equipped with its standard topology), the sets $\Bbb R$ and $\varnothing$ (the second is the empty set) are both open and closed. In fact, they are the only open and closed sets in $\Bbb R$, since $\Bbb R$ is connected.

    To all the people who find fault with topologists' terminology, sets should be compared with *rooms*, not doors in the first place, should they not? And the room analogy fits this bill well - rooms can be open, closed, partially open or partially closed to any degree.

    The terminology is poor, be it doors or rooms or whatever. It's common sense that objects which can be open or closed are usually in only one of these states: doors, chests, safes, lockers, etc. These words are considered opposites! It's like defining some sets to be *hot* and *cold* and then saying there are some sets which are hot and cold at the same time. Please, if this is the case, just don't use these words. Good notation and good terminology are important.

    @N Unnikrishnan : Yes, and in this analogy, the doors are quite explicitly the points of the boundary.

    The terminology is unfortunate, but it's very easy to see that a set can be both open and closed; if you show that to someone, the terminology should be clear.

  • Here are two things that I have mistakenly believed at various points in my "adult mathematical life":

    For a field $k$, we have an equality of formal Laurent series fields $k((x,y)) = k((x))((y))$.

    Note that the first one is the fraction field of the formal power series ring $k[[x,y]]$. For instance, for a sequence $\{a_n\}$ of elements of $k$, $\sum_{n=1}^{\infty} a_n x^{-n} y^n$ lies in the second field but not necessarily in the first. [Originally I had $a_n = 1$ for all $n$; quite a while after my original post, AS pointed out that that this actually does lie in the smaller field!]

    I think this is a plausible mistaken belief, since e.g. the analogous statements for polynomial rings, fields of rational functions and rings of formal power series are true and very frequently used. No one ever warned me that formal Laurent series behave differently!

    [Added later: I just found the following passage on p. 149 of Lam's Introduction to Quadratic Forms over Fields: "...bigger field $\mathbb{R}((x))((y))$. (This is an iterated Laurent series field, not to be confused with $\mathbb{R}((x,y))$, which is usually taken to mean the quotient field of the power series ring $\mathbb{R}[[x,y]]$.)" If only all math books were written by T.-Y. Lam...]

    Note that, even more than KConrad's example of $\mathbb{Q}_p^{\operatorname{unr}}$ versus the fraction field of the Witt vector ring $W(\overline{\mathbb{F}_p})$, conflating these two fields is very likely to screw you up, since they are in fact very different (and, in particular, not elementarily equivalent). For instance, the field $\mathbb{C}((x))((y))$ has absolute Galois group isomorphic to $\hat{\mathbb{Z}}^2$ -- hence every finite extension is abelian -- whereas the field $\mathbb{C}((x,y))$ is Hilbertian so has e.g. finite Galois extensions with Galois group $S_n$ for all $n$ (and conjecturally provably every finite group arises as a Galois group!). In my early work on the period-index problem I actually reached a contradiction via this mistake and remained there for several days until Cathy O'Neil set me straight.

    Every finite index subgroup of a profinite group is open.

    This I believed as a postdoc, even while explicitly contemplating what is probably the easiest counterexample, the "Bernoulli group" $\mathbb{B} = \prod_{i=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$. Indeed, note that there are uncountably many index $2$ subgroups -- because they correspond to elements of the dual space of $\mathbb{B}$ viewed as a $\mathbb{F}_2$-vector space, whereas an open subgroup has to project surjectively onto all but finitely many factors, so there are certainly only countably many such (of any and all indices). Thanks to Hugo Chapdelaine for setting me straight, patiently and persistently. It took me a while to get it.

    Again, I blame the standard expositions for not being more explicit about this. If you are a serious student of profinite groups, you will know that the property that every finite index subgroup is open is a very important one, called strongly complete and that recently it was proven that each topologically finitely generated profinite group is strongly complete. (This also comes up as a distinction between the two different kinds of "profinite completion": in the category of groups, or in the category of topological groups.)

    Moreover, this point is usually sloughed over in discussions of local class field theory, in which they make a point of the theorem that every finite index open subgroup of $K^{\times}$ is the image of the norm of a finite abelian extension, but the obvious question of whether this includes every finite index subgroup is typically not addressed. In fact the answer is "yes" in characteristic zero (indeed $p$-adic fields have topologically finitely generated absolute Galois groups) and "no" in positive characteristic (indeed Laurent series fields do not, not that they usually tell you that either). I want to single out J. Milne's class field theory notes for being very clear and informative on this point. It is certainly the exception here.

    Milne also, in his notes on field and Galois theory, takes the time to point out (and prove using Zorn's lemma and the group $\mathbb{B}$ above) that the absolute Galois group of $\mathbb{Q}$ has non-open subgroups of index $2^n$ for all $n>1$. He adds as a footnote a quote of Swinnerton-Dyer where he mentions the "unsolved [problem]" of determining whether every finite index subgroup of $G_\mathbb{Q}$ is open or not, observing that this problem seems "very difficult."

    Nice examples! Actually it is known that any finite group arises as a Galois group over $K=\mathbb{C}((x,y))$. Since $K$ is Hilbertian, it is enough to prove it for $K(t)$. Now, we know that if $L$ is a large field (i.e. any smooth $L$-curve has infinitely many $L$-points as soon as it has one), then any finite group arises as a Galois group over $L(t)$ (see F. Pop, Embedding problems over large fields, Ann. of Math., 1996). And F. Pop recently proved that if $R$ is a domain which is complete w.r.t. a non-zero ideal (Henselian is enough), then its fraction field is large (see "Henselian implies Large" on his webpage).

    @JP: Thanks very much for the information. I was just thinking that this should be a case close to the border of the IGP and that I should check up on what is known.

    @Pete: I remember once reading a paper of Katz and being bewildered by what he was saying until I realised that $\mathbb{Q}_p[[x]]$ was much bigger than $\mathbb{Z}_p[[x]] \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$.

    I like that one!

    Kevin: I'm quite fond of this distinction myself. It's why $\mathbb{Q}_p[[x]]$, which is the $\mathbb{Q}_p$-pro-unipotent completion of $\mathbb{Z}$, almost never comes up in Iwasawa theory. There, you're far more likely to see the small algebra.

    It's funny that you yourself are illustrating how tricky the distinction between $k((x))((y))$ and $k((x,y))$ can be, by giving a wrong example: in fact $\sum_{i \geq 0}x^{-i}y^i \in k((x,y))$. (Isn't it just $x/(x - y)$? Think a bit about convergence issues.) But I believe that $\sum_{i \geq 0} x^{-i^2} y^i \not\in k((x,y))$ - and I think I can prove this using the Weierstrass preparation theorem for Laurent series over complete DVRs, or something like that.

    @AS: Good point! I'm not sure how I missed your comment the first time around. I "fixed" my example by making it more wishy-washy. I would be very interested in seeing an explicit element in the larger field but not the smaller field, with proof. If I ask this as an MO question, would you answer it?

    Yes, I would. Go ahead!

    In the first example it is apparent when you view $k$ as a local field and regard $k((x))((y))$ as a $3$-local field. Then $k((x))((y))=k((y))\{\{x\}\}$, where for a complete discrete valuation field $K$ the field $K\{\{x\}\}$ is defined as the set of doubly infinite series in $x$ with coefficients in $K$ such that the valuation of the coefficients is bounded below and the coefficients tend to zero as the exponent tends to $-\infty$. As is clear from the definition, $k((x))\{\{y\}\}$ is not isomorphic to $k((y))\{\{x\}\}$, hence they cannot both be isomorphic to $k((x,y))$. For more details you can check http://msp.warwick.ac.uk/gtm/2000/03/gtm-2000-03p.pdf

    Bloody Laurent series! This was educational.

  • Some false beliefs in linear algebra:

    • If two operators or matrices $A$, $B$ commute, then they are simultaneously diagonalisable. (Of course, this overlooks the obvious necessary condition that each of $A$, $B$ must first be individually diagonalisable. Part of the problem is that this is not an issue in the Hermitian case, which is usually the case one is most frequently exposed to.)

    • The operator norm of a matrix is the same as the magnitude of the most extreme eigenvalue. (Again, true in the Hermitian or normal case, but in the general case one has to either replace "operator norm" with "spectral radius", or else replace "eigenvalue" with "singular value".)

    • The singular values of a matrix are the absolute values of the eigenvalues of the matrix. (Closely related to the previous false belief.)

    • If a matrix has distinct eigenvalues, then one can find an orthonormal eigenbasis. (The orthonormality is only possible when the matrix is, well, normal.)

    • A matrix is diagonalisable if and only if it has distinct eigenvalues. (Only the "if" part is true. The identity matrix and zero matrix are blatant counterexamples, but this false belief is remarkably persistent nonetheless.)

    • If $\mathcal L: X \to Y$ is a bounded linear transformation that is surjective (i.e. $\mathcal Lu=f$ is always solvable for any data $f$ in $Y$), and $X$ and $Y$ are Banach spaces then it has a bounded linear right inverse. (This is subtle. Zorn's lemma gives a linear right inverse; the open mapping theorem gives a bounded right inverse. But getting a right inverse that is simultaneously bounded and linear is not always possible!)
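    A small numerical check of the second and third points above, using the nilpotent $2\times 2$ Jordan block as the non-normal matrix (a minimal numpy sketch): the operator norm, spectral radius and singular values all disagree, and the singular values are not the absolute values of the eigenvalues.

    ```python
    import numpy as np

    # nilpotent (hence non-normal, non-diagonalisable) Jordan block
    N = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

    eigvals = np.linalg.eigvals(N)                  # [0, 0]
    svals   = np.linalg.svd(N, compute_uv=False)    # [1, 0]
    opnorm  = np.linalg.norm(N, 2)                  # largest singular value = 1
    specrad = max(abs(eigvals))                     # 0

    print(opnorm, specrad)        # 1.0 0.0  -- operator norm != spectral radius
    print(svals, abs(eigvals))    # [1. 0.] [0. 0.] -- singular values != |eigenvalues|

    # First point: N commutes with the identity, yet the pair is not
    # simultaneously diagonalisable, because N itself is not diagonalisable.
    print(np.allclose(N @ np.eye(2), np.eye(2) @ N))    # True
    ```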

    Wow. I believed that second one until now. Which is ridiculous, of course, since the operator norm of a nilpotent matrix can't be zero or else it wouldn't be a norm!

    The parenthetical comment in the 2nd bulleted point is worded as if, $\textit{in general},$ the operator norm were equal both to the spectral radius and the largest singular value (or, perhaps, that $\|A\|=\rho(A)$ and $\lambda_1(A)=s_1(A).$) But for a nilpotent matrix the spectral radius is 0, whereas the operator norm and the largest singular value aren't.

    Fair enough; I've reworded the parenthetical.

    I guess in the last bullet point you mean a right inverse? It's easy enough to give a surjective bounded linear transformation which isn't bijective.

    Also (pardon the pickiness) you presumably mean the OMT gives a *continuous* right inverse.

    Yes, I meant right inverse, thanks. Getting a continuous right-inverse is actually a subtle question - the OMT only gets boundedness, which is not equivalent to continuity when one is not linear. I believe that the existence of a continuous right inverse may follow from a classical theorem of Bartle and Graves, but this is nontrivial.

    (I should also point out that by "bounded" I mean "maps bounded sets to bounded sets", not "maps the entire space to a bounded set".)

    Got it, thanks; I was thinking of applying OMT the wrong way around.

    I was puzzled by this one: "A matrix is diagonalisable if and only if it has distinct eigenvalues." until I realized you meant distinct roots of the characteristic polynomial rather than the minimal polynomial.

    In the last clause, there is always a continuous (usually non-linear, of course) right inverse. This is the Bartle-Graves theorem.

    The fourth point is not, AFAICT, commonly-held at all. Students treat spectral analysis of matrices with suspicion and tend to assume that nothing is possible that they don't remember how to do with eigenvalues...
