*This rewritten essay, published now on the twelfth of March, 2016, was published here in its original form on the third. Owing to my reflections on some subsequent discussions I have decided to change, not the heart of its ideas, but the manner of their presentation. I freely admit that it may be incorrect; there may, however, be something in it, and I shall continue to hope that, if there is, a person with mathematical knowledge (which I entirely disclaim) will perceive that thing and be able to develop it further. If not, the reader may consider all this as a mere harmless piece of youthful folly.*

**INTRODUCTION**

First of all, I wish to say that I have no desire to depreciate the reputation of Georg Cantor by the publication of this little essay. Although I believe many of his inferences about infinity to have been wrong, his investigations, whatever my opinion about their truth or falsehood, were the only reason why I was myself induced to investigate the topics in this essay, and to form the beliefs which it sets forth. As Aristotle says, truth results from a combination of every thinker’s contribution to a problem, and we should be grateful not only to those of our predecessors who share our views, but also to those whom we believe to have been mistaken; for without their preliminary work, we might never have gained the mental framework with which to formulate the ideas that build upon theirs. [Aristot. Met. 2.993.]

I ought secondly to state that I am not a mathematician but a philosopher, and that, indeed, I know virtually nothing about mathematics. I was led into the consideration of this subject by virtue of the fact that its conclusions pass over into the realm of philosophy, and present mankind with some very troubling inferences. Ever since Kurt Gödel and Paul J. Cohen demonstrated that the continuum hypothesis, which David Hilbert called “The most important unsolved problem in mathematics,” can be neither proved nor disproved from the standard axioms of set theory, the findings of Georg Cantor about infinity have seemed to unseat reason itself. For, on the one hand, Cantor has told us that our whole intuitive conception of infinity is a lie; and, on the other, we have been subsequently informed that, in what is supposed to be the very purest form of reasoning, mathematics, there are questions which are provably unanswerable as a consequence of his findings.

The theories I will propose in the course of this essay accomplish, I believe, six things:

1) They put forward what appear to me to be the only logical ways of understanding certain mathematical concepts in and of themselves.

2) They resolve the seeming paradox of the natural numbers’ being countable, and the real numbers uncountable.

3) They resolve the seeming paradox of there being an infinite hierarchy of greater degrees of infinity.

4) They resolve the continuum hypothesis.

5) They resolve the subsequent paradox that questions in mathematics may be unanswerable.

6) They do not injure a single mathematical proof in the process, including the technical validity of Cantor’s first uncountability proof, Cantor’s diagonalization method, and Cantor’s Theorem, nor cause mathematics to have to function in any new way whatever.

Now I absolutely allow that the eminent Georg Cantor proved two things, and that he was wrongly vilified by the mathematical establishment of his day despite the fact that he did so clearly prove them: first, with his first uncountability proof and his diagonalization method, that the real numbers are uncountable; and secondly, with Cantor’s Theorem, that the power set of a set always has greater cardinality than its original set. As I do not deny the technical truth of any of these proofs, there is no need for me to recapitulate them here. His mathematical proofs, in and of themselves, cannot be controverted, and have stood the test of 140 years.
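Though the proofs themselves need no recapitulation here, a reader unfamiliar with the diagonalization method may find a finite sketch of it helpful. The following illustration is my own, in Python, and is not a substitute for the actual proof over infinite sequences: given any list of binary sequences, flip the n-th digit of the n-th sequence, and the result cannot equal any sequence in the list.

```python
# A finite sketch of the diagonal construction (illustrative only):
# build a sequence that differs from the n-th listed sequence in its
# n-th digit, so it cannot appear anywhere in the list.

def diagonal_escape(sequences):
    """Return a sequence differing from sequences[n] at position n."""
    return [1 - seq[n] for n, seq in enumerate(sequences)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]

escaped = diagonal_escape(listed)
print(escaped)            # [1, 0, 1, 0]
print(escaped in listed)  # False: it evades every row of the list
```

In Cantor’s actual argument the list is a supposed enumeration of all the real numbers (or all infinite binary sequences), and the same flipping rule produces an item the enumeration must have missed.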

What I do, however, deny, are many *inferences* which he and subsequent mathematicians and philosophers have derived from these proofs. For I do not believe that they at all change our classic and intuitive understanding of what infinity is, once a few critical missing pieces of the puzzle of Number are filled in which we have hitherto failed to see. The truth of the matter is, in my opinion, that both Cantor and those of his contemporaries who were against him were correct and incorrect each in their own separate ways. For Cantor, on the one hand, was correct in the technical validity of his mathematical proofs, but wrong about many of the inferences concerning infinity which one may derive from them, once the truthfulness of certain mathematical assumptions and axioms is challenged on the basis of grammar and reason; while his contemporaries, on the other, were right in defending the true and conventional notion of infinity, but wrong in attacking the technical validity of Cantor’s proofs in and of themselves. For though they knew he was wrong, they ought to have gone further to find out *why* he was wrong. There *is* a difference between countable and uncountable infinity, the real numbers *are* uncountable, and it *is* the case that a power set always yields a greater cardinality than its set. But I have come to the conclusion that Cantor’s proofs do not change our pre-Cantorian understanding of what infinity is, once the grammatical nature of certain components of mathematics is sincerely analyzed.

It may be well to state, at the commencement of this essay, exactly what I conceive the only true understanding of infinity to be. Infinity, in my point of view as well as that of Aristotle, represents *ceaseless potentiality*. In the very etymology of the word, it derives from the Latin *infinitas*, which, broken down into its roots, means “without an end”; and the same is true of the Greek *apeiria*, of which it is a loan-translation. The only true infinity of which reason may conceive is that of an infinity without end, an infinity which cannot be bounded. Ceaselessness is *what it is*. Indeed infinity is, and must be, such potentiality without end, unless we are to speak about a fiction, a centaur, a chimera. It is as irrational to categorize it in any other way as it is to say that “Shakespeare is Oscar Wilde,” or that “Socrates is Aristotle”—a violation of the first law of thought. There can be no different *layers* of infinity, no *gradations* of it; for it is boundless potentiality pure and simple. Once allow greater *degrees* of it, and one commits a logical absurdity; such a thing cannot be conceived: for how may one attain to a greater *degree* of *boundless* potentiality? Infinity must *be* infinity, or it is simply nonsense.

This way of looking at infinity would appear to have been held by many of the great mathematicians who came just before the Cantorian revolution, as, for example, the following quotations will evince: first, one from Poincaré, who said: “Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already.” Secondly, one from Gauss, who said: “I must protest against the use of an infinite magnitude as a completed quantity, which is never allowed in mathematics. The Infinite is just a manner of speaking, in which one is really talking in terms of limits, which certain ratios may approach as close as one wishes, while others may be allowed to increase without restriction.”

Now the contention of Cantor is that mathematics demonstrates that we have, not a single concept of infinity, but an infinity of gradations of infinity. It may, in his opinion, be shown, in the first place, that the real numbers represent a greater degree of infinity than the natural numbers, owing to the fact that the real numbers are “uncountable,” and the natural ones “countable”; and, secondly, what is much more striking than this, that we may discover a greater degree of infinity than even that of the real numbers, by making a power set of them; then a greater degree of infinity than we have got by that very power set by making a power set of *that* power set, *ad infinitum*, so that there is, in fact, an *infinite gradation* of levels of infinity. This would seem to overthrow our whole conception of infinity altogether: the mind reels at it as at sheer absurdity, and it is still denied by many non-mathematicians that such a thing may be possible.
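The engine behind this hierarchy is at least easy to verify in the finite case: a set of n elements has 2**n subsets, strictly more than n, and taking the power set of the power set compounds the growth. A brief sketch of my own, for the reader who wishes to see the mechanism at work:

```python
from itertools import chain, combinations

def power_set(elements):
    """Every subset of the given elements, as a list of tuples."""
    items = list(elements)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

base = ["a", "b", "c"]
level_1 = power_set(base)     # 2**3 == 8 subsets
level_2 = power_set(level_1)  # 2**8 == 256 subsets of subsets

print(len(level_1))  # 8
print(len(level_2))  # 256
```

Cantor’s Theorem extends this strict inequality to infinite sets, which is what generates the endless ladder of cardinalities described above.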

There is another matter which it may be well to go into at the commencement of this essay, and that is concerning what the very highest principles of mathematics ought to be. The axioms of mathematics, I contend, ought to be based on *truth*, and *sought out*; they must not be *arbitrary*: and they ought to be defined in a way which *accords* with truth as closely as possible. I define truth as that which is in accordance with reason. And to give an example of this methodology from actual practice I will put to the reader the following question: How did set theory itself come about? How must Cantor have “discovered” it, so to speak? He took, indeed, a hint from antiquity; but he basically worked out, starting from scratch, the principles of it in his own mind according to his *reason*: first by discovering the concepts of it, and then by analyzing how those concepts behave. That is to say, if we analyze what the process must have been a little more deeply, that his reason perceived and then analyzed what a “set” was in its very essence *as a concept*, well before he had even deduced what practice could be inferred from this real mental conception. His verbal definition of the set was simply an attempt to encapsulate what his mind perceived as a substantive mental reality, that is, a concept mentally perceived and external to the will in exactly the same manner as any physical object is: for a verbal definition does not make an external concept what it is whether it be physical *or* mental, and we ought, by verbal definitions, only to try to bring real external concepts to our attention, and to separate them from other concepts and mark them out to the mind by explaining what are their particular essences. This is why Diogenes, the Cynic philosopher, was justified in ridiculing Plato’s definition of a man as “*An animal, bipedal and featherless*,” by plucking a chicken and declaring, “*Here is Plato’s man*.”

Indeed the only way, I say, of getting to correct verbal definitions and axioms in any science whatever, is to analyze and understand external concepts in and of themselves, and to try to fit our descriptions as closely to these realities as possible. Now it is sometimes said that mathematics is a “mind game”; but it must be understood in what sense it ought to be considered as a “mind game,” if it is indeed to be considered one. And I will use the following analogy to explain what I mean: We may, if we please, *make up* a word, and give to *that* an *arbitrary* definition. But if a thing, whether it be mental or physical, is *external to the will*, and *perceived*, and not *invented*, then it is our duty to conform to it in our definition as closely as we can, as we have said. In the science of mathematics, therefore, in which we perceive real and substantive mental realities, we must not *make things up*, and give to them *arbitrary* definitions, but attempt to *seek them out*, and to make the *definitions* conform to the *concepts*. If we find that the concepts are not what we thought they were, or that we had made unthinking but faulty assumptions about them, then it is the definitions that must be accommodated to fit the truth, and not the truth that must be accommodated to fit the definitions.

This is what I believe I have done in the course of this essay: for whatever the existing verbal definitions of mathematics may say, I have tried to analyze the very nature of Number, and with what faculty? My reason: the very same faculty which Cantor made use of to invent set theory from the top downward. And indeed, those who would deny the arguments proposed in this essay simply on the basis that they violate existing axioms of mathematics are placed in the logical paradox that we ought never to have accepted Cantor’s very uncountability proofs and his Theorem in the first place, owing to the fact that they appeared to overthrow our long-held ideas, cherished for many millennia, about what infinity means; and the mathematical establishment of the nineteenth century, which wrongly opposed and persecuted Cantor, becomes justified in having done so. I am myself asking for nothing more than what Georg Cantor did.

Again, there was a time when mainstream mathematics accepted that the assumptions of naïve set theory were true; but after Bertrand Russell showed that there was a latent logical paradox in it which went against reason, namely that it permitted one to make a set of all sets which are not members of themselves, the axioms of set theory had to be changed to accommodate this fact. Now if set theory had been more long-standing than it was before Russell’s paradox was discovered,—if (say) a thousand years had passed before Russell’s paradox had been found out,—and if (say) I myself had happened to discover it,—then that same reader who would tell me now that I may not challenge the existing axioms of mathematics, would have told me also in that very case that the axioms of set theory could never be changed on the same basis,—because they had been established for so long, and assumed to be so fundamental.

Once admit that the axioms of mathematics are not based on *reason*, and it follows that none of its conclusions have any more truth or meaning than does the Liar’s Paradox or some other logical fallacy. There is no conceivable way except reason by which we may deduce its axioms without overthrowing the entire edifice: once admit that “The axioms of mathematics are verbal, traditional, and arbitrary, and have no necessary basis in reason,” and the whole discipline is demolished as a science. If this be admitted, then one may refute Cantor’s findings about infinity simply by stating that mathematics itself is nonsense; for by violating reason it would violate truth. Now the principles put forth in this essay violate some existing assumptions and axioms of mathematics; however I do not believe them to contradict reason, and that is the chief thing to bear in mind.

I would note that I am not alone in holding many of my above-mentioned beliefs: Kurt Gödel, for example, wrote that “The world is rational,” and that “Concepts have an objective existence.”

Having prefaced my discourse with all this, I will proceed to my argument proper, which is divided into two parts. The first will deal with Cantor’s inference from his first uncountability proof and diagonalization argument that the real numbers represent a greater degree of infinity than that of the natural numbers; the second, with his inference that one may discover an infinitely increasing hierarchy of greater degrees of infinity by means of making power set of power set of an infinite set ad infinitum, owing to the truth of Cantor’s Theorem that every power set must have a greater cardinality than its set.

**PART ONE: CANTOR’S INFERENCES FROM HIS FIRST UNCOUNTABILITY PROOF, AND THE DIAGONALIZATION ARGUMENT**

It is my opinion that a failure to analyze the concepts of mathematics in terms of grammar is the root of all the trouble and paradox about infinity which has plagued us this past century and a half. I am not aware that a grammatical analysis of number has ever before been attempted; but as mathematics is really nothing more than human language specialized to an intense degree, has this not been a great oversight? For grammar is the universal language by which we express our rational thoughts. We categorize the numbers into rationals, irrationals, natural numbers, integers, and what not, according to their various properties; and yet we have failed to analyze a higher and further distinction which sits above all these classes, which is that between *nouns* and *adjectives*.

A *noun* is that which may subsist on its own: it enjoys an independent existence. An *adjective* is not self-subsistent: it is dependent, takes its being by means of modifying a noun, and requires the very presence of a noun in order to subsist and function.

Now that the natural numbers are nouns is evident: for they are simply potential representatives of the natural numerical qualities of any of the simple concrete objects we see in our day-to-day existence, as trees, birds, and rivers; so too with concrete mental objects, as Will, Reason, and Perception. They represent *things* in and of themselves. So, too, it is evident that they require nothing to substantiate them: they act and exist of their own accord in the abstract realm. It is both true and fair, therefore, to classify them as *nouns*, and therefore to give them the appellation of *noun numbers*.

But is this true of other kinds of number? I contend not: for truthfully and searchingly analyze the case and you will find that every other kind of number but the natural numbers is, in fact, an *adjective*, and therefore ought to be classified as an *adjective number*. It is, indeed, my understanding, that every other class of number but that of the naturals relies upon the natural numbers for its very existence. One might wipe all other classes of number out, and the natural numbers would continue to function precisely as they do now; they would not lose anything from it: they would continue to signify what they signify, and to enjoy their existence precisely as though nothing had ever happened. But if, on the other hand, one were to wipe out all the *natural numbers*, every single other kind of number would cease to bear any meaning. It is impossible to conceive of them as doing anything, meaning anything, existing altogether, without implied natural numbers, as I will attempt to show.

First of all, let us deal with the non-integer real numbers, two representatives of which are the decimals and the fractions.

Now what is a decimal? And what is a fraction? Analyze them and you will find that they cannot be nouns, and must be adjectives; specifically, they are, I contend, *adjectives of comparative size*, which describe *implied* natural numbers. For they operate by comparing the *size* (but not natural number *numerosity*) of an implied natural number or numbers with the *size* of another natural number, according as that other natural number is divided into parts, each group of parts itself representing a natural number, and according as these groupings of natural number parts stand in relation to each other as to their own relative sizes; the thing which I compare being identified in size with the smaller of the two groupings of parts. The difference between the natural number in its regular form, and a decimal or a fraction’s *implied* version or versions, is that a decimal or a fraction gives us additional information about its implied natural number or numbers which is absent when we write down our natural numbers in the plain and ordinary way. The implied natural number and the regular natural number are, nevertheless, in their essences the same, and differ only in their accidents.

An example will help the reader to understand this. Let us say that I have a piece of cake, that is to say, one natural number piece of cake, and I divide that natural number piece of cake into four equal portions. I then take one of these portions. I now have in my hand *one* natural number piece of cake—just as surely as the original remains *one* natural number piece of cake. I may, at this point, choose to analyze the *size* of that piece of cake in my hand by comparison with the *size* of the piece of cake from which it was taken. This size is 1/4th of the other piece of cake expressed as a fraction, or 0.25 of it expressed as a decimal. And yet this new piece of cake which I hold in my hand remains, numerically, as I say, *one* natural number piece of cake. And yet what, therefore, does the fraction or the decimal represent? What it represents is, that this particular piece of cake which I hold in my hand, although *numerically* still worth one natural number, is worth, in point of its *size*, one natural number piece of cake out of the original natural number piece of cake from which it was taken, once that original natural number piece of cake has been divided up into four equal natural number parts, one of those parts being compared in size with the three others taken together, and my piece of cake being identified with this smaller, single part. That new piece of cake which I hold in my hand, the decimal or the fraction explains, is so much smaller in point of *size* as compared with that other, original piece of cake. But *numerically*, they are both one natural number—both the piece I have taken, and the original which I divided up before I took it from it.

A further concrete example which may more accurately demonstrate how the decimal or the fraction functions in the abstract realm of pure mathematics, is if the reader imagines me first to have a piece of cake all on its own; secondly to happen upon another, larger piece of cake; and lastly is given to understand that the piece of cake I have in my hand, by virtue of *sheer coincidence*, happens to be worth, in point of size, exactly one quarter (say) of the larger piece of cake, *though I had not taken it out of that cake*—that it is found to be worth one quarter of it simply because we have *compared* the one thing with the other thing, and discovered the case to be so. That is, that *my* one natural number piece of cake simply *happens to be worth*, in point of size, one natural number part out of four natural number parts of the piece of cake with which we are comparing it. And in this case, again, we might describe that natural number piece of cake in my hand as being worth “1/4th,” or “0.25,” of the other, larger natural number piece with which we compare it, in terms of its *size*, though as a natural number it remains *one*.
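The distinction drawn in these two cake examples, between the count of pieces held and their compared size, can be restated numerically. A small sketch of my own, with made-up measurements, using Python’s fractions module:

```python
from fractions import Fraction

# Hypothetical measurements for the essay's cake comparison.
my_piece = 25      # size of the piece in my hand, arbitrary units
other_piece = 100  # size of the piece it is compared against

pieces_held = 1                                  # numerosity: still one piece
relative_size = Fraction(my_piece, other_piece)  # the comparison itself

print(pieces_held)           # 1
print(relative_size)         # 1/4
print(float(relative_size))  # 0.25
```

On the essay’s reading, the fraction and the decimal are two notations for the same act of comparison, while the count of objects in hand is untouched by it.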

That I choose to compare the *size*, therefore, of an object in my hand with that of another object does not affect the natural number *numerosity* of my object, or the fact that I continue to hold *one* natural number object. Comparing sizes by means of the natural numbers is all that a decimal or a fraction is capable of doing. They are representations, not of self-subsistent numbers or noun numbers, but of acts of comparisons of size between natural number and natural number. The decimal or the fraction, therefore, has no existence on its own: for all it may do is compare the *sizes* of *implied* natural numbers. Decimals and fractions must, therefore, as I say, be classified, not as nouns, but as adjectives, and placed under the classification (to coin a phrase) of *adjective numbers*.

The number line, I believe, plays an important part in deceiving us into the idea that the fractions and decimals may enjoy a self-subsistent existence: that they may represent anything of any meaning without the use of implied natural numbers with which to make comparisons. For the number line is actually a mingling together of two very different classes of thing under a false guise of seeming interchangeability, viz. *adjective numbers* and *noun numbers*. From a philosophical point of view, we ought to choose to view it in either the one or the other way, else we go astray. The two must not be mixed together in the mind. For, when we see:

0.5 … 1 … 1.5 … 2 … 2.5 … 3 … etc.

And consider all this as representing a *continuous spectrum of number*; it is easy to fall into the trap of supposing that, because we have here the *symbols* of the natural numbers, in the form of “1, 2, 3,” therefore they actually necessarily represent the natural numbers proper. But in actual fact, what 1, 2, and 3 represent, if we are to consider them as being continuous with the decimals, are not the natural numbers 1, 2, and 3, but “1/1,” “2/1,” and “3/1.” For if I have a piece of cake which is 3.5 times as large as another piece of cake, that is simply another way of saying that I have *3/1* plus *1/2* of it. The 3 here does not stand in for the natural number three proper, or the *noun number* three, but rather for a fraction making a comparison of size between x natural numbers and 3 natural numbers out of 1 natural number of some thing—hypothetically cloning itself three times, as it were, in order for us to make a comparison with it.

From the number line, therefore, as I say, a great deal of our confusion arises: for “1, 2, 3,” may stand for the true natural numbers which are represented by those symbols, in which case *they may not be said to lie on a continuous spectrum with any other numbers but one containing the natural numbers alone*; or, they may stand for “1, 2, 3,” as being considered as the *adjective numbers* “1/1,” “2/1,” “3/1,” which *employ* the natural numbers in order to make comparisons of size. As we have here two completely different classes of thing both making use of the same symbols, the mind is led to make the subconscious inference, that adjective numbers and noun numbers may swap their grammatical functions, that is, that a noun number may function as an adjective number, and vice versa, because they appear to lie on a continuous scale together, when the case is really nothing of the sort. It is from hence that we grow to the half-conscious and unthinking inference that decimals and fractions may enjoy a self-subsistent existence. But they may do no such thing: the only continuous spectrum of number they lie on, is that which makes comparisons of size *by means of* the natural numbers: they are adjectives purely, and are numerically continuous with adjectives of like kind alone, while the natural numbers are the nouns which they describe, and upon which they rely.

This matter is complicated still further in the mind by the fact that, in certain cases, the natural numbers may be joined together with the adjective numbers in making statements about things. Thus, if I am comparing size strictly speaking, a piece of cake which is 3.5 times *as large as* another implies that the 3 here is an adjective number representing 3/1; but if I say that I *have* “3.5 pieces of cake,” I am stating that I have three full natural number pieces of cake, all of a size, whatever that may be, and so the 3 here really is the natural number 3; but then also an additional natural number piece of cake which is worth one half the size of any one of the aforementioned pieces of cake, as described by the adjective number 0.5; nevertheless, however, in total, *four* natural number pieces of cake.
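The two readings of “3.5” distinguished in this paragraph can be pulled apart mechanically. A minimal sketch of the essay’s own analysis, separating the piece count from the size description:

```python
import math

quantity = 3.5  # "I have 3.5 pieces of cake"

full_pieces = math.floor(quantity)   # noun-number part: 3 whole pieces
extra_size = quantity - full_pieces  # adjective part: 0.5, a size, not a count

# Counted as an object, the half-sized extra piece is still one piece,
# so on the essay's analysis I hold four natural-number pieces in total.
pieces_in_total = full_pieces + (1 if extra_size > 0 else 0)

print(full_pieces)      # 3
print(extra_size)       # 0.5
print(pieces_in_total)  # 4
```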

A final few examples I will give the reader to highlight this distinction between size, which adjective numbers represent, and natural number numerosity, are that, though I should have a small, a medium, and a large ball, I shall still have three balls; that, if I have ten cups each of a different size, I shall still have ten cups; and that I am not living on two planets, despite the fact that the planet Earth is roughly twice the size of the planet Mars.

So much for the non-integer real numbers. I might here proceed immediately to the next phase of my argument; but, although not absolutely necessary to go into, it may be well, in order to achieve greater intellectual consistency, to analyze here a couple of other kinds of number in order to prove my contention that the natural numbers are the only noun numbers.

The negative integers (i.e. -1, -2, -3, etc.) represent natural numbers being acted on by adjectives also, except that here the natural number is not implied but actually written out for us to see; the adjective component is the minus symbol “-”. The minus symbol represents an adjective of reduction or debt, which states that we are to have, or shall have, some thing or things taken away from us. 2 books plus -1 book means that one book has been taken away from me; while when the integers go into negative territory and exist there pure and simple we may understand them as intransigent debts stating that, even were I to gain an additional number of such and suches, I should inevitably be deprived of them. Thus if I “have” -2 books, I state that, even if I were to get 2 additional books, I should still be left with zero of them, because two books must at all events be taken away from me if I get them. Nevertheless, a description of an act of reduction or a state of indebtedness according as it defines a natural number is not a noun, but an adjective modifying a noun.
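The debt reading of the minus sign amounts to ordinary arithmetic over gains and losses; a trivial sketch of the two book examples above:

```python
# "2 books plus -1 book": one book is taken away from me.
books = 2 + (-1)
print(books)  # 1

# "Having" -2 books: a standing debt of two, so that even gaining
# two additional books leaves me with none.
balance = -2
balance += 2
print(balance)  # 0
```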

Zero, likewise, we may classify as an adjective number. It was called in Latin, from late antiquity onward, *nulla*, a combination of *ne*, not, and *ullus*, any—thus literally *not any thing*, or *no thing*, which in my opinion is a correct appraisal of it. There is implied in the concept of zero the number one, with an adjective negating the number one, or, perhaps, simply “any number,” or “number” altogether, so that we have implied in it the notion of “*no* one,” “*no* thing,” “*no* number,” or what have you. In any event, as it represents a description of the absence of number, it could not enjoy any existence without the notion *of* number, which is a noun, and therefore must be an adjective number.

To recapitulate what we have said so far, numbers may be divided into two grammatical categories, nouns and adjectives. A noun may exist on its own, but an adjective requires the use of a noun in order to function. The only noun numbers are the natural numbers; all the rest of the numbers are adjective numbers. All adjective numbers require natural numbers in order to function, and would have neither existence nor meaning on their own: in the case of the non-integer real numbers as represented by decimals and fractions, for example, they would become comparisons without things with which to compare. If the only correct interpretation of 0.25 or 1/4th is that x natural number or numbers are worth, in point of size, x lesser number of natural numbers as compared with x greater number of natural numbers out of a thing, it is evident, that if you suppose there to be no natural numbers involved in the process, there can be no meaning to the decimal or the fraction either.

I will here move on to the next phase of my argument, which is an analysis of the irrational numbers, and wherein their difference from the rationals chiefly consists.

It is not a contradiction of my contention that mathematics ought to be based on reason, reality, and logical consistency, that the non-integer real numbers, if they do indeed describe comparisons of size, may be protracted to infinity in the mind in the form of the irrational numbers. For all that this implies is an infinite act of division into new numbers of natural numbers, along with an infinitely ongoing comparison between x natural numbers and x natural numbers out of x natural numbers, as opposed to a single and finite action of such division and comparison. It is valid for the same reason that counting up infinitely, in other words infinitely making new copies as opposed to infinitely making new divisions out of a thing, is also logically consistent with the fact that physical reality contains a finite number of objects. The concept of infinity is a logically consistent extrapolation from the artificial constraints of the physical world, and therefore a truthful part of mathematics, which subsists in the mental. For though I know that counting to infinity is impossible in the physical world, because time and the number of atoms have absolute limits; I may yet extrapolate the same principle of counting up into the abstract realm, and validly imagine away these limits, in order reasonably to conceive of infinity, because they are fundamentally *artificial* ones.

That the mind can reasonably conceive of infinite divisibility, just as it can reasonably conceive of infinite multiplication, is indicated by Zeno’s paradox of the runners, which he used in an attempt to prove the doctrine of Parmenides that everything is one. The central fallacy which led to that paradox lay in the fact that Zeno did not recognize that the actual physical realm is bounded and finite, though the mind is capable of removing these restrictions. Zeno contended that the quickest runner in a race could never overtake the slowest, since the pursuer must perpetually reach the point which the pursued started from; thus Achilles, let us say, could never overtake a tortoise in a race, if the latter were given a head start. While Achilles covers the distance between himself and the tortoise that existed at the start of the race, the tortoise creates a new distance-gap. While Achilles is running over that distance-gap, the tortoise creates a new one, and so on ad infinitum. However, the very fallacy of the paradox is inherent in the fact that it presupposes an infinite number of points of distance between Achilles and the tortoise. Diogenes, the Cynic philosopher, gave one of the best early answers to this when, upon hearing Zeno’s arguments, he said nothing, but simply stood up and walked about. For the very fact that, in reality, fast runners do overtake the slower, shows that the points of distance must be finite in number, not infinite. There is a finite number of atoms in the universe; the infinite division of distance, which never allows Achilles to catch up with the tortoise, because we may consider smaller and smaller distance-gaps which the faster runner has to cover ad infinitum, exists only because our mind is falsely imagining that distance, outside of ourselves, works in the way which we might make it work in our mind. In our mind, we can imagine a world in which atoms are subdivided in an infinite process, so that nobody ever moves.
But in physical reality there is such a thing as an atom, that part of the universe which cannot be cut down any further, the smallest conceivable unit of physical matter. Man dies, and the greatest supercomputer he could devise, and set counting, would eventually have to stop when its parts ran out. So long as the universe and all the things in it shall have an end, and so long as this life, this world, is composed of units of matter which cannot be divided infinitely, number must run out. In this world there can only ever be practical infinity, not actual infinity. Nevertheless, *mentally*, we can conceive (and in accordance with reason) of an infinity which never ends, of an infinity which outstrips the longest span and breadth of which this universe might be capable; just as we can conceive of dividing a fictitious universe’s atoms to an infinite degree. Neither exists in *physical* reality; nevertheless, both of them exist in *mental* reality, and, again, according to *reason correctly extrapolating from the physical*, in this case by removing the artificial constraints of atoms and the passage of time. And this is how the irrational numbers are reasonable.
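The arithmetic behind Diogenes’ walking refutation can be sketched numerically. Below is a minimal illustration in Python; the speeds and the head start are assumptions of my own, chosen only for the purpose. It sums Zeno’s infinitely shrinking distance-gaps and finds that they amount to a finite stretch of ground:

```python
# Zeno's stages: each time Achilles covers the current gap,
# the tortoise opens a new, smaller one.
# Speeds and head start are illustrative assumptions, not taken from the text.
achilles_speed = 10.0
tortoise_speed = 1.0
gap = 100.0  # the tortoise's head start

total = 0.0
for stage in range(60):
    total += gap
    # While Achilles runs over `gap`, the tortoise advances this far:
    gap *= tortoise_speed / achilles_speed

# The infinitely many stages form a geometric series summing to the
# finite distance 100 / (1 - 1/10) = 111.11..., where Achilles draws level.
print(total)
```

The infinite process of division exists in the mind; in the physical race it corresponds to a bounded distance, which is why the faster runner does, in fact, pass the slower.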

The irrational numbers, therefore, in no way differ from the finite rational numbers, except in this, that the artificial constraints of time and atoms have been taken away from them, and so they may go on unceasingly in a process or train of making acts of comparison between divisions of a thing into a greater and greater number of natural number parts, as opposed to doing this on one single occasion.

Now it is evident on this basis that, for every irrational number, there must also be a pair of what I shall call *infinite natural numbers* to go along with it. For the comparison and division of natural numbers is of the very essence of the functioning of this class of adjective number, as we have seen. We might arbitrarily label these natural numbers with the symbol “nn”; more specifically, as it is the case that we have *two* incrementally increasing natural numbers involved in the process, the lesser and the greater one, so we may arbitrarily label these as “nn1” and “nn2.” To demonstrate: The irrational number π represents, at a first stage of calculation, 3.14, a comparison between the size of x natural number things and 7 natural number parts out of **50** natural number parts of another natural number thing; at a second stage, 3.141, between x things and 141 parts out of **1000**; at a third, 3.1415, 283 parts out of **2000**; at the eighth decimal place we have 2,831,853 parts out of **20,000,000**; ad infinitum. At every step of the way the actors involved in this process of division must be natural numbers, because all the irrational numbers can ever represent by definition is an infinite process of comparison and division into new parts; and we may count out the decimal in exactly the same way as we may count out the infinite natural number parts, which decimal represents the process of comparison and division in and of itself: “3.14, 3.141, 3.1415, … 3.14159265,” etc. (ad infinitum). It therefore follows that the hypothetical natural numbers which must define the two totalities of these parts at every step of the way must also be singularly infinite, for otherwise, the irrational number would not be infinite. Hence, every irrational number must carry along with it two corresponding infinite natural numbers.
Thus we may say that nn1-π is “the lesser infinite natural number which is brought about by inevitably accommodating *π*,” which goes “7, 141, 283, … 2,831,853,” ad infinitum; and that nn2-π is the greater infinite natural number, which goes “50, 1000, 2000, … 20,000,000,” ad infinitum. In like manner, we may have nn1-√2 and nn2-√2, nn1-φ and nn2-φ, and so forth for any irrational number whatever.
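The successive fractions above can be checked mechanically. Here is a minimal sketch in Python: the list of truncations is copied from the text, and `Fraction` reduces each fractional part to lowest terms, recovering the two trains of numbers labelled nn1-π and nn2-π:

```python
from fractions import Fraction

# Truncations of pi as given in the text above.
truncations = ["3.14", "3.141", "3.1415", "3.14159265"]

for s in truncations:
    part = Fraction(s) - 3  # fractional part, reduced to lowest terms
    # part.numerator follows the nn1 train, part.denominator the nn2 train:
    # 3.14 -> 7 out of 50 ... 3.14159265 -> 2831853 out of 20000000
    print(s, "->", part.numerator, "out of", part.denominator)
```

Each further decimal place of π produces a new, larger pair of natural numbers, which is precisely the pair of trains the paragraph above describes.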

It follows from all this, as an inexorable logical consequence, that, if Cantor proved the irrationals uncountable as a class, as indeed he did, then the natural numbers must be uncountable as a class as well: for we must have two infinite natural numbers to go along with each process of comparison and division which any single irrational number represents. For just as, at every step of the never-ceasing process as regards the irrational number itself, a finite action of comparison of size between two divisions of a thing into new natural number parts out of new natural number parts is represented; so at every step of that incessant process must also be represented an increase in the two numbers of natural numbers which represent each totality of the two groupings of parts which are created at each new and separate act of division and comparison. You cannot have an unceasing train of acts of division and comparison without having an unceasing train of natural numbers which the acts of division create and with which they compare.

A possible objection: “But you are, in a sense, ‘creating’ these natural numbers from the irrational number.” Answer: Does this make them worth anything less? For how do we arrive at the knowledge of the irrational numbers themselves? Are they not too “created,” if we are to use this term, by an act of calculation at every step of the way? If, therefore, calculation brings *them* “into existence,” so to speak, why is calculation any less valid to bring *infinite natural numbers* into existence? Almost everything in mathematics had to be learned of by means of a calculation; why may we not therefore fairly define infinite natural numbers to be a *discovery* like anything else in the science? They were always “there” all along in every case; but it was necessary to analyze the nature of the irrational numbers more closely in order to bring them properly to our attention. π itself was only found out by means of calculations with circles: it came to nobody spontaneously. As nobody can discover π without doing a calculation, is it therefore fair to say that nn1-π and nn2-π are non-existent concepts?

Another possible objection: “Mathematics does not allow the existence of infinite natural numbers.” Answer: See my comments at the start of this essay on the necessity of science’s conforming its axioms to reason. Is it really any stranger to accept the existence of infinite natural numbers than it is to accept the existence of infinite degrees of infinity? I think the latter concept far stranger than the former, for the former seems to me to be a simple logical accompaniment to the existence of the irrational numbers, while the latter is almost inconceivable to reason. Again, is it stranger to accept the existence of infinite natural numbers, or the logical paradox which follows on from the continuum hypothesis, which necessitates that questions in mathematics must remain unanswerable?

Another possible objection: “The rational numbers, under your system, would still be countable. Is this not peculiar?” Answer: As we have just proved that there exist infinite natural numbers, all we need do in order to create an infinite rational number is to take two infinite natural numbers which we gain from their correlation with the irrationals, and put them together into a ratio. Infinite rational numbers therefore also exist.

To recapitulate: if we cannot put the irrationals on a list, because they represent a necessarily infinite *process*, then neither can we put the natural numbers on a list, because such a process equally implies, at every step of the way, two necessarily and infinitely increasing natural numbers, also as an infinite process; and thus it is demonstrated that the class of *infinite natural numbers* can never be put on a list or be *countable*, owing to the very fact that the irrational numbers may not.

Now the countability-uncountability distinction within infinity still exists under my system: but the so-called countable infinite set of natural numbers is now understood to represent merely the infinite set of all natural numbers that does *not* include the infinite natural numbers. But this does not change the fact that the natural numbers as a class remain uncountably infinite, once the existence of infinite natural numbers is allowed; and therefore all classes of number, we come to understand, may represent an equal unceasing potentiality, when they are taken as a whole.

To conclude: I contend that every irrational number must imply two singularly infinite natural numbers to go along with it, and that Cantor’s proofs that the irrational numbers are uncountable as a class logically necessitate that the natural numbers as a class are uncountable as well. One would not, however, use his proofs to prove the latter proposition directly, but indirectly, by means of the grammatical argumentation which we have here put forward.

**PART TWO: CANTOR’S INFERENCES FROM CANTOR’S THEOREM**

Our understanding of sets is another example of where we have gone astray in our understanding of infinity, owing to our failure to conduct a true grammatical analysis of mathematics; and by means of grammatical analysis we shall also be enabled to solve the paradoxes concerning infinity which Cantor’s Theorem would appear to produce.

Now I do not deny the standard definition of what a set is, and even allow that it is valid to make infinite sets, and even power sets of those infinite sets, and power sets in turn of those power sets *ad infinitum*. But what I will deny is that this process ever implies any increase in the original numbers we start out with. For every superset, I contend, is nothing more than *an adjective*; not even an adjective number, for it implies no number of its own, modifying as it does something already under our notice, and is therefore an adjective pure and simple; and if we make sets of sets ad infinitum, I contend, we are simply describing descriptions ad infinitum.

The adjectival nature of the set is proven by the fact that we may only express the thought of one collection’s defining another collection by means of some adjectival expression; and, indeed, the very mathematical symbol which indicates set membership, “∈,” which derives from the Greek verb “ἐστιν” or “estin” meaning “it is,” is defined by mathematicians as signifying “is an element of,” “is a member of,” “belongs to,” “is in,” or “lies in.” In every case here we have either a predicate nominative modified by an adjectival prepositional phrase, or a predicate adjective pure and simple, whether it start with “of” or “in.”

Now to demonstrate by a concrete example what I mean when I say that a set cannot increase the number of numbers, but simply the number of adjectives describing the original numbers; if we analyze the taxonomy of a dog we shall yield the following seven sets:

Dog, < (of species Canis lupus) < (of genus Canis) < (of family Canidae) < (of order Carnivora) < (of class Mammalia) < (of phylum Chordata) < (of kingdom Animalia).

Here we have seven adjectives working in the capacity of prepositional phrases, and each one, except the initial set, which modifies a simple noun, modifying the other in turn; but we are still left over with one self-substantiating *thing*, the simple dog we started with. Now how else will the reader suggest that I might have explained this process, but with the grammatical functions by which set membership is defined? for there is no other conceivable way but with some adjectival expression to place one thing in the class or set of another thing. For if I were to say “The dog *is* species Canis lupus” pure and simple, this would be making a direct equivalency between them by means of a simple predicate nominative without any qualification, which is a false assessment of the situation. I must therefore either say, “The dog *is an element of* Canis lupus,” and make a predicate nominative modified by an adjective; or make use of a predicate adjective pure and simple, i.e., “The dog *belongs to* or *lies in* or *is of* species Canis lupus.” There is, therefore, no conceivable way to define the grammatical behaviour of a superset but by understanding it as an adjective.

So if we go into the abstract realm, stripping away all concrete example, and suppose a set S which contains {a, b}, we are saying (and I shall put all the adjectives in bold), “a belongs **to S**,” and “b belongs **to S**”; next, if we make a power set out of S, we are saying “S belongs **to P(S)**; and likewise {a} belongs **to P(S)**, and {b} belongs **to P(S)**, and {a and b} (looked at together as a single concept comprised of both, as opposed to as individuals) belongs **to P(S)**, and { } (**zero**, or **no thing**) belongs **to P(S)**.” We might continue this process ad infinitum, making a power set of this power set and then power set after power set unceasingly, and yet all we shall ever yield out of the process are adjectives, while the original nouns, “a” and “b,” remain the only numbers. No matter how many power sets you make out of an infinite set, therefore, you may never increase the number of numbers, because you are only getting an increasing collection of adjectives describing those numbers or each other: hence there is no greater degree of infinity as to number. An infinite set is an infinite series of numbers each bearing a *single* description, which is represented by that single set; an infinite making of power sets of power sets is an infinite furnishing of those same numbers with descriptions of descriptions of descriptions *ad infinitum*. That is to say, Cantor shows that, by means of making power sets, you can perpetually keep describing your descriptions, and then describing those descriptions, and so on, and so on, in a never-ending series of layers. But the number of numbers described remains precisely the same, if we understand the process rightly, and, every other component of the process involved having been shown to be an adjective, the *numerical* size of the infinity as it stands at each new power set therefore remains unchanged.
What we have in Cantor’s Theorem, therefore, is a proof, not of an infinite hierarchy of ever-larger *infinities*, but of an infinite hierarchy of *descriptions*. And is this not what we should expect? For it would be a very bare sort of infinity of objects if we were not permitted to describe those objects with an infinity of descriptions and descriptions of those descriptions as well.
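The claim that power sets multiply descriptions while the underlying nouns stay fixed can be illustrated concretely. Here is a minimal sketch in Python (the helper names `powerset` and `atoms` are my own, introduced only for this illustration): it builds P(S) and P(P(S)) for S = {a, b}, and then recursively collects the non-set elements inside, which never grow beyond a and b:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each as a frozenset."""
    items = list(s)
    return frozenset(frozenset(c) for c in
                     chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1)))

def atoms(x):
    """Recursively collect the non-set elements inside nested sets."""
    if isinstance(x, frozenset):
        return set().union(*(atoms(e) for e in x)) if x else set()
    return {x}

S = frozenset({"a", "b"})
P1 = powerset(S)    # 4 members: { }, {a}, {b}, {a, b}
P2 = powerset(P1)   # 16 members: subsets of subsets

print(len(P1), len(P2))      # the layers of "description" grow: 4, 16, ...
print(atoms(P1), atoms(P2))  # but both contain exactly 'a' and 'b'
```

The set sizes grow at each layer, which is the hierarchy of cardinalities; the collected elements do not, which is the sense in which, on the essay’s reading, no new “nouns” are ever produced.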

How has the fallacy come about, therefore, that you may have an infinite “hierarchy” of infinities as to number *proper*, as opposed to an infinite hierarchy as to the *description* of number? It lies in the mathematical definitions of the words “element” and “cardinality.” An *element* is said to be “any member of a set,” and *cardinality* is defined as “the number of elements in a set or other grouping, as a property of that grouping.” Now Cantor demonstrated that a power set does indeed always have a greater *cardinality* than its set. But a set, as we have seen, always operates in the capacity of an adjective, no matter how many *elements* it may contain; and so this word “element,” which masquerades as a number when it really signifies a component of an adjective, and this word *cardinality*, which simply counts these *elements* without considering how they *function*, in this case play us false, by leading us into the mistaken inference that, because we do indeed yield a greater number of adjectives *describing number* when we create a power set, therefore we yield from it a greater number of *numbers*. A hierarchy of infinities yielded by a making of power sets, therefore, is not a hierarchy concerning number itself, but a hierarchy concerning description of number; and, no matter how many power sets of power sets you may choose to create, you shall never increase your number of actual numbers.
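Cantor’s demonstration that a power set always has a greater cardinality than its set (which this essay accepts as technically correct) can even be verified exhaustively for a small finite set. A minimal sketch in Python, checking that no function from S into P(S) ever has Cantor’s diagonal set in its image:

```python
from itertools import product, chain, combinations

def powerset(s):
    """All subsets of s, each as a frozenset."""
    items = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))]

S = [0, 1, 2]
subsets = powerset(S)  # 2**3 = 8 subsets

# For every possible f: S -> P(S), form Cantor's diagonal set
# B = {x in S : x not in f(x)}; it is never in f's image,
# so no f is onto, and the cardinality of P(S) exceeds that of S.
diagonal_hit = False
for assignment in product(subsets, repeat=len(S)):
    f = dict(zip(S, assignment))
    B = frozenset(x for x in S if x not in f[x])
    if B in f.values():
        diagonal_hit = True

print(diagonal_hit)  # False, across all 8**3 = 512 candidate functions
```

The *proof* itself is thus not in dispute here; what the essay contests is only the inference drawn from the word “cardinality,” as the paragraph above explains.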

**CONCLUSION**

Cantor came to his beliefs about infinity owing to the fact that mathematics had not yet been soundly analyzed on a grammatical basis. Once the nouns are clearly distinguished from the adjectives, many of the inferences which he derived from his technically correct proofs become invalidated, and our traditional and intuitive conception of infinity is restored.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.