This article is currently unfinished.

*Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk.*
("God made the integers; all else is the work of man.")
—Leopold Kronecker, via Heinrich Martin Weber

This article is an attempt at an informal justification for the current search for the quasi-mythical "field with one element," 𝔽_{1}.
This isn't really meant for people already familiar with the subjects adjacent to it, instead being intended for a lay audience (of which I am a member).
Despite this, there will be concepts referenced here that are not actually explained in depth, so as to not detract from the flow of the writing.
The reader is encouraged to supplement any sections not understood with additional reading.
This description will be improved later.

Counting has been an essential part of human existence since prehistory, but the abstract notion of "number" is, by comparison, a recent aberration.
One could count a particular kind of object, or label objects as being in a certain order, but numbers, as abstract objects unto themselves, were unheard of.
Much of ancient mathematics fell under what would today be called "applied mathematics," with pure results being made to assist in practical calculation.
Particularly in Greek mathematics, arithmetic was seen as subservient to geometry, as exemplified by Euclid's *Elements*.
That is not to say that the Greeks did not appreciate arithmetic, but that it was expressed through geometric language.

The geometric content of *Elements* should be mulled upon for a moment. Euclid's geometric system consisted of points and lines lying on a plane.
The closest one could get to representing the modern-day, abstract concept of a number was to draw a line segment of the desired length.
From this, one could indirectly work with the numbers that are the lengths. Many of the arithmetic results proven in *Elements* are proven via geometry.

From this representation of numbers as lengths of lines, a few desirable operations quickly become apparent. The first of these operations is simple: take two line segments and adjoin them. This represents the addition of the two underlying numbers. However, there are also times where one wishes to repetitively add a certain quantity a given amount of times. Notably, the quantity one wishes to repetitively add and the amount of times one wishes to do this are, fundamentally, the same kind of object: a quantity. This leads to the idea of multiplying two numbers. Notably, this operation resembles addition, in that they are both commutative and associative. Also, multiplication has something that addition does not: an immediately obvious (to antiquity) identity element, that being the number 1. It will soon become apparent that addition and multiplication are central to our later discussions.

One notable feature of using line segments to represent numbers is that any given representation depends on a given unit length. Therefore, the same line segment can represent different quantities with respect to different unit lengths. As it turns out, this scaling of line segments is controlled by multiplication. However, while one can multiply by certain numbers to capture the phenomenon of scaling, not all forms of scaling can be accounted for. To make up for this, the concept of ratios of natural numbers was introduced. Now, all kinds of scaling can be associated with some quantity.

There is something that we in Modernity find incomplete about this picture: the lack of 0 as a number in its own right.
Of course, people in antiquity had the concept of absence, as opposed to presence, so they could clearly understand equivalents to "there are none of X."
Instead, the trouble that the ancients had was with the recognition that there could be a number representing a measurement of nothing, on par with others.
It took quite a bit of time for this possibility to be recognized, though it was gradually adopted throughout the world.
Even among its earliest adopters, there was still confusion as to its arithmetic properties. Brahmagupta, in his *Brāhmasphuṭasiddhānta*,
could accurately describe most of the basic arithmetic properties of 0, and yet incorrectly stated that a nonzero number divided by 0 was again 0.
Later, Mahavira recognized that this was incorrect, yet his proposed solution was that numbers remain unchanged when divided by 0, which is also incorrect.

However, while the ancients weren't so prone to accept 0 as being on par with other numbers, plenty of other constructions were allowed. Returning to the association of numbers with lengths of line segments, if one could construct a line segment with a specified length, it was a number. For example, Pythagoras' theorem, that the sum of the squares of the two smaller sides of a right triangle is the square of the hypotenuse, led to the recognition that a right triangle with two sides of length 1 must be such that the length of its hypotenuse, when squared, would be 2. So, any number could be given a square root. Likewise, π was defined as the ratio of a circle's circumference to its diameter.

This leads to something of a problem, though: if the square root of 2 is the length of a line segment, it must be a ratio of natural numbers. Yet, no one could find a pair of numbers satisfying this property. Things came crashing down when it was eventually proven that this was impossible. How could it be that there was something that was clearly a quantity that could be measured, but not in terms of natural numbers? Some people tried to reason that this instead proved that not all measurements were numbers, though the usual reaction was instead to avoid the issue.
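For the curious reader, the impossibility can be shown in a few lines of modern notation. The following parity argument is the standard one; it is a reconstruction, not the form in which the Greeks would have phrased it.

```latex
\textbf{Claim.} There are no natural numbers $p, q$ with $\left(\tfrac{p}{q}\right)^2 = 2$.

\textbf{Proof sketch.} Suppose $p^2 = 2q^2$ with $\tfrac{p}{q}$ in lowest terms.
Then $p^2$ is even, so $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, hence $q^2 = 2k^2$, so $q$ is even as well.
But then $p$ and $q$ share the factor $2$, contradicting lowest terms. $\blacksquare$
```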

Even if one could accept 0 as being on par with other numbers, and even if one could somehow include irrationals, there was something seen as more absurd. While we have previously touched upon division as a way to invert multiplication, the same attention was not given to subtraction. Of course, given two numbers where the first is greater than the second, it is simple to subtract the second from the first and get another quantity. If one accepts the concept of 0, then one can even extend this to the less strict requirement of the first being greater than or equal to the second. However, if the first is less than the second, then the notion of subtracting the second from the first produced what was seen as the least sensical idea. Such a number would necessarily be less than 0. In fact, whenever negative solutions were encountered in antiquity, they were discarded for being nonsense.

Perhaps the gravity of how bizarre this concept used to be could be lost on the modern reader.
Numbers represented a measurement of something, which was an indication of presence. If there was at least one of something, it necessarily had presence.
By contrast, the reason 0 took long to be accepted as a full-fledged number was that it represented absence, the opposite of presence.
Therefore, to include 0 in one's number system meant having a collection of indicators of presence, with a sole exception given to an indicator of absence.
But, at least the concept of absence was not alien. Now, imagine declaring the existence of a number said to be less than 0. What does it represent?
It cannot represent presence, because it is not greater than 0. It is tautologically not equal to 0, so it cannot represent absence.
If one were to place all of the numbers in order, one could see that negative numbers must be on the opposite side of 0, so is it opposite the positives?
This cannot be, because if negative numbers are the opposite of positive numbers, then they represent the opposite of presence.
However, we already ruled out this possibility, because the opposite of presence is absence, which is the domain of 0, which these are distinct from.
Even when people eventually accepted the practicality of negative numbers for representing debt, they were still suspicious of them, and justifiably so.
The problem of their existence was so important that Immanuel Kant wrote *Versuch den Begriff der negativen Größen in die Weltweisheit einzuführen*
(*Attempt to Introduce the Concept of Negative Magnitudes into Philosophy*) purely to contemplate them. From this, he concluded that logical negation and real negation were separate concepts.
In brief, negative numbers were so difficult a concept for humanity that they forced Kant to develop his critical philosophy in response.

However, with the begrudging addition (no pun intended) of negative numbers, we now had a beginning of the real number system, albeit entirely unjustified.
Most practical computations could be accomplished purely within this system, with anything not able to be accomplished seen as nonsense.
Thanks to the developments of Muslim mathematicians, we had the beginning of our modern conceptions of algebra (a name whose Arabic origins are blatant).
In this scenario, one is given an equation, or set of equations, in some unknown(s), and tasked with producing numbers satisfying these.
To put it more directly, this was fundamentally about solving polynomials, of which a few solutions had been known since antiquity.
In particular, the quadratic formula, for obtaining roots of arbitrary polynomials of degree 2, had been known for some time.
However, the quadratic formula could only be applied when the discriminant was nonnegative.
What had yet to be developed was a similarly complete theory of cubic polynomials, those of degree 3.
Various attempts at developing a cubic formula, analogous to the quadratic formula, eventually ran into serious roadblocks.
In particular, Gerolamo Cardano was perplexed by the fact that solving cubic equations with 3 real roots forced one to take square roots of negatives.
Of course, this cannot be done, because the square of any number is nonnegative. However, he had the (bizarre for the time) idea to ignore this fact.
Despite having great reservations, he went ahead with the calculation, and managed to obtain a desired solution.
For context, Cardano was one of the first European mathematicians to systematically work with *negative* numbers, which were still worrying to most.
Imagine the great pain of having to further abstract from an object that the vast majority of his contemporaries considered a fictitious convenience.
However, this was an important step in the further progression of mathematics, being one of the first times that complex numbers were shown to be useful.
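Cardano's predicament can be replayed numerically. The cubic below, x³ = 15x + 4, is the classic example associated with Bombelli (my choice of illustration, not an equation from this article): it has three real roots, yet the cubic formula routes through the square root of −121. Carrying the complex arithmetic through anyway recovers the real root 4. A modern sketch in Python, not a historical reconstruction:

```python
import cmath

# Depressed cubic x^3 + p*x + q = 0; here x^3 - 15x - 4 = 0,
# whose three roots (4, -2 + sqrt(3), -2 - sqrt(3)) are all real.
p, q = -15.0, -4.0

# The quantity under the square root, (q/2)^2 + (p/3)^3, is negative here:
# the "casus irreducibilis" that forced square roots of negatives.
disc = (q / 2) ** 2 + (p / 3) ** 3
assert disc < 0                          # = -121

# Proceed anyway, as Cardano did: take the square root of a negative
# number and press on with complex arithmetic.
sqrt_disc = cmath.sqrt(disc)             # = 11i
u = (-q / 2 + sqrt_disc) ** (1 / 3)      # principal cube root of 2 + 11i = 2 + i
v = (-q / 2 - sqrt_disc) ** (1 / 3)      # principal cube root of 2 - 11i = 2 - i

root = u + v                             # the imaginary parts cancel
print(root.real)                         # ≈ 4.0, a perfectly real solution
```

Note that the cube roots must be chosen so that their product is −p/3 = 5; the principal branches happen to satisfy this here ((2+i)(2−i) = 5).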

Another major breakthrough in this era was the insight of René Descartes and Pierre de Fermat that geometry could be described by real coordinates. The fundamental idea behind this notion is that the real numbers can be put into direct correspondence with an infinitely extended line. Therefore, 1-dimensional space can be represented by a collection of real numbers, hence why today we refer to the "real number line." Higher-dimensional spaces, likewise, continue this correspondence, with new axes of directions introducing further real number coordinates. This meant that, on one hand, polynomials could define curves in space, and on the other, that curves could be described as satisfying certain polynomials. It is somewhat ironic that Descartes criticized complex numbers, being the first to call square roots of negative numbers "imaginary" numbers, as these two developments would eventually combine into a very significant leap in mathematics during the 20th century, but that is for later.

Another development that would shed light on the true nature of the real number system would be the gradual development of what is now called analysis.
This history actually stretches back to antiquity, with the Greeks again turning up as predecessors to more modern results.
One of the primary motivations for this development was the "method of exhaustion" used by certain mathematicians, in order to compute areas of shapes.
The basic idea was to create a sequence of polygons that lie inside the given object, whose shapes eventually resemble the given object.
At the same time, there would be another sequence of polygons that enclosed the object, whose shapes would also eventually resemble the object.
The claim was that the areas of the polygons in both of these sequences would simultaneously become arbitrarily close to the actual area.
This, by the way, is how the areas of circles were originally calculated. The problem was that there was no *formal* justification for this.
Of course, it *feels* correct, and we can retrospectively prove that it is indeed a correct method of calculation, but this was murky at the time.
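The squeeze can be replayed numerically for the unit circle: inscribed and circumscribed regular n-gons bound the area from both sides, and both bounds close in on π. The area formulas below are standard; this is an illustration of the idea, not a claim about how the ancient computations were actually organized.

```python
import math

def inscribed_area(n: int) -> float:
    """Area of a regular n-gon inscribed in the unit circle."""
    return (n / 2) * math.sin(2 * math.pi / n)

def circumscribed_area(n: int) -> float:
    """Area of a regular n-gon circumscribed about the unit circle."""
    return n * math.tan(math.pi / n)

# Doubling the number of sides tightens both bounds; 96 sides echoes the
# polygon Archimedes used for his bounds on pi.
for n in (6, 24, 96, 384):
    lo, hi = inscribed_area(n), circumscribed_area(n)
    print(f"{n:4d}-gon: {lo:.6f} < area < {hi:.6f}")
```

Both sequences become arbitrarily close to the same value, the area of the circle, which is exactly the claim the method of exhaustion rests on.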

Another one of the main motivations was the question of infinitesimals, or rather, the twin questions of if they actually existed and, if so, what they were.
While the idea of infinity, particularly in its potential form, is well-known today, less known is the concept of the infinitesimal, "infinite smallness."
An infinitesimal quantity would therefore be a quantity smaller in magnitude than any finite nonzero quantity. Of course, zero itself fits this description.
However, what most people really mean by an "infinitesimal quantity" is one smaller in magnitude than any finite positive number, yet which is *nonzero*.
This definition has the potential to seem self-contradictory, but the temptation to have such objects readily becomes too strong to ignore.
Democritus, in particular, was known for considering the notion of dividing a finite object into an infinite number of infinitesimal segments.
However, this concept had its doubters, most famously Zeno of Elea, who put forth the paradoxes that now bear his name.

This problem, while controversial, was originally able to be ignored for many practical purposes, up until Modernity, when it became relevant in mechanics.
There were physical problems related to celestial motion that demanded one to discuss seemingly nonsensical notions such as "motion at an instant."
The informal solution was to extend the notion of velocity as a ratio of motion and time to the infinitesimal context.
This solution is more well-known today to many a student as the derivative of a function at a point.
Likewise, what is today called the integral was originally defined quite literally as an infinite sum of infinitesimal areas.
The integral symbol looks like an S because it *is* an S, standing for "sum," just as Σ stands for "sum" and Π stands for "product."

This introduced one of the greatest crises in mathematical thought up to that point. The notion of infinitesimal quantities was already controversial, but to then take a ratio of infinitesimals and get a finite quantity was ludicrous. Yet, when these completely unfounded methods were applied to very important physical problems, seemingly correct answers emerged. That these two facts could somehow be simultaneously true led to a question: is the infinitesimal calculus a parlor trick, or can it be made rigorous?

What eventually led to calculus being put on rigorous foundations is the concept of the limit, due to Augustin-Louis Cauchy and Karl Weierstraß.
The idea was essentially a mathematical formalization of the notion of continuity, formulated in what is today called the epsilon-delta formalism.
In essence, if, for any positive number *ε*, there is another positive number *δ* such that, given any point of distance less than *δ*
from a point *x*, the output of a function *f* at this point is of distance less than *ε* from *f(x)*, then it is continuous at *x*.
We can also adapt this definition from continuous functions to discrete sequences of points. Instead of talking about points closer than *δ*,
we talk about the terms whose indices are greater than or equal to some natural number *N*, and then proceed analogously.
It is this notion of the limit that allows one to finally make sense of concepts such as the derivative or the integral.
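The discrete form of the definition can be made concrete: for the sequence 1/n and any tolerance ε, one can exhibit a valid threshold N explicitly. A small sketch (the function name `threshold` is my own, and it deliberately returns one more than strictly needed to stay safely past the boundary):

```python
import math

def threshold(eps: float) -> int:
    """A safe N: every term 1/n with n >= N satisfies |1/n - 0| < eps."""
    return math.ceil(1 / eps) + 1

# The sequence a_n = 1/n converges to 0: for each eps, a valid N exists,
# and every term past it lies within eps of the limit.
for eps in (0.5, 0.1, 0.01):
    N = threshold(eps)
    assert all(abs(1 / n - 0) < eps for n in range(N, N + 1000))
    print(f"eps = {eps}: N = {N}")
```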

Of note is that this definition is not just of great practical significance, but of great philosophical significance, as well. In particular, the interplay of discrete and continuous objects in the definition of convergent sequences resolves Zeno's paradox of motion. This, in turn, means that limits are of great significance to all dialectical thinking, as Hegel identified Zeno as the origin of the dialectic.

Equipping the real numbers with their canonical metric was of such importance that it was rapidly extended to other contexts. Of course, there is the obvious extension of equipping Euclidean spaces of higher dimensions with their canonical metrics. However, the wealth of concepts depending only on the existence of some metric was so great that the notion was quickly generalized far beyond these examples. A metric is defined to be a function of two variables on some set with outputs in the nonnegative real numbers, with the following properties:

- Symmetry: *M*(*x*,*y*) = *M*(*y*,*x*)
- Separation: *M*(*x*,*y*) = 0 if and only if *x* = *y*
- Triangle Inequality: *M*(*x*,*y*) ≤ *M*(*x*,*z*) + *M*(*z*,*y*)

The triangle inequality might seem a bit esoteric in comparison with the relatively understandable properties of symmetry and separation. However, it simply states that taking the shortest path between two points is identical to going directly from the first point to the second point. This means that it is impossible to find a "roundabout" path that is somehow shorter than the direct path. As a counterexample, if we were to equip the real numbers with a "distance function" equal to the square of the canonical metric, such roundabout paths would exist. Under this system, the distance between 0 and 1 would be 1, and the distance between 1 and 2 would be 1, but the distance between 0 and 2 would be 4. If one were to "travel" to 2 from 0, it would be faster to stop at another point than to take the direct path. Thus, we must require the triangle inequality in order to have metrics that make some amount of intuitive sense.
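The counterexample above is easy to check mechanically. Below, `d_sq` (my own name) is the squared canonical distance on the reals; it is still symmetric and separating, but the triangle inequality fails at the points 0, 1, and 2.

```python
def d_sq(x: float, y: float) -> float:
    """The *square* of the usual distance on the real line -- not a metric."""
    return (x - y) ** 2

# Symmetry and separation still hold...
assert d_sq(0, 2) == d_sq(2, 0)
assert d_sq(1, 1) == 0

# ...but the triangle inequality fails: the "direct" trip from 0 to 2
# is longer than stopping off at 1 along the way.
direct = d_sq(0, 2)               # 4
via_1 = d_sq(0, 1) + d_sq(1, 2)   # 1 + 1 = 2
assert direct > via_1
print(direct, via_1)              # prints: 4 2
```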

A set equipped with a metric is then a metric space. There are quite a few results from the classical case of Euclidean metrics that can be translated here.
Of note is that, because we have a notion of determining which points are sufficiently close to some particular point *x*,
we can then discuss concepts of things converging to that point, such as sequences. However, one must be careful about this notion.
Unlike in the Euclidean case, it is possible to have sequences that have enough properties of convergent sequences that they "should" converge, yet do not.
This notion is that of a Cauchy sequence, which is a sequence whose elements, after some point, become arbitrarily close to each other.
In general metric spaces, it is possible to have Cauchy sequences that do not actually converge to any point.
The metric spaces in which all Cauchy sequences actually *do* converge to some point are known as complete metric spaces.
Given an arbitrary metric space, there is a complete metric space that is the "freest" complete metric space that it can be embedded into.
This is known as the completion of the metric space, and it is intimately linked with various formal constructions of the real numbers.
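The classic example of a Cauchy sequence with no limit "in its home space" lives in the rationals: the Babylonian iteration below produces exact rational numbers whose squares approach 2, so the sequence is Cauchy in the rationals, yet its would-be limit, √2, is not rational (as argued earlier in this article). A sketch using Python's exact `Fraction` type; the function name is my own:

```python
from fractions import Fraction

def babylonian(n_steps: int) -> list[Fraction]:
    """Babylonian (Newton) iteration toward sqrt(2), in exact rationals."""
    x = Fraction(1)
    seq = [x]
    for _ in range(n_steps):
        x = (x + 2 / x) / 2   # stays rational at every step
        seq.append(x)
    return seq

seq = babylonian(6)   # 1, 3/2, 17/12, 577/408, ...

# The terms huddle together (Cauchy): consecutive gaps shrink rapidly...
gaps = [abs(b - a) for a, b in zip(seq, seq[1:])]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))

# ...and the squares approach 2, but no rational number can be the limit.
print(float(seq[-1]) ** 2)   # very close to 2
```

Completing the rationals with respect to this metric fills in exactly such missing limit points, which is one of the standard constructions of the real numbers.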

Of course, there are also many properties induced by a metric that can be discussed without explicitly referring to the metric itself.
The chief example of this is the notion of a sequence converging by virtue of becoming arbitrarily close to a given point.
While this is classically articulated by a discussion of arbitrarily small distances, it can also be articulated by the concept of neighborhoods of points.
In the case of metric spaces, the open balls around a given point are clearly neighborhoods of this point, but there are other neighborhoods as well.
In particular, arbitrary unions of neighborhoods are again neighborhoods of that point, and so are finite intersections of neighborhoods.
The evident generalization from neighborhoods induced by a metric is to instead define the neighborhoods *a priori*.
This is the modern concept of a topological space, which is a set equipped with a collection of subsets satisfying the following properties:

- The entire set is an open subset of itself.
- The empty subset is an open subset.
- Arbitrary unions of open subsets are again open subsets.
- Finite intersections of open subsets are again open subsets.

From this definition, we can then define the topology on a metric space induced by its metric as the smallest collection of its subsets that contains all of the open balls around its points and satisfies all of the above axioms. From this, the real numbers can be given a topology, generated by the open intervals on the number line. Similarly, the natural numbers and the integers can be given topologies, but by contrast, these topologies are discrete, meaning that all subsets are open. While we unfortunately lose the concept of a space being complete, topological spaces are a much broader class of spaces compared to metric spaces.
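The four axioms are finitely checkable for small examples. The helper below (my own construction and naming) verifies them on a three-point set, both for a small non-discrete topology and for the discrete topology mentioned above, where every subset is open.

```python
from itertools import chain, combinations

def is_topology(X: frozenset, opens: set[frozenset]) -> bool:
    """Check the four topology axioms on a finite set X."""
    if X not in opens or frozenset() not in opens:
        return False
    # On a finite set, closure under arbitrary unions and finite
    # intersections reduces to closure under pairwise union/intersection.
    for U in opens:
        for V in opens:
            if U | V not in opens or U & V not in opens:
                return False
    return True

X = frozenset({0, 1, 2})

# A non-discrete topology: the opens are nested, so unions and
# intersections of opens never leave the collection.
nested = {frozenset(), frozenset({0}), frozenset({0, 1}), X}
assert is_topology(X, nested)

# The discrete topology: every subset of X is open.
powerset = set(map(frozenset, chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))))
assert is_topology(X, powerset)

# Dropping the empty set violates the axioms.
assert not is_topology(X, {frozenset({0}), X})
```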

Of course, while topological spaces exist in a very general context, one often hopes for a space that at least partially resembles a reasonable space. In particular, one often desires topological spaces that locally resemble Euclidean space, even if they globally do not resemble them. A simple example is the circle. While it is very clearly not equivalent to a line, it is true that it locally resembles a line. Similarly, a sphere is very clearly not equivalent to a plane, but it is true that it locally resembles a plane. This general notion is captured by the idea of a manifold, which is a topological space that is covered by open subsets of Euclidean spaces. More precisely, there is a family of maps from a collection of open subsets of Euclidean spaces to this space, subject to the following conditions:

- Every point in the space lies in the image of at least one of these maps.
- Each of the maps induces a homeomorphism (isomorphism of topological spaces) between its domain and its image within the space.

The existence of these maps is desirable, as it allows one to work with the space in a manner where local phenomena depend only on these "coordinate charts." From these, much of real analysis in the classical case can be extended to manifolds. Except, not all of it can be extended quite yet. Remember the derivative? Discussions of sufficiently smooth (read: sufficiently differentiable) functions need some extra information to be brought over. The coordinate charts of our manifold induce "transition maps" between the open subsets covering the space, and these transition maps must themselves be differentiable for the notion of a differentiable function on the manifold to be well-defined; a manifold equipped with such charts is called a smooth manifold.
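The circle example can be made concrete with stereographic projection, one standard choice of charts (there are others): project from the north pole and from the south pole. On the overlap, the transition map works out to t ↦ 1/t, which is differentiable wherever both charts apply. A numerical sketch, with names of my own choosing:

```python
# Chart A: stereographic projection of the unit circle from the north pole (0, 1).
def chart_north(x: float, y: float) -> float:
    return x / (1 - y)          # defined everywhere except the north pole

# Chart B: projection from the south pole (0, -1).
def chart_south(x: float, y: float) -> float:
    return x / (1 + y)

# Inverse of chart A: the circle point whose north-chart coordinate is t.
def chart_north_inv(t: float) -> tuple[float, float]:
    return 2 * t / (1 + t * t), (t * t - 1) / (1 + t * t)

# On the overlap (everything except the two poles), the transition map
# chart_south ∘ chart_north_inv is t -> 1/t, smooth away from t = 0.
for t in (-3.0, -0.5, 0.25, 2.0, 10.0):
    x, y = chart_north_inv(t)
    assert abs(x * x + y * y - 1) < 1e-12          # the point is on the circle
    assert abs(chart_north(x, y) - t) < 1e-12      # chart A round-trips
    assert abs(chart_south(x, y) - 1 / t) < 1e-12  # transition map is 1/t
```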

The foundation of Cartesian analytic geometry is the identification of the real numbers with the continuum. This assumption has proven its fruitfulness, and feels intuitively correct, yet is it necessarily the case that this is so? To show that this assumption is indeed reasonable, we will attempt to show that the real numbers must arise from any reasonable geometric system. In opposition to analytic geometry, which constructs geometry from the reals, we will use synthetic geometry, where the geometry is what is fundamental.

The first perspective on synthetic geometry is incidence geometry, and in particular its specialization to projective geometry. In this setting, the two fundamental objects are points and lines, where there is a relation between them referred to as "incidence." This relation is interpreted as the truth value of the statement "the given point lies on the given line." Of note is that lines are not merely specified collections of points, but are full objects in and of themselves. While not identical, this is related to the notion of cohesion (as in, a context in which points can be considered to "cohere" together). In general, it does not make sense to think of an object in a cohesive context as nothing more than a collection of points, but as something more. The same vague notion can be brought to the projective context, where it is merely coincidental that a line can be identified with the points incident to it.

The basic idea behind projective geometry is that it is exactly like everyday geometry, except there is no such thing as the notion of parallel lines.
In our world, given a line and a point not incident to it, there is a unique line incident to that point
which shares no points in common with the first line. Every other line incident to that point has a unique intersection with the first line.
Projective geometry is, in a sense, simpler than our own geometry, in that any given pair of distinct lines has the same set of properties as another pair.
However, reducing projective geometry to "there are no parallel lines" is misleading, because there are other geometries satisfying this property.
In particular, if we take a sphere to be our geometry, where the lines are its great circles, lines intersect at exactly *two* points, not one.
Instead, we want to model projective geometry on what Euclidean geometry would be like if its "points at infinity" were adjoined to it.
The intuition is that the reason parallel lines "don't intersect" is because they actually *do* intersect, just at a "point at infinity."

As it turns out, this intuitive picture can be described using a small collection of axioms. They are as follows:

- For every pair of distinct points, there exists exactly 1 line incident to both of them.
- For every pair of distinct lines, there exists exactly 1 point incident to both of them.
- There exist 4 distinct points such that no line is incident to more than 2 of them.

These axioms define what is called a projective plane. In fact, these axioms are satisfied by "the" projective plane, defined over the reals.
This space can be constructed by taking a 3-dimensional real vector space, removing the origin, and taking equivalence under scaling by nonzero real numbers.
In this space, the "points" are represented by lines through the origin in 3-dimensional space, and "lines" by planes through the origin.
There is a standard embedding of Euclidean 2-dimensional space into this projective plane,
with the point (*a*,*b*) being sent to the projective point [*a*,*b*,1],
the equivalence class of Euclidean 3-dimensional points that can be rescaled to be of this form. Of note is that not all projective points are of this form.
For instance, a projective point of the form [*a*,1,0] cannot be in the image of this embedding. These points are "points at infinity."
There is another standard embedding, of the real line into the projective plane, sending the point (*a*) to the projective point [*a*,1,0].
However, there is still one more point left unaccounted for: the point [1,0,0]. With this point, all points are now accounted for.
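The same quotient construction works over any field, and over the field with two elements it is small enough to check the projective-plane axioms exhaustively. Over 𝔽₂ the only nonzero scalar is 1, so points are simply the seven nonzero vectors in 𝔽₂³, lines are the seven nonzero linear forms, and incidence is a dot product vanishing mod 2. The result is the Fano plane; the code below is my own sketch of this standard construction:

```python
from itertools import product, combinations

# Points and lines of the projective plane over F_2: nonzero triples over {0, 1}.
# (Over F_2 the only nonzero scalar is 1, so no quotient by scaling is needed.)
vectors = [v for v in product((0, 1), repeat=3) if any(v)]
points, lines = vectors, vectors

def incident(p, l) -> bool:
    """Point p lies on line l iff their dot product vanishes mod 2."""
    return sum(a * b for a, b in zip(p, l)) % 2 == 0

# Axiom 1: every pair of distinct points lies on exactly one common line.
for p, q in combinations(points, 2):
    assert sum(1 for l in lines if incident(p, l) and incident(q, l)) == 1

# Axiom 2: every pair of distinct lines meets in exactly one common point.
for l, m in combinations(lines, 2):
    assert sum(1 for p in points if incident(p, l) and incident(p, m)) == 1

# Axiom 3: four points with no line through more than two of them.
quad = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
assert all(sum(1 for p in quad if incident(p, l)) <= 2 for l in lines)

print(len(points), "points,", len(lines), "lines")  # 7 points, 7 lines
```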

Of course, the real projective plane is not the only model of our synthetic notion of a projective plane.
One can also construct a projective plane in the same manner as before, but replacing the real numbers with the complex numbers.
While the notion of projective coordinates makes sense in this context, one must remember that "lines" are *complex lines*, which are real planes.
Likewise, "planes" are *complex planes*, which are real 4-spaces.
Complex algebraic notions can be confusing if one does not remember that they have twice the real dimension of their real counterparts.
Keeping this in mind, however, one also gets models of our synthetic projective geometry over the complex numbers.

Of course, these are only models of 2-dimensional projective space. Can we extend this notion to include other dimensionalities? This is, in fact, possible, and can also be modeled by a small collection of axioms. They are as follows:

- For every pair of distinct points, there exists exactly 1 line incident to both of them.
- Given 4 distinct points *a*, *b*, *c*, and *d* such that the lines *ab* and *cd* have a point incident to both of them, the lines *ac* and *bd* do as well.
- Any line has at least 3 distinct points incident to it.

This is similar to our previous definition, but it is broader, allowing for models that are not models of projective plane geometry.
In fact, every possible dimensionality can be represented by some model satisfying these axioms.
We will say that a projective space is of dimension *d* if it can be generated by *d*+1 points, and no fewer.
What is meant by "generation" here is that the smallest subspace containing our *d*+1 points, such that all points incident to a line
between two points in our subspace are also included in the subspace, is necessarily the entire space.
This mimics a similar result in Euclidean geometry.

In low dimensionalities, there are a few other characterizations of particular dimensions, as follows:

- A space with no points nor lines is (-1)-dimensional (a somewhat degenerate case).
- A space with exactly 1 point but no lines is 0-dimensional.
- A space with more than 1 point but exactly 1 line is 1-dimensional.
- A space with more than 1 line where distinct lines always intersect at a point is 2-dimensional.
- All other spaces have dimension at least 3.

Of note is that projective spaces of dimension at least 3 do not satisfy our desire to have all distinct lines intersect somewhere. However, there is an analogous statement that always holds: distinct hyperplanes (subspaces of dimension one less than the ambient space) always intersect. As for the spaces of dimension 2, perhaps this is not obvious, but 2-dimensional projective spaces in the broader sense defined here are equivalent to our earlier definition of projective planes, meaning that there are no "extra" examples on either side.

As with the previous examples, one finds that producing "the" projective space of a given dimension produces a space which satisfies all of the given axioms. Not only is this true of the real projective spaces, but also of the complex projective spaces. One can also produce such spaces over other fields, such as the rational numbers, an algebraic extension of the rationals such as the Gaussian numbers, or even a finite field. With some extra care, one can even define projective spaces over noncommutative division rings, such as the quaternions. While one can produce plenty of examples of projective spaces through these constructions, is it the case that they are all of this form?

In dimension at most 1, projective spaces are determined up to isomorphism by how many points they have. In fact, for almost all cardinal numbers, there is a unique (up to isomorphism) projective space of dimension at most 1 with that number of points. The only exception is that there is no projective space with exactly 2 points, which can be deduced from the given axioms. Unfortunately, these projective spaces are not particularly interesting, with all of the points "cohering together," if you will. As for our question about projective spaces that do not arise from analytic examples, we already have a few. For instance, the projective line containing exactly 7 distinct points cannot arise in this manner. It cannot arise from a division ring, because any such division ring would have to have exactly 6 elements, which is known to be impossible. A similar statement holds for the line with 11 distinct points, because there is no division ring with exactly 10 elements.
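The counting behind these exclusions can be sketched in code. A projective line over a field with q elements has q + 1 points, and Wedderburn's theorem says every finite division ring is a field, whose size must be a prime power. So a projective line with n points can come from this construction only if n − 1 is a prime power; the helper below (my own naming) shows that 7 and 11 points fail the test:

```python
def is_prime_power(n: int) -> bool:
    """True iff n = p^k for some prime p and k >= 1."""
    if n < 2:
        return False
    for p in range(2, n + 1):
        if n % p == 0:                 # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return n == 1              # n was a pure power of p
    return False

# A line over a field of size q has q + 1 points, so a line with n points
# is "analytic" only if n - 1 is a prime power (by Wedderburn's theorem).
for n_points in range(3, 13):
    ok = is_prime_power(n_points - 1)
    verdict = "possible" if ok else f"no division ring of size {n_points - 1}"
    print(f"{n_points:2d} points: {verdict}")

# 7 points would need a division ring with 6 elements; 11 would need 10.
assert not is_prime_power(6) and not is_prime_power(10)
```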

One only starts getting interesting projective spaces in dimension at least 2. It is in this dimension where the question of coordinates comes into play.

This article is currently unfinished.