Unifying theories in mathematics

There have been several attempts in history to reach a unified theory of mathematics. Some of the most respected mathematicians have expressed the view that the whole subject should be fitted into one theory (examples include Hilbert's program and the Langlands program).

The unification of mathematical topics has been called mathematical consolidation:[1] "By a consolidation of two or more concepts or theories T_i we mean the creation of a new theory which incorporates elements of all the T_i into one system which achieves more general implications than are obtainable from any single T_i."

Historical perspective

The process of unification might be seen as helping to define what constitutes mathematics as a discipline.

For example, mechanics and mathematical analysis were commonly combined into one subject during the 18th century, united by the concept of the differential equation, while algebra and geometry were considered largely distinct. Now we consider analysis, algebra, and geometry, but not mechanics, as parts of mathematics because they are primarily deductive formal sciences, while mechanics, like physics, must proceed from observation. There is no major loss of content, with analytical mechanics in the old sense now expressed in terms of symplectic topology, based on the newer theory of manifolds.

Mathematical theories

The term theory is used informally within mathematics to mean a self-consistent body of definitions, axioms, theorems, examples, and so on. (Examples include group theory, Galois theory, control theory, and K-theory.) In particular there is no connotation of the hypothetical. Thus the term unifying theory is more of a sociological term, used in describing the activities of mathematicians; it implies nothing conjectural that would be analogous to an undiscovered scientific link. There is really no cognate within mathematics to such concepts as Proto-World in linguistics or the Gaia hypothesis.

Nonetheless there have been several episodes within the history of mathematics in which sets of individual theorems were found to be special cases of a single unifying result, or in which a single perspective about how to proceed when developing an area of mathematics could be applied fruitfully to multiple branches of the subject.

Geometrical theories

A well-known example was the development of analytic geometry, which in the hands of mathematicians such as Descartes and Fermat showed that many theorems about curves and surfaces of special types could be stated in the then-new language of algebra, and then proved using the same techniques. That is, the theorems were very similar algebraically, even when their geometrical interpretations were distinct.
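
As a small illustration (not taken from the source), a geometric statement about tangency becomes an algebraic statement about a discriminant: the line y = mx + c is tangent to the unit circle exactly when substituting it into the circle's equation produces a quadratic with a repeated root.

\[
x^2 + (mx + c)^2 = 1 \;\Longleftrightarrow\; (1 + m^2)x^2 + 2mcx + (c^2 - 1) = 0,
\qquad
\text{tangency} \iff (2mc)^2 - 4(1 + m^2)(c^2 - 1) = 0 \iff c^2 = 1 + m^2 .
\]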

In 1859, Arthur Cayley initiated a unification of metric geometries through use of the Cayley–Klein metrics. Later Felix Klein used such metrics to provide a foundation for non-Euclidean geometry.
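
In one common formulation (a sketch of the construction, not a quotation), the Cayley–Klein distance between points P and Q is read off from the cross-ratio with the two points A and B in which the line PQ meets a fixed conic, the "absolute":

\[
d(P, Q) = C \,\bigl|\log (P, Q; A, B)\bigr| ,
\]

with the choice of absolute and of the constant C determining whether the resulting metric geometry is elliptic or hyperbolic.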

In 1872, Felix Klein noted that the many branches of geometry which had been developed during the 19th century (affine geometry, projective geometry, hyperbolic geometry, etc.) could all be treated in a uniform way. He did this by considering the groups under which the geometric objects were invariant. This unification of geometry goes by the name of the Erlangen programme.[2]
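
Schematically (an illustrative summary in the Erlangen spirit, not a quotation), each geometry studies the properties invariant under its group, and broader geometries correspond to larger groups:

\[
\underbrace{\mathrm{Isom}(\mathbb{E}^n)}_{\text{Euclidean: lengths, angles}}
\;\subset\;
\underbrace{\mathrm{Aff}(n,\mathbb{R})}_{\text{affine: parallelism, ratios along a line}}
\;\subset\;
\underbrace{\mathrm{PGL}(n+1,\mathbb{R})}_{\text{projective: incidence, cross-ratio}} .
\]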

The general theory of angle can be unified with an invariant measure of area. The hyperbolic angle is defined in terms of area, essentially the area associated with the natural logarithm. The circular angle also has an area interpretation when referred to a circle with radius equal to the square root of two. These areas are invariant under hyperbolic rotation and circular rotation respectively, affine transformations effected by elements of the special linear group SL(2,R). Inspection of that group also reveals shear mappings, which increase or decrease slopes while leaving differences of slope unchanged. A third type of angle, likewise interpreted as an area depending on a difference of slopes, is invariant because shear mappings preserve area.[3]
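
A brief sketch of the three cases (following the cited treatment only loosely): the hyperbolic angle to the point (x, 1/x) on the hyperbola xy = 1 equals the area of the corresponding hyperbolic sector; a circular sector of radius \sqrt{2} has area equal to its central angle; and a shear preserves both slope differences and the triangle area that measures them.

\[
\text{hyperbolic: } \operatorname{area} = \int_1^x \frac{dt}{t} = \ln x ,
\qquad
\text{circular: } \operatorname{area} = \tfrac{1}{2} r^2 \theta = \theta \ \text{ when } r = \sqrt{2} ,
\]
\[
\text{shear: } (x, y) \mapsto (x,\, y + \lambda x) \text{ sends slope } m \mapsto m + \lambda ,
\quad
\text{preserving } \tfrac{1}{2}\lvert m_2 - m_1 \rvert = \operatorname{area}\bigl((0,0),(1,m_1),(1,m_2)\bigr) .
\]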

Through axiomatisation

Early in the 20th century, many parts of mathematics began to be treated by delineating useful sets of axioms and then studying their consequences. Thus, for example, the studies of "hypercomplex numbers", such as those considered by the Quaternion Society, were put onto an axiomatic footing as branches of ring theory (in this case, with the specific meaning of associative algebras over the field of real numbers). In this context, the quotient ring concept is one of the most powerful unifiers.
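
For instance (an illustration of the quotient-ring viewpoint, not drawn from the source), the three classical planar "hypercomplex" systems all arise as quotients of the polynomial ring R[x]:

\[
\mathbb{C} \cong \mathbb{R}[x]/(x^2 + 1), \qquad
\text{split-complex numbers} \cong \mathbb{R}[x]/(x^2 - 1), \qquad
\text{dual numbers} \cong \mathbb{R}[x]/(x^2).
\]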

This was a general change of methodology, since the needs of applications had up until then meant that much of mathematics was taught by means of algorithms (or processes close to being algorithmic). Arithmetic is still taught that way. It was a parallel to the development of mathematical logic as a stand-alone branch of mathematics. By the 1930s symbolic logic itself was adequately included within mathematics.

In most cases, mathematical objects under study can be defined (albeit non-canonically) as sets or, more informally, as sets with additional structure such as an addition operation. Set theory now serves as a lingua franca for the development of mathematical themes.
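
As a standard illustration, even the ordered pair, and hence any binary operation, can be encoded purely in terms of sets (the Kuratowski encoding), so that a "set with additional structure" such as a group reduces to sets alone:

\[
(a, b) := \{\{a\}, \{a, b\}\}, \qquad
\text{a group is a pair } (G, \cdot) \text{ with } \cdot \subseteq (G \times G) \times G \text{ a function satisfying the group axioms}.
\]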

Bourbaki

The cause of axiomatic development was taken up in earnest by the Bourbaki group of mathematicians. Taken to its extreme, this attitude was thought to demand mathematics developed in its greatest generality. One started from the most general axioms, and then specialized, for example, by introducing modules over commutative rings, and limiting to vector spaces over the real numbers only when absolutely necessary. The story proceeded in this fashion, even when the specializations were the theorems of primary interest.
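
Schematically, the ordering runs from the general to the special: the axioms for a module over a commutative ring R make no mention of division, and familiar structures appear as special cases of the choice of R:

\[
R\text{-module:} \qquad R = \mathbb{Z} \;\Rightarrow\; \text{abelian group}, \qquad R \text{ a field} \;\Rightarrow\; \text{vector space}.
\]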

In particular, this perspective placed little value on fields of mathematics (such as combinatorics) whose objects of study are very often special, or found in situations which can only superficially be related to more axiomatic branches of the subject.

Category theory as a rival

Category theory is a unifying theory of mathematics that was initially developed in the second half of the 20th century.[citation needed] In this respect it is an alternative and complement to set theory. A key theme from the "categorical" point of view is that mathematics requires not only certain kinds of objects (Lie groups, Banach spaces, etc.) but also mappings between them that preserve their structure.

In particular, this clarifies exactly what it means for mathematical objects to be considered the same. (For example, are all equilateral triangles the same, or does size matter?) Saunders Mac Lane proposed that any concept with enough 'ubiquity' (occurring in various branches of mathematics) deserved to be isolated and studied in its own right. Category theory is arguably better adapted to that end than any other current approach. The disadvantages of relying on so-called abstract nonsense are a certain blandness and an abstraction that breaks away from the roots in concrete problems. Nevertheless, the methods of category theory have steadily advanced in acceptance, in numerous areas (from D-modules to categorical logic).
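
In categorical terms, "being the same" is made precise by isomorphism: two objects A and B are isomorphic when there are structure-preserving maps in both directions that undo one another (a sketch of the standard definition, not a quotation from the source):

\[
f : A \to B, \quad g : B \to A, \qquad g \circ f = \mathrm{id}_A, \quad f \circ g = \mathrm{id}_B .
\]

On this view all equilateral triangles are "the same" in a category whose maps are similarity transformations, but not in one whose maps are isometries.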

Uniting theories

On a less grand scale, similarities between sets of results in two different branches of mathematics raise the question of whether a unifying framework exists that could explain the parallels. We have already noted the example of analytic geometry, and more generally the field of algebraic geometry thoroughly develops the connections between geometric objects (algebraic varieties, or more generally schemes) and algebraic ones (ideals); the touchstone result here is Hilbert's Nullstellensatz, which roughly speaking shows that there is a natural one-to-one correspondence between the two types of objects.
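
Concretely, over an algebraically closed field k the Nullstellensatz says that passing from an ideal to its zero set and back recovers the radical of the ideal, giving an inclusion-reversing bijection between radical ideals and affine varieties:

\[
I\bigl(V(J)\bigr) = \sqrt{J} \qquad \text{for every ideal } J \subseteq k[x_1, \ldots, x_n].
\]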

One may view other theorems in the same light. For example, the fundamental theorem of Galois theory asserts that there is a one-to-one correspondence between the intermediate fields of a Galois extension and the subgroups of its Galois group. The Taniyama–Shimura conjecture for elliptic curves (now proven) establishes a correspondence between elliptic curves defined over the rational numbers and certain modular forms. A research area sometimes nicknamed Monstrous Moonshine developed connections between modular forms and the finite simple group known as the Monster, starting from nothing more than the surprising observation that in each of them the rather unusual number 196884 arises very naturally. Another field, known as the Langlands program, likewise starts with apparently haphazard similarities (in this case, between number-theoretic results and representations of certain groups) and looks for constructions from which both sets of results would be corollaries.
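
The numerical coincidence behind moonshine can be stated in one line: the coefficient of q in the Fourier expansion of the elliptic modular function j exceeds by one the dimension of the smallest non-trivial irreducible representation of the Monster:

\[
j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^2 + \cdots \quad (q = e^{2\pi i \tau}),
\qquad 196884 = 196883 + 1 .
\]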

Reference list of major unifying concepts

A short list of these theories might include:

Recent developments in relation with modular theory

A well-known example is the Taniyama–Shimura conjecture, now the modularity theorem, which proposed that each elliptic curve over the rational numbers can be translated into a modular form (in such a way as to preserve the associated L-function). There are difficulties in identifying this with an isomorphism, in any strict sense of the word. Certain curves had been known to be both elliptic curves (of genus 1) and modular curves, before the conjecture was formulated (about 1955). The surprising part of the conjecture was the extension to factors of Jacobians of modular curves of genus > 1. It had probably not seemed plausible that there would be 'enough' such rational factors, before the conjecture was enunciated; and in fact the numerical evidence was slight until around 1970, when tables began to confirm it. The case of elliptic curves with complex multiplication was proved by Shimura in 1964. This conjecture stood for decades before being proved in generality.
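
The preservation of L-functions can be made precise (a standard formulation, paraphrased rather than quoted): for an elliptic curve E over the rationals of conductor N, the theorem provides a weight-2 newform f of level N whose L-series agrees with that of the curve,

\[
L(E, s) = L(f, s), \qquad \text{equivalently } a_p(E) = a_p(f) \text{ for all primes } p \nmid N .
\]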

In fact the Langlands program (or philosophy) is much more like a web of unifying conjectures; it really does postulate that the general theory of automorphic forms is regulated by the L-groups introduced by Robert Langlands. His principle of functoriality with respect to the L-group has a very large explanatory value with respect to known types of lifting of automorphic forms (now more broadly studied as automorphic representations). While this theory is in one sense closely linked with the Taniyama–Shimura conjecture, it should be understood that the conjecture actually operates in the opposite direction. It requires the existence of an automorphic form, starting with an object that (very abstractly) lies in a category of motives.
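
In rough form (a heuristic statement only), functoriality predicts that a homomorphism of L-groups induces a transfer of automorphic representations compatible with L-functions: given an L-homomorphism \rho from {}^L H to {}^L G, an automorphic representation \pi of H should lift to an automorphic representation \Pi of G with

\[
L(s, \Pi, r) = L(s, \pi, r \circ \rho) \qquad \text{for every finite-dimensional representation } r \text{ of } {}^L G .
\]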

Another significant related point is that the Langlands approach stands apart from the whole development triggered by monstrous moonshine (connections between elliptic modular functions as Fourier series, and the group representations of the Monster group and other sporadic groups). The Langlands philosophy neither foreshadowed nor was able to include this line of research.

Isomorphism conjectures in K-theory

Another case, which so far is less well developed but covers a wide range of mathematics, is the conjectural basis of some parts of K-theory. The Baum–Connes conjecture, now a long-standing problem, has been joined by others in a group known as the isomorphism conjectures in K-theory. These include the Farrell–Jones conjecture and the Bost conjecture.
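
These conjectures share a common shape (stated here only schematically): an "assembly map" from an equivariant homology theory of a classifying space to the K-theory of an operator algebra or group ring is asserted to be an isomorphism. For Baum–Connes, with G a group, \underline{E}G its classifying space for proper actions, and C^*_r(G) its reduced group C*-algebra, the conjecture is that

\[
\mu : K_*^{G}\bigl(\underline{E}G\bigr) \;\longrightarrow\; K_*\bigl(C^*_r(G)\bigr)
\]

is an isomorphism.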

References

  1. ^ Raymond Wilder (1981) Mathematics as a Cultural System, page 58, Pergamon Press
  2. ^ Thomas Hawkins (1984) "The Erlanger Program of Felix Klein: Reflections on Its Place In the History of Mathematics", Historia Mathematica 11:442–70.
  3. ^ Geometry/Unified Angles at Wikibooks