
Why do we use ‘x’ for the unknown?

The unknown from the Arabs

There is a very entertaining video about this topic by Terry Moore!
In a nutshell, Terry Moore suggests that the use of x to represent an unknown can be traced back to the Arabic word šay’ شيء = “thing” (pronounced shay), used in Arabic algebraic texts. The word was subsequently taken into Old Spanish with the pronunciation shay; because Spanish did not have a consonant for the sh sound, the Greek letter χ (chi) was used in transcribing the Arabic, and the word was written χei and soon habitually abbreviated to χ. This was later latinised to x.

The unknown from the Greeks

In reality, the ancient Greek text Arithmetica by Diophantus (often considered the Father of Algebra) predates the Arabic study of the subject and was itself translated into Arabic for that purpose. Only six of the thirteen volumes of the work survive, but they contain over a hundred general classes of algebraic equations.

You can read about the notation Diophantus used in James Gow’s A Short History of Greek Mathematics. It includes a standard symbol for the “unknown”, apparently a cursive lower-case sigma (essentially an s). But since that character did not yet exist in Diophantus’ time, the symbol is assumed to be inherited from the nearly identical Egyptian symbol for a blank papyrus scroll, which is also known to have been used to stand for an “unknown”. Much was lost when the Romans burned Alexandria, so there are gaps in the record; it is presumed from his writing, though, that Diophantus was using what he knew to be a well-established system of notation. There are also numerous other uses of letters to represent variables or constants beyond the “unknown” itself.

The earliest use of an “unknown” in calculations was by the Egyptians, although the Babylonians apparently made limited use of one as well. So the Greeks built upon the knowledge of the Egyptians and Babylonians (and, some say, the Indians), and the Islamic scholars then built upon the knowledge of the Greeks and Indians (and, of course, the Babylonians). And then, after emerging from their benighted Middle Ages, European scholars built on top of that.

How Europe adopted the notation

How this notation was adopted in Europe is not entirely clear. It is an oversimplification to think of algebra as something smuggled from the Arabic world to the Christian world across the Pyrénées; there were all sorts of cultural exchanges between the two civilisations, in many different places. Most of the best-known Renaissance mathematicians were Italians, and indeed, because of the commercial establishments that the Venetians and the Genoese had founded in the Eastern Mediterranean, they had many contacts with the Islamic world. According to Victor Katz in his A History of Mathematics: An Introduction:

Early in the fifteenth century (…) some of the abacists began to substitute abbreviations for unknowns. For example, in place of the standard words cosa [thing], censo [square], cubo [cube] and radice [root], some authors used the abbreviations ‘c’, ‘ce’, ‘cu’, and ‘R’. (…) This change was a slow one. New symbols gradually came into use in the fifteenth and sixteenth centuries, but modern algebraic symbolism was not fully formed until the mid-seventeenth century.
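To see what those abbreviations amount to, here is a minimal dictionary in today’s symbols. The correspondence is mine, drawn directly from the glosses in Katz’s quotation; note that the abacists wrote one named symbol per power of the unknown, rather than a base-and-exponent notation.

% Abacist abbreviation      ->  modern equivalent
% c  (cosa,   "thing")      ->  the unknown itself
% ce (censo,  "square")     ->  its square
% cu (cubo,   "cube")       ->  its cube
% R  (radice, "root")       ->  the root sign
\[
\begin{aligned}
  \text{c}  &\;\longleftrightarrow\; x \\
  \text{ce} &\;\longleftrightarrow\; x^{2} \\
  \text{cu} &\;\longleftrightarrow\; x^{3} \\
  \text{R}  &\;\longleftrightarrow\; \sqrt{\;\;}
\end{aligned}
\]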

As Florian Cajori summarises in his A History of Mathematics,

the use of z, y, x (…) to represent unknowns is due to René Descartes, in his La géométrie (1637). Without comment, he introduces the use of the first letters of the alphabet to signify known quantities and the use of the last letters to signify unknown quantities.
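That convention is still in force today. As a minimal illustration (the example is mine, not Cajori’s): in the general quadratic equation, the letters from the start of the alphabet stand for the given coefficients, while x stands for the unknown being solved for:

% a, b, c: known quantities (first letters of the alphabet)
% x:       the unknown (a last letter of the alphabet)
\[
  a x^{2} + b x + c = 0
  \qquad\Longrightarrow\qquad
  x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
\]

Descartes’ split even survives in modern programming and statistics, where x, y, z conventionally name the variables and a, b, c the constants.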

__________________________

Inspired by Terry Moore, “Why is ‘x’ the unknown?”
