What is IQ?

IQ is a type of standard score that indicates how far above or below their peer group an individual stands in mental ability. The peer group score is an IQ of 100; this is obtained by applying the same test to huge numbers of people from all socio-economic strata of society and taking the average.

The term ‘IQ’ was coined in 1912 by the psychologist William Stern in relation to the German term Intelligenzquotient. At that time, IQ was calculated as the ratio of mental age to chronological age, multiplied by 100. So, if an individual of 10 years of age had a mental age of 10, their IQ would be 100. However, if their mental age was greater than their chronological age (e.g., 12 rather than 10), their IQ would be 120. Similarly, if their mental age was lower than their chronological age, their IQ would be lower than 100.
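To make the arithmetic concrete, here is a minimal sketch of the ratio formula in Python (the function name and example values are illustrative, not part of any historical test):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(10, 10))  # 100.0 -- mental age matches chronological age
print(ratio_iq(12, 10))  # 120.0 -- mental age two years ahead
print(ratio_iq(8, 10))   # 80.0  -- mental age two years behind
```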


When current IQ tests were developed, the average score of the norming sample was defined as IQ 100, and one standard deviation (a statistical measure of how widely scores spread around the mean) was set at a fixed number of IQ points, for example 16 or 24, above or below 100. Mensa admits individuals who score in the top 2% of the population, and it accepts many different tests, provided they have been standardised and normed, and approved by professional psychologists’ associations. Two of the best-known IQ tests are the ‘Stanford-Binet’ and the ‘Cattell’ (explained in more detail below). In practice, qualifying for Mensa in the top 2% means scoring 132 or more on the Stanford-Binet test (which uses a standard deviation of 16) or 148 or more on the Cattell equivalent (which uses a standard deviation of 24).
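To see why two such different raw scores mark the same percentile, here is a minimal sketch using Python’s standard-library statistics.NormalDist, assuming scores are normally distributed with the mean and standard deviations given above:

```python
from statistics import NormalDist

# Deviation IQ: mean 100, one SD worth 16 (Stanford-Binet) or 24 (Cattell) points.
stanford_binet = NormalDist(mu=100, sigma=16)
cattell = NormalDist(mu=100, sigma=24)

# Fraction of the population scoring below each Mensa cutoff.
print(f"{stanford_binet.cdf(132):.4f}")  # 0.9772 -> roughly the top 2%
print(f"{cattell.cdf(148):.4f}")         # 0.9772 -> roughly the top 2%
```

Both cutoffs sit exactly two standard deviations above the mean, which is why the very different raw numbers pick out the same slice of the population.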

Measuring Intelligence – Noteworthy Contributors
Sir Francis Galton
In 1884, Sir Francis Galton became the first scientist to attempt to devise a modern test of intelligence. In his open laboratory, people could have the acuity of their vision and hearing measured, as well as their reaction times to different stimuli.

James McKeen Cattell
The world’s first mental test, created by James McKeen Cattell in 1890, consisted of similar tasks, almost all of them measuring the speed and accuracy of perception. It soon turned out, however, that such tasks could not predict academic achievement; they are therefore probably imperfect measures of anything we would call intelligence.

Alfred Binet
The first modern-day IQ test was created by Alfred Binet in 1905. Unlike Galton, he was not motivated by pure scientific inquiry. Rather, he had a very practical aim in mind: to identify children who could not keep up with their peers in the educational system that had recently been made compulsory for all.


Binet’s test consisted of knowledge questions as well as ones requiring simple reasoning. Besides test items, Binet also needed an external criterion of validity, which he found in age. Indeed, even though there is substantial variation in the pace of development, older children are by and large more cognitively advanced than younger ones. Binet therefore identified the age at which children, on average, became capable of solving each item, and categorized items accordingly. In this way, he could estimate a child’s position relative to their peers: if a child, for instance, was capable of solving items that were, on average, only solved by children two years older, then that child would be two years ahead in mental development.
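As a rough illustration of this scoring logic, the sketch below estimates a mental age from hypothetical item norms (the items and norm ages are invented for the example; Binet in fact grouped items into age levels rather than averaging them):

```python
# Hypothetical norms: the age at which the average child first solves each item.
ITEM_NORM_AGES = {
    "copy_a_square": 4,
    "count_thirteen_pennies": 6,
    "name_days_of_week": 8,
    "define_abstract_words": 10,
}

def estimate_mental_age(items_solved: list[str]) -> float:
    """Estimate mental age as the mean norm age of the items a child solves."""
    return sum(ITEM_NORM_AGES[item] for item in items_solved) / len(items_solved)

# A child passing the items normed at ages 4, 6 and 8 is estimated at mental age 6.
print(estimate_mental_age(["copy_a_square", "count_thirteen_pennies",
                           "name_days_of_week"]))  # 6.0
```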

William Stern
Subsequently, a more accurate approach was proposed by William Stern, who suggested that, instead of subtracting chronological age from the age estimated from test performance, the latter (termed ‘mental age’) should be divided by the former. Hence the famous ‘intelligence quotient’, or ‘IQ’, was born, defined as (mental age) / (chronological age). Such a calculation indeed turned out to be more in line with other estimates of mental performance. For instance, an 8-year-old performing at the level of a 6-year-old would arrive at the same estimate under Binet’s system as a 6-year-old performing at the level of a 4-year-old: both are two years behind. Under Stern’s quotient, however, the two differ (6/8 = 0.75 versus 4/6 ≈ 0.67), reflecting the fact that the younger child lags behind by a greater proportion of their age.
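The contrast between the two scoring rules is easy to see in code; here is a minimal sketch using the text’s own example (the function names are illustrative):

```python
def binet_difference(mental_age: float, chronological_age: float) -> float:
    """Binet's approach: years ahead of (or behind) one's chronological age."""
    return mental_age - chronological_age

def stern_quotient(mental_age: float, chronological_age: float) -> float:
    """Stern's approach: mental age as a proportion of chronological age."""
    return mental_age / chronological_age

# The 8-year-old at a 6-year-old's level vs. the 6-year-old at a 4-year-old's level.
for mental, chronological in [(6, 8), (4, 6)]:
    print(binet_difference(mental, chronological),
          round(stern_quotient(mental, chronological), 2))
# Output: -2 0.75 and -2 0.67 -- the same difference, but distinct quotients.
```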