Measuring People Up and Database Logic

Average Joe and Plain Jane are two North American ways of expressing, in everyday speech, the typical, normal, just plain average person. In Weten Vraagt Meer Dan Meten (knowing demands more than measuring), the concluding chapter points to the tendency in professional and expert contexts to compare people via measurements, with a particular emphasis on averages.

The various contributors to this book brilliantly signal a key social and epistemic issue, captured by the punchy title. The current situation could also be labelled the “metrics paradigm”: a way of knowing in which the average has indeed been a ubiquitous statistical description of population measurements. Within this paradigm, our knowledge of populations is taking on a new vernacular, that of the probabilistic.

To contrast these related but distinct statistical descriptions, the example of composite images of pathological features is especially useful. Specifically, in the article Voxels in the Brain, I compared the average photographs of Francis Galton with the probabilistic atlases of the ICBM/Human Brain Project.

Average Images

The photographic plates on which the images of different representatives of a group were imprinted were intended as a summation of instances, produced through mechanical objectivity. The individual contributions to the image were meant to reinforce essence through repetition, while erasing (or averaging out) the ad hoc variations that were accidental to the essential features. The controlled exposure of the plates (distance to the subject, time of exposure, presumably also lighting) ensured that each case was objectively and precisely added to the summary image. In this process, the images blended; individual instances were ‘lost’, distilled, merged in the service of producing the whole.

In the probabilistic atlases, there is also a calculative logic that partakes of what became the “metrics paradigm”, of which Galton was a key proponent. Collective measurements produced according to standardised practices are the way to gain insights about normality and abnormality. But the mediation of this knowledge and the apparatus used to distil data into meaningful outcomes differ in significant ways from Galton’s approach.

Understanding these differences matters for thinking through how we treat individuals, how we delineate collectives (populations, pathological groups) and, increasingly, how we position individuals with regard to groups.

Probabilistic Images

In the average photographs, the individual measurements are pictures, physically aggregated as an accumulation of photonic traces on the photographic plate. In the brain atlases, the individual brain scans are digital measurements, organised in a space of voxels. This space is standardised to allow the collation of measurements from different individual brain scans. The value of each voxel can then be expressed as a summary over a collection of measurements, yielding, for example, an average brain as an end product, close in spirit to Galton’s average photographs.
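
To make the contrast concrete, here is a minimal sketch in Python with NumPy of how an ‘average brain’ emerges from stacked digital measurements. It assumes the scans have already been registered to a common voxel grid (in practice the hard part), and the array names and sizes are purely illustrative rather than taken from any actual atlas pipeline.

```python
import numpy as np

# Ten individual scans, assumed already registered to a shared voxel grid.
# Shape: (subjects, x, y, z); the sizes are illustrative only.
rng = np.random.default_rng(0)
scans = rng.normal(loc=100.0, scale=15.0, size=(10, 64, 64, 48))

# The collective description: one value per voxel, averaged across subjects,
# a digital analogue of Galton's blended photographic plate.
average_brain = scans.mean(axis=0)

print(average_brain.shape)  # (64, 64, 48): one summary value per voxel
```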

[Figure: var_6panels. Paul M. Thompson, Michael S. Mega, and Arthur W. Toga 2000]

The mediation of digital scans is different from that of optical photography, however, and the suites of technologies associated with digital scans support a different approach to the manipulation of individual cases in relation to group outcomes.

These possibilities form a ‘database logic’ (De Rijcke and Beaulieu 2014). In digital atlases, there is a different relation to the constitutive elements. Since the individual measurements are not aggregated physically, as in Galton’s photographs, each measurement remains available and retrievable. The individual case is not merged into the whole in producing an average (or other statistical description). This means that the individual can be traced back within the collective, and that this relationship to the collective can itself be calculated and expressed. This is where variation from the norm becomes exquisitely manipulable in a digital context, in the sense that you can do all kinds of things with it.

The relationships between the different measurements can be expressed probabilistically, making variation within a group apprehensible. An important difference is that variation can be explored in a plastic way, much more so than with the optical photographs, which are produced as the outcome of a uni-directional flow in which the individual cases become fixed in the collective representation. The database enables a back-and-forth between the individual and the collective: specific structures in the brain can be quantitatively shown to vary more or less, and individual brains can be shown to deviate more or less from the norm. Probabilities, rather than averages, have become the dominant normative descriptor.
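
A minimal sketch of what this back-and-forth might look like computationally, again in Python with NumPy (and SciPy for the tail probability). This is my illustration of the database logic, not the atlas software itself: the point is simply that each scan stays retrievable under its own key, and that an individual’s relation to the group can be expressed probabilistically (here as a per-voxel z-score and tail probability) rather than being absorbed into a single average.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A toy "database" of individual scans on a shared voxel grid (illustrative sizes).
# Unlike the photographic plate, each entry remains retrievable by its key.
database = {f"subject_{i:02d}": rng.normal(100.0, 15.0, size=(64, 64, 48))
            for i in range(20)}

group = np.stack(list(database.values()))   # (subjects, x, y, z)
group_mean = group.mean(axis=0)             # the collective description
group_std = group.std(axis=0, ddof=1)       # variation within the group

# Back-and-forth: pull one individual out of the collective again...
individual = database["subject_07"]

# ...and express its relation to the group probabilistically: a per-voxel
# z-score and the two-sided probability of a deviation at least that large.
z = (individual - group_mean) / group_std
p_deviation = 2 * stats.norm.sf(np.abs(z))

print(z.shape, float(p_deviation.min()))
```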

Personalised Population?

While the probabilistic brain atlas is a rather specific case (discussed at more length in De Rijcke and Beaulieu 2014), I’ve been noting that this logic is active in many other spheres. It points to a particular kind of “personalisation of population data” that I think is shaping how we think about individuals and populations. For example, even comparisons to ‘averages’ are increasingly personalised. I can compare my running pace or the amount of sleep I get to a personalised population of women who are the same age, weight, and height as me. In these comparisons to the norm, the individual as starting point is emphasised, and the relation to a population is personalised.
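
As a rough sketch of that kind of comparison, here is what a ‘personalised population’ might look like as a simple filter over a table plus a percentile. The column names, the matching windows (two years of age, five kilograms, five centimetres) and the toy data are my own assumptions, not how any particular fitness service actually works.

```python
import numpy as np
import pandas as pd

# A toy population table; in a real service this would come from aggregated user data.
rng = np.random.default_rng(2)
population = pd.DataFrame({
    "age": rng.integers(20, 70, size=10_000),
    "weight_kg": rng.normal(70, 12, size=10_000),
    "height_cm": rng.normal(168, 8, size=10_000),
    "pace_min_per_km": rng.normal(6.0, 1.0, size=10_000),
})

me = {"age": 42, "weight_kg": 65.0, "height_cm": 170.0, "pace_min_per_km": 5.4}

# "Personalised population": keep only the people who resemble me on the chosen factors.
peers = population[
    population["age"].between(me["age"] - 2, me["age"] + 2)
    & population["weight_kg"].between(me["weight_kg"] - 5, me["weight_kg"] + 5)
    & population["height_cm"].between(me["height_cm"] - 5, me["height_cm"] + 5)
]

# My position within that tailored group, as a percentile (lower pace = faster).
percentile = (peers["pace_min_per_km"] > me["pace_min_per_km"]).mean() * 100
print(f"{len(peers)} peers; faster than {percentile:.0f}% of them")
```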

Is this paradoxical, a personalised population? This approach to data is becoming so mundane that the tension inherent in this new social object risks going unexamined.

The exquisite personalisation that digital data enables is increasingly present in all kinds of products and services, so that knowledge of the individual in relation to a population is becoming extremely widespread. While this type of personalised data has exploded in the last 5-8 years, as we gather and share more and more data through wearables and the growing presence of sensors in our environment, both public and private, it seems to me to be deeply resonant with a modernist project. The personalisation is effected through the accumulation of factors: the more factors available for comparison, the better the profile… better in the sense of more personal.

The individual as the sum of its factors: that is the crux of constituting the modern subject. Are we back to Galton’s dream?

Yes and no. One of the main differences is the context in which the subject is created and who is doing the measuring. The relevant factors are not decreed, standardised and implemented by the state, nor are they elaborated in relation to ambitions of better government of society and to one’s activities, rights or ambitions as a citizen. The current factorisation of individuals, the necessary condition for this personalisation of population data, takes place in a corporate context where better marketing is the ultimate aim, and in relation to one’s activities, rights and ambitions as a consumer. If this is to be the shape of our knowledge of populations, I see all kinds of ramifications for the accountability and legitimacy of this knowledge.

Unravelling the paradox of the personalised population is a larger project than can be addressed here, but these lines of thought and labels can provide a fruitful framing.
