Given the current trend to include strength and conditioning as the fourth discipline in triathlon (see an article on this part of training for the Dutch national triathlon team in this month’s edition of Transition), I’ve decided to document the overlooked fifth discipline: stuff.
Scuba diving is nothing in comparison; triathlon really involves a large amount of stuff! About 8 months into this adventure, having worked hard to fit all the training into my schedule and develop new routines, I’m facing up to the little discussed fact that the material culture of triathlon requires a substantial chunk of time and energy. (And money–but I’m not going there for now.) This dawned on me a couple of days ago, when I spent half a day reorganising my closet, in order to make room for and optimize the clothing portion of the triathlon stuff. And that’s only the textile department.
So what is the fifth discipline all about? First, you need to figure out what you need and prioritize the purchases. Granted, this is something that websites, blogs and magazines do talk about–but it is only the beginning!!! Even exposé-style, let-it-all-hang-out, nitty-gritty accounts of starting to tri (including this hilarious read) DO NOT TELL YOU THE TRUTH ABOUT THE STUFF. So in the coming period, I’m going to be doing some deep, embedded research into why this aspect of triathlon is not more explicitly discussed: Why the conspiracy of silence around the triathlon stuff?
You’ve been warned, major revelations follow.
This first step of figuring out what you need means delving into the nearly endless possibilities–the cost of triathlon stuff is highly variable, and for each item there is a range to choose from that covers orders of magnitude: from 20 to 200 euros, from 200 to 2000, from 2000 to 20 000. What does this mean in practice? While I’m pretty sure that I don’t need a triathlon bike worth 20 000 euros, should I go for the cheapest option? Or will I outgrow that too quickly, and should I invest in an upper lower-range model instead? There are a few evenings’ worth of internet research going into that one. Then there are the ordering, the delivery, sizing issues and potential returns. So there you have the substantial information gathering, shopping, decision-making, and logistics of deliveries and fittings required by the stuff.
But– and I know I’m repeating myself– this is only the beginning: once your stuff, in the right size, has come in, you have to find a place to store it. Think cleaning out the shed to make room for yet another bike, going to the municipal solid waste disposal facility (“la dompe”, as we used to say when I was little), and installing some sort of hanging system so that it will be stored reasonably out of the way of the rest of the family.
Phew. Well done.
But these are necessary one-offs, you might say, investments that are part of starting up a new sport.
And you would be wrong.
Because this carefully selected gear needs to be cleaned, maintained and repaired. So, sticking with the bicycle example, you need degreaser (biodegradable), cleaner, and oil (pick the one for wet conditions, this being the Netherlands). And a super handy contraption to clean your bike chain–the contraption also needs assembling and cleaning, make no mistake. A kit to change flats (tire irons, CO2 cartridge and spare inner tube) that all fits into a special streamlined bag under your saddle (how do I secure that on, precisely?) so that you can have this stuff at hand during a race. Yes, we’re a few instructional YouTube videos further.
Need I go on? A laundry rack for the fine hand-washables has now become a permanent fixture in the bedroom. There is half a shelf in the kitchen reserved for water bottles. A largish basket in our entrance hall holds various bits of equipment needed on the way out (helmet, fluorescent bits and bobs and reflector bands, ziploc for the phone, special earphones) as though that part of the house isn’t already cluttered enough.
To end, here is a picture of some of the recent stuff, some visual evidence so you don’t have to take my word for it. Granted, this is for someone tri-ing in a relatively cool climate. Not like my cousine d’Australie who can do it all wearing a singlet, a cap and a pair of shades. Oh, and probably three kinds of sunblock.
I’ve been researching and reading up on how to solve my eating problem. It seems that this eating dilemma is going to be resolved–surprise surprise–with a golden third way. I should neither just increase the baseline nor only compensate the calories as I go, but rather do both. This means two changes in my current eating pattern: a higher daily baseline intake, and dedicated post-workout snacks for recovery.
Some of the frustration I have been feeling was based on the fact that I’m hitting the boundary of how much time and attention I want to be spending on this triathlon thing… On the other hand, I don’t want to undermine the hours I do spend training by not feeding myself properly, or having those episodes of low blood sugar where I wake up beyond famished and barely make it downstairs on wobbly legs.
So it seems that the above moves can help me avoid all three of these inconveniences. Eating more means adjusting the routine (an increase in daily calorie intake), and with a good set of post-workout snacks, I can tackle the recovery nutrition too.
I’ve been going through my cookbooks (including the Feed Zone Portables) and my little book of recipes to select some staples to tackle the recovery nutrition. I feel I need a few more options–and once found, I will list these in a post–to nail that part of the triathlon challenge.
BTW: I have now officially registered for the Groningen Speedman (1/8th distance) in June, will probably do the ‘zwemloop’ (500m swim+ 5 km run) in April and am considering another longer event for September.
In the aftermath of the American elections, discussions about knowledge and truth have been framed by concepts of post-truth and alternative facts. Even among those of us who consider ourselves experts in the processes of truth-making and knowledge creation, this framing has stirred things up, reactivating old fault lines in the STS community (see the Sismondo vs. Collins exchanges in Social Studies of Science).
There are many other, very insightful, discussions. For me, Helen Verran framed it best of all, in her examination of post-truth governance, which she recently summarized in a presentation at the ETHOS Lab, IT University in Copenhagen.
Verran characterizes post-truth governance as one of three imaginaries of truth, one based on an epistemology of market values rather than on ontological truth. In the post-truth imaginary, what matters–the knowledge that is sought out, produced and evaluated–is knowledge that will inform on the opportunities for return on investment.
This framing helps to understand the place of instruments and techniques of calculation and their dominance in global institutions that direct capital flow–whether for the alleviation of poverty, the energy transition or the survival of refugees. Think of the World Bank’s decision-making processes, think of the UN environmental agency’s evaluations, think of the procedures of EU funding for research. What is calculable can be valued–in all senses of the word–and if it can be valued, it is the kind of knowledge that can be funded. In other words, it is the kind of truth worth investing in.
For Verran, it is opposition to this line of reasoning that characterises many of the populist politicians who denounce the logic of calculation. When calculation takes on overly abstract forms, when the measures become too far removed from lived experience, when metrics are pursued within hermetic systems, a sense of alienation follows. What counts seems irrelevant to what matters.
These are the circumstances in which people appear to be ‘tired of experts’, to pick up on a particular polemic that arose during Brexit campaigns. Think for example of rebellions against technocratic approaches to visas and the appeals for pardons for long-time residents of the Netherlands: their rich, meaningful lives lived here “don’t count” in the visa and asylum procedures; the relationships and identities shaped by the many years spent in a community don’t seem to matter.
To return to Verran, and her useful framework: she puts forth that there are currently three co-existing imaginaries of truth, each with their own knowledge-making technologies, political embedding and institutional supports:
She invites us to consider that we, as critical scholars, can translate between these imaginaries, but that ‘something happens’ when we do and that we should pay attention to this. What is lost-gained in translation?
This framework and its implications are well-articulated in Verran’s lecture and well worth a watch, especially given her focus on the prevalence of the coherence theory of truth in the epistemology of IT–fitting, for a talk at ITU. It is also worth noting that these distinctions are eminently relevant in the current context, but not so new: Haraway set out a grounded epistemology in her manifesto, some decades ago.
What triggers me most in this lecture is the suggestion that translation may be not only a necessary step, but also a productive one. Verran warns us of the need for skills to translate and invites us to pay attention to what might happen when we do this translation. If we are to understand such translations, and if we are to see truth as an event unfolding on the ground, then we must also have expertise to apprehend this. This expertise lies not mainly in techniques of calculation but in the arts of listening.
A video recording of Helen Verran’s talk can be found on the Ethos Lab website.
Data seems to be everywhere in contemporary society, and energy is no exception. Precisely because of this ubiquity, it is important to consider how data is created and used, and how it circulates, so that we can understand the implications for the energy transition, whether in business, private or public life. The intersection of energy and big data also needs to be taken into account to achieve a sustainable future.
The course I’ll be teaching during my upcoming visit at the Dept of Science and Technology Studies at the University of Vienna will equip students to reflect and act on issues around big data, and will draw on conceptual and empirical work in the area of energy and sustainability. Big data is a phenomenon that affects all kinds of domains, from energy to banking to sports to neuroscience, and most of the theories and concepts discussed in the course will be of use in understanding Big Data (and other data-intensive innovations) in other domains as well.
Big data has a history going back several decades and has been shaped by tools and institutions, with the result that ‘big data’ has its own structures, biases and tendencies. It is therefore crucial to analyse how big data approaches are a specific way of creating knowledge about energy and how this knowledge is used. In particular, we will trace how new forms of measurement yield data that are then combined with particular kinds of statistics and database logics, and how an informational turn is affecting the technologies and infrastructures in the area of energy.
The topics to be addressed in the lectures are
The course will be held in March 2018 and I’ll be reporting on this blog about our learning.
In 2018, one of the activities I’ll be pursuing as visiting professor at the Department of Science and Technology Studies (blog) at the University of Vienna is giving a course on energy and big data. This will be the occasion to explore with students of the Masters programme in Science and Technology Studies how energy and big data intersect, and how this intersection matters for the energy transition. The course will run in March 2018.
Besides the traditional lecture form, we will have some practical sessions where we engage with the material culture of energy. A visit to the ASCR Demo Center at the Seestadt Technology Center is planned.
One of the outcomes of the course will build on the work of the Petrocultures collective, published in After Oil. This small book provides a toolbox to analyze narratives of transition. As such, it is a valuable introduction to discourse analysis. Furthermore, the political dimension of transition narratives is foregrounded in this approach. After Oil will provide the theoretical and methodological basis for group work and student presentations. Students will choose a particular narrative to explore, in the form of their choice.
Szeman, I., & Petrocultures Research Group. (2016). After Oil. Edmonton, Alberta: Petrocultures Research Group. http://afteroil.ca/resources-2/after-oil-book/
I am experiencing an eating problem. Not an eating disorder, not a food issue, but a problem with eating. It’s been developing these past few months and is now getting to be troublesome. There seems to be a growing disconnection between eating habits, hunger, exercise and diet. This is a multi-dimensional problem, but if I had to state it as a tension or dilemma, then it would be something like
How did it come to this? It’s a story of arts of listening and techniques of calculation, like others on this blog.
About three years ago, after noting that I was starting to gain weight just by looking at food (hello perimenopause!), I started logging my calorie intake using an app (MyFitnessPal). A few weeks of logging revealed (!) that 1200 calories a day was the baseline intake to maintain my weight. A run or swim meant I could throw in the occasional beer or chocolate chip cookie, or indulge in that match made in heaven: a piece of chocolate and a well-paired whisky. So far so good.
But since the end of November 2016, when I joined the local triathlon club GVAV Triathlon, I’ve increased the number of training sessions in my week, and these sessions have a much more intensive character. For example: a 1.5 hour training on the track in the Stadspark means 535 calories expended, not counting the bike ride there and back. This represents a whopping 45% of my regular calorie intake. So it’s not a question of enjoying an additional snack.
So how to deal with this? Should I compensate intake according to effort and have an additional 3-course meal on days with intensive training? This path involves putting quite a bit of thought and attention into eating, to keep the extra intake nutritious and well-timed. And frankly, I’m already spending a lot of time on this triathlon business. Or should I keep to a more stable, consistent baseline and simply increase my daily intake, so that it all comes out in the wash and intake and output balance out over the course of the week? This approach takes less effort and is easier to transform into a new eating pattern, with simply more calories in my diet. But it then carries the risks of piling on the weight if for some reason I’m training less that week, and of having energy dips because of large efforts on some days and not others.
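For what it’s worth, the trade-off between the two strategies can be sketched with a bit of back-of-the-envelope arithmetic, using my own rough numbers (1200 kcal baseline, ~535 kcal per hard session) and an assumed load of three hard sessions a week–the session count is hypothetical, just for illustration:

```python
# Rough sketch of the two eating strategies, with approximate numbers
# from my own logging; the weekly session count is an assumption.
BASELINE = 1200          # kcal/day to maintain weight (from a few weeks of logging)
SESSION_BURN = 535       # kcal expended in a 1.5 h track session
SESSIONS_PER_WEEK = 3    # hypothetical training load

weekly_burn = SESSION_BURN * SESSIONS_PER_WEEK

# Strategy 1: compensate per training day (extra food only on hard days).
compensate = [BASELINE + (SESSION_BURN if day < SESSIONS_PER_WEEK else 0)
              for day in range(7)]

# Strategy 2: raise the daily baseline so it comes out in the wash over the week.
raised_baseline = BASELINE + weekly_burn / 7
steady = [raised_baseline] * 7

print(sum(compensate))          # 10005 kcal over the week
print(round(sum(steady)))       # same weekly total, spread evenly
print(round(raised_baseline))   # ~1429 kcal/day, every day
```

Same weekly total either way; the difference is entirely in how lumpy the intake is–which is exactly where the energy dips and the weight-gain risk come from.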
So what is the best approach? I’ll be looking for sources these coming weeks, and perhaps even asking around. Though feeling like the ubernewb at GVAV, I’m not too keen to exhibit my ignorance there.
Average Joe and Plain Jane are two North American ways of expressing in everyday speech the typical, normal, just plain average person. In Weten Vraagt Meer Dan Meten (roughly: knowing requires more than measuring), the concluding chapter points to the tendency in professional and expert contexts to compare people via measurements, with a particular emphasis on averages.
The various contributors to this book brilliantly signal a key social and epistemic issue, as stated by the punchy title. The current situation could also be labelled the “metrics paradigm”, a way of knowing in which the average has indeed been a ubiquitous statistical description of population measurements. Within this paradigm, our knowledge of populations is taking on a new vernacular, that of the probabilistic.
To contrast these related but distinct statistical descriptions, the example of composite images of pathological features is especially useful. Specifically, in the article Voxels in the Brain, I compared the average photographs of Francis Galton with the probabilistic atlases of the ICBM/Human Brain Project.
The photographic plates on which the images of different representatives of a group were imprinted were intended as a summation of instances, produced through mechanical objectivity. The individual contributions to the image were meant to enforce essence through repetition, while erasing (or averaging) out the ad hoc variations that were accidental to the essential features. The controlled exposure of the plates (distance to the subject, time of exposure, presumably also lighting) ensured that each case was objectively and precisely added to the summary image. In this process, the images blended; individual instances were ‘lost’, distilled, merged in the service of producing the whole.
In the probabilistic atlases, there is also a calculative logic that partakes of what became the “metrics paradigm”, of which Galton was a key proponent. Collective measurements produced according to standardised practices are the way to gain insights about normality and abnormality. But the mediation of this knowledge and the apparatus used to distil data into meaningful outcomes differ in significant ways from Galton’s approach.
Understanding these differences is relevant for thinking through how we treat individuals and how we distinguish the collective (populations, pathological groups) and, increasingly importantly, how we position individuals with regards to groups.
In the average photographs, the individual measurements are pictures, and these are physically aggregated into accumulation of collective photonic traces on the photographic plate. In the brain atlases, the individual brain scans are digital measurements, organised in a space of voxels. This space is standardised, to allow the collation of measurements from different individual brain scans. The values of individual voxels can be expressed as collections of measurements—an average brain, for example, as an end product, close to the average photographs of Galton.
The mediation of digital scans is different from that of optical photography, however, and the suites of technologies associated with digital scans support a different approach to the manipulation of individual cases in relation to group outcomes.
These possibilities form a ‘database logic’ (De Rijcke and Beaulieu 2014). In digital atlases, there is a different relation to the various constitutive elements. Since the individual measurements are not aggregated physically, as in Galton’s photographs, each measurement remains available, retrievable. The individual case is not merged into the whole in producing an average (or other statistical description). This means that the individual can be retrieved from the collective, and that this relationship to the collective can also be calculated and expressed.

This is where variation from the norm becomes exquisitely manipulable in a digital context–in the sense that you can do all kinds of things with it. The relationships between the different measurements can be expressed probabilistically, making variation within a group apprehensible. An important difference is that variation can be explored in a plastic way—much more so than with the optical photographs, which are produced as the outcome of a uni-directional flow and where the individual cases become fixed in the collective representation. The database enables a back-and-forth between the individual and the collective. Specific structures in the brain can be quantitatively shown to vary more or less; individual brains can be shown to deviate more or less from the norm. Probabilities, rather than averages, have become the dominant normative descriptor.
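To make the contrast concrete, here is a toy sketch (invented numbers, nothing to do with an actual neuroimaging pipeline): in the Galton-style composite only the summary survives, while under the database logic each individual measurement stays retrievable, so any case’s relation to the group norm can itself be calculated:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for standardised brain scans: 10 'individuals',
# each measured on the same 4x4 grid of voxels.
scans = rng.normal(loc=100.0, scale=15.0, size=(10, 4, 4))

# Galton-style composite: the instances blend into a single image;
# once made, the individuals can no longer be recovered from it.
composite = scans.mean(axis=0)

# Database logic: the individual scans remain available, so the relation
# of any one case to the collective is itself calculable--for example,
# a voxel-wise z-score of individual 3 against the group.
group_mean = scans.mean(axis=0)
group_std = scans.std(axis=0)
z_individual_3 = (scans[3] - group_mean) / group_std

print(composite.shape)        # (4, 4): only the summary survives
print(z_individual_3.shape)   # (4, 4): one case, expressed against the norm
```

The point of the sketch is the asymmetry: from `composite` alone, no individual can be reconstructed, whereas keeping `scans` around makes the individual-to-collective relation a computable object in its own right.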
While the probabilistic brain atlas is a rather specific case (discussed at more length in De Rijcke and Beaulieu 2014), I’ve been noting that this logic is active in many other spheres. It points to a particular kind of “personalisation of population data” that I think is shaping how we think about individuals and populations. For example, even comparisons to ‘averages’ are increasingly personalised. I can compare my running pace or the amount of sleep I get to a personalised population of women who are the same age, weight, and height as me. In these comparisons to the norm, the individual as starting point is emphasised, and the relation to a population is personalised.
Is this paradoxical– a personalised population ? This approach to data is becoming so mundane that the tension inherent in this new social object runs the risk of going unexamined.
The exquisite personalisation that digital data enables is increasingly present in all kinds of products and services, so that knowledge of the individual in relation to a population is becoming extremely widespread. While this type of personalised data has exploded in the last 5-8 years, as we gather and share more and more data through wearables and the growing presence of sensors in our environment, both public and private, it seems to me to be deeply resonant with a modernist project. The personalisation is effected through the accumulation of factors—the more factors for comparison, the better the profile… better in the sense of more personal.
The individual as the sum of its factors: that is the crux of constituting the modern subject. Are we back to Galton’s dream?
Yes and no. One of the main differences is the context in which the subject is created and who is doing the measuring. The relevant factors are not decreed, standardised and implemented by the state, nor are they elaborated in relation to the ambitions of better government of society and to one’s activities, rights or ambitions as a citizen. The current factorisation of individuals, the necessary condition to this personalisation of population data, takes place in a corporate context where better marketing is the ultimate aim and in relation to one’s activities, rights and ambitions as a consumer. If this is to be the shape of our knowledge of populations, I see all kinds of ramifications regarding the accountability and legitimacy of this knowledge.
Unravelling the paradox of the personalised population constitutes a larger project than can be addressed here, but these lines of thought and labels can be a fruitful framing.
My review of a number of recent scholarly contributions on energy and sustainability has just appeared in De Nederlandse Boekengids (Nov 2017). While diverse and wide-ranging, the many insightful analyses contained in these publications demonstrate the power of imagination in overcoming the current energy impasse. The various authors also apply incisive critical thought to fossil culture, and to the very idea of the climate crisis and of the anthropocene.
The following works are reviewed:
In a piece taken up in Energy Humanities: An Anthology, the grande dame of Canadian letters Margaret Atwood wonders about the power of literature to fight climate change:
Could cli-fi be a way of educating young people about the dangers that face them, and helping them to think through the problems and divine solutions? Or will it just become part of the ‘entertainment business’?
Atwood’s contribution was noted by Jelmer Mommers in De Correspondent, when it first appeared. This new genre has also been noted in the Netherlands, the subject of an essay review in the Nederlandse Boekengids. As a researcher in the field of energy, I’m obviously primed, and when browsing in the bookshop on a day off, energy titles do jump out at me.
2. Het tegenovergestelde van een mens, Lieke Marsman
3. The End We Start From, Megan Hunter
4. Gold Fame Citrus, Claire Vaye Watkins
5. The Carbon Diaries, Saci Lloyd