I’m not a big fan of the Quantified Self movement – more on why in a future post. But I do think having a pulse on some data about your health and wellness can be valuable.
I also believe that our moods are both symptoms and causes of good and bad health.
Furthermore, I’m very curious about the essential relationships among software, patients, providers, and other medical systems.
Now, as far as monitoring moods goes, there is a lot more to it than experience sampling. A fundamental problem of measuring moods is the observer effect. Asking “What’s your mood right now?” runs the risk of influencing the qualitative response.
I’ll spare you my longer views on how to build the “right” app for tracking moods (and, beyond moods, the full spectral array of data required to robustly capture affective dispositions).
Anyway, I downloaded one of the many mood-tracking apps for iOS – I won’t name it because this post isn’t about the product per se. It’s about a fundamental problem with all healthcare apps that require any kind of routine or non-routine interaction from the user.
Specifically: the reminder. Initially, I found the pings to be bells of mindfulness, so to speak. The app had an elegant way of recording my mood at a given moment. Even the alarm was calm: a soft bong on a singing bowl. Good enough.
But it didn’t take long for the pings to become totally annoying. I had to turn them off. And when I went back to the history the app recorded, it didn’t provide much usable data to turn into meaningful decisions.
If a physician ordered me to download the app and stick to her prescribed protocol for reminder settings, there’s a good chance I’d go ape-berserk.
Think about patients with clinical affective disorders. It’s not a joke to conjecture that apps like these could push a patient to cycle upward into hypomania – or even full mania.
When thinking more seriously about these apps (should we call them ‘medical devices’? – see this for that question), we need to understand not only the pathologies and the surrounding environment, but also the *software*.
This is new territory. It’s not enough for us to just say “these are tools, let’s fit them into healthcare”. No, we need clear definitions and understandings.
Specifically, we need to know a lot more about the essential roles software plays in health care.
We know what a pill does. We know what an implanted device does. We know what a surgical procedure does. In short, we know a good deal about how patients relate to these traditional assets.
But software is different.
Software is pliant, fickle, ramifying, mutable.
If done right, software can be enormously beneficial.
If done wrong, it can be annoying.
Worse, it can be adverse.
Technology shouldn’t drive us nuts.
- Phil Baumann