As a follower of the “Quantified Self” work catalyzed by Kevin Kelly et al., I was eager to see Laurie Frick’s exhibit “Quantify Me” at Women & Their Work – Marsha and I hung out there last night exploring the aesthetic representation of Frick’s mind.
Using her background in engineering and technology, she explores self-tracking and compulsive organization, rendering life’s most basic patterns as color-coded charts. Steps walked, calories expended, weight, sleep, time online, GPS location, daily mood as color, and a micro-journal of food ingested are all part of her daily tracking. She collects personal data using gadgets that point toward a time when complete self-surveillance will be the norm.
Though I’m interested in the subject, I’m not into self-surveillance because it takes too much metatime. I’m a cyborg at heart, but not particularly organized about my cyborganic data. Building a project like Frick’s around that data is one way to make it more attractive to track and evaluate the processes of body and mind.
The systematic review published in the Journal of Participatory Medicine turned up 51 articles, of which 38 were suitable for inclusion in the review. The papers fell into three broad assessments of self-diagnosis: that it’s reliable and desirable (31%), that it’s not reliable but still desirable (23%), or that it’s neither reliable nor desirable (29%).
I’m sure the assessments depend to some extent on context and personality, which varied among the papers assessed. One significant problem considered in the discussion is that “self-diagnosis obviously challenges the authority of medicine, an authority which may already be in decline.”
A decline in medical authority is not necessarily a bad thing, even from medicine’s perspective. The notion of self-care and the changing nature of the doctor-patient relationship have been lauded as positive changes in the health system.
The article goes on to discuss how the patient/doctor relationship was historically “characterized by an authoritative, paternalistic doctor managing the care of the submissive patient.”
With respect to patient self-diagnosis, the modern patient clearly would participate in diagnostic decision making, but not necessarily with ease. The ability to assess quality and reliability of health information is not necessarily within the grasp of most lay people, presenting a difficulty on two fronts. On the one hand, any attempt to mediate access to information, or to recapture control of its delivery will infringe upon lay autonomy, returning the patient to the paternalistic care of the omniscient physician. On the other, consuming information without adequate understanding results in individual vulnerability for both patient health and the doctor-patient relationship.
I think this gets to the challenge we face in advocating participatory medicine, but the same challenge is inherent in any democratization of knowledge: to participate, you have to be able to make informed decisions – you have to be informed, and you have to be capable of understanding the information that’s accessible to you. The real source of empowerment may be, not just in the education and participation of the patient, but in the mutually empowering relationship of physician and patient.
The article concludes “that there are no clear binaries to guide the incorporation of self-diagnosis into contemporary health management.”
It is a complex matter, because it is a relational one, tightly bound up in the ways lay people and doctors position themselves and interact relative to one another and relative to particular disease categories.
The next stage of brainpower enhancement could be technological – through genetic engineering or brain prostheses. Because the gene variants pivotal to intellectual brilliance have yet to be discovered, boosting brainpower by altering genes may still be some way off, or even impossible. Prostheses are much closer, especially as the technology for wiring brains into computers is already being tested (see “Dawn of the cyborgs”). Indeed, futurist and inventor Ray Kurzweil believes the time when humans merge with machines will arrive as early as 2045 (New Scientist, 9 May, p 26).
In the future, will there be a sort of “class division” between those whose brains are enhanced and those who can’t afford, or don’t want, enhancement?
The guiding principle, perhaps, could be to make sure the technology is cheap enough to be open to all, much as books, computers and cellphones are today, at least in richer countries. “If this stuff can be produced cheaply and resonates with what people want to do anyway, it could take off,” says Chris Gosden, an archaeologist at the University of Oxford.
John Dupré at the University of Exeter, UK, says: “There will be a lot of evolution, but it won’t be classic neo-Darwinist changes in the genome. It will be changes in the environment, in technology and in the availability of good education. I don’t think souping up people’s genomes is the way to go.” [Link]
The result? “…it transmitted everything I’ve ever had. With almost no dates attached.” So you couldn’t tell, for instance, that the diagnosis of anxiety was related to chemotherapy-induced nausea: “… the ‘anxiety’ diagnosis was when I was puking my guts out during my cancer treatment. I got medicated for that, justified by the intelligent observation (diagnosis) that I was anxious. But you wouldn’t know that from looking at this.”
Where there was supposed to be “more info” about conditions listed, the information wasn’t particularly robust, and some conditions were listed that Dave never had.
I’ve been discussing this with the docs in the back room here, and they quickly figured out what was going on before I confirmed it: the system transmitted insurance billing codes to Google Health, not doctors’ diagnoses. And as those in the know are well aware, in our system today, insurance billing codes bear no resemblance to reality.
All this raises the question, and the point of Dave’s post: do you know what’s in your medical records? Is it accurate information? If some physician down the line was reading it, would (s)he make an accurate assessment of your history?
Think about THAT. I mean, some EMR pontificators are saying “Online data in the hospital won’t do any good at the scene of a car crash.” Well, GOOD: you think I’d want the EMTs to think I have an aneurysm, anxiety, migraines and brain mets?? Yet if I hadn’t punched that button, I never would have known my data in the system was erroneous.
Dave realized that the records transmitted to Google Health were in some cases erroneous, and overall incomplete.
So I went back and looked at the boxes I’d checked for what data to send, and son of a gun, there were only three boxes: diagnoses, medications, and allergies. Nothing about lab data, nothing about vital signs.
Dave goes on to make a rather long and magnificent post, which you should read (here’s the link again). The bottom line is that patients need working, interoperable data; patients should be accessing and reviewing their records; and there should be methods for correcting factual inaccuracies.
We’re saying this having heard that most hospitals aren’t storing data digitally, anyway. This is new territory and we know we have to go there. Salient points:
- Get the records online
- Make sure they’re accurate
- Have interoperable data standards and a way to show a complete and accurate history for any patient
- Have clarity about who can change and who can annotate records
That’s just a first few thoughts – much more to consider. If you’re interested in this subject, read e-patients.net regularly.
Participatory medicine is a phenomenon similar to citizen/network journalism where everyone, including the professionals and their target audiences, works in partnership to produce accurate, in-depth & current information items. It is not about patients or amateurs vs. professionals. Participatory medicine is, like all contemporary knowledge-building activities, a collaborative venture. Medical knowledge is a network.
Theories of human evolution have numerous forebears, and traces can be found in a plethora of sources, all suggesting that the biological human is not the final stage of evolution for Homo sapiens sapiens. The philosophy and social/cultural movement of transhumanism has developed not only from the words “trans” and “human”, but also through an understanding that the human condition is one we might go outside of to gain perspective – a process of becoming, an evolutionary transformation:
- “Trans-human” and the Italian verb “transumanare” or “transumanar” were used for the first time by Dante Alighieri in the Divina Commedia. The verb means “to go outside the human condition and perception”, and in English could be rendered “to transhumanate” or “to transhumanize”.
- T.S. Eliot wrote about the risks of the human journey in becoming illuminated as a “process by which the human is Transhumanised” in “The Cocktail Party”, Complete Poems and Plays: 1909-1950.
- Julian Huxley wrote about how humans must establish a better environment for themselves, while still remaining man, in New Bottles For New Wine, which contains the essay “Transhumanism”.
- Teilhard de Chardin wrote about intellectual and social evolution and ultra-humanity in The Future of Man.
- Abraham Maslow referred to transhumans in Toward a Psychology of Being.
- The Reader’s Digest Great Encyclopedia Dictionary defined “transhuman” as meaning “surpassing; transcending; beyond”.
- Webster’s New Universal Unabridged Dictionary defined “transhuman” as meaning “superhuman”, and “transhumanize” as meaning “to elevate or transform to something beyond what is human”.
Alina Tugend in The New York Times notes a multitasking trend since the 1990s, saying that “while multitasking may seem to be saving time, psychologists, neuroscientists and others are finding that it can put us under a great deal of stress and actually make us less efficient.” As a good case study myself, someone who’s thought about this a lot, I felt real resonance with the quote from Edward Hallowell, author of CrazyBusy: Overstretched, Overbooked, and About to Snap!: “Multitasking is shifting focus from one task to another in rapid succession. It gives the illusion that we’re simultaneously tasking, but we’re really not. It’s like playing tennis with three balls.”
…despite what many of us think, you cannot simultaneously e-mail and talk on the phone. I think we’re all familiar with what Dr. Hallowell calls “e-mail voice,” when someone you’re talking to on the phone suddenly sounds, well, disengaged.
“You cannot divide your attention like that,” he said. “It’s a big illusion. You can shift back and forth.”
The article goes on to discuss overload, fragmentation, and the neural overhead of task-switching.
Dr. Hallowell has termed this effort to multitask “attention deficit trait.” Unlike attention deficit disorder, which he has studied for years and has a neurological basis, attention deficit trait “springs entirely from the environment,” he wrote in a 2005 Harvard Business Review article, “Overloaded Circuits: Why Smart People Underperform.”
“As our minds fill with noise — feckless synaptic events signifying nothing — the brain gradually loses its capacity to attend fully and gradually to anything,” he wrote. Desperately trying to keep up with a multitude of jobs, we “feel a constant low level of panic and guilt.”
Does the brain look like a koosh ball? Indeed so, when seen via diffusion spectrum imaging, which “analyzes magnetic resonance imaging (MRI) data in new ways, letting scientists map the nerve fibers that carry information between cells.”
Neural fibers in the brain are too tiny to image directly, so scientists map them by measuring the diffusion of water molecules along their length. The scientists first break the MRI image into “voxels,” or three-dimensional pixels, and calculate the speed at which water is moving through each voxel in every direction. Those data are represented here as peanut-shaped blobs. From each shape, the researchers can infer the most likely path of the various nerve fibers (red and blue lines) passing through that spot.
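The per-voxel computation described above can be sketched in code. This is a hypothetical toy illustration only, not the researchers’ actual pipeline: it uses the simpler diffusion-tensor model (a cousin of diffusion spectrum imaging), where each voxel’s diffusion measurements are summarized as a symmetric 3×3 tensor whose principal eigenvector gives the most likely fiber direction through that voxel.

```python
import numpy as np

def principal_fiber_direction(tensor: np.ndarray) -> np.ndarray:
    """Return the unit vector along which water diffuses fastest in one voxel.

    In the diffusion-tensor model, the tensor's largest eigenvalue
    corresponds to the direction of least-restricted diffusion, which
    is taken as the most likely orientation of nerve fibers there.
    """
    eigenvalues, eigenvectors = np.linalg.eigh(tensor)  # ascending eigenvalues
    return eigenvectors[:, -1]  # eigenvector of the largest eigenvalue

# A hypothetical voxel where water diffuses mostly along the x-axis,
# as it would inside a fiber bundle running left-to-right:
voxel_tensor = np.array([
    [3.0, 0.0, 0.0],  # strong diffusion along x
    [0.0, 1.0, 0.0],  # weaker along y
    [0.0, 0.0, 1.0],  # weaker along z
])

direction = principal_fiber_direction(voxel_tensor)
print(np.round(np.abs(direction), 3))  # → [1. 0. 0.]
```

A real tractography pipeline would repeat this (or a richer orientation estimate) for every voxel and then chain neighboring directions into continuous fiber paths, the red and blue lines in the image.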