Data glitches: how to get right with your customer

My friend and colleague e-Patient Dave deBronkart explains how misinformation from a customer service agent for Verizon led to a massive international data charge on his mobile phone bill. Verizon was stellar in their handling of the problem. Key point: they didn’t try to make him wrong or make it his problem.

I won’t try to paraphrase his post – it’s pretty rich. Read it here. He explains Verizon’s checks and balances for mitigating the problem, what he did as an empowered/engaged/activated consumer, and what the implications are for empowered patients and people dedicated to improving healthcare, where data issues are common and a big deal.

Free as in beer

In the New Yorker, Malcolm Gladwell runs a sanity test on Chris Anderson’s book Free: The Future of a Radical Price.

There are four strands of argument here: a technological claim (digital infrastructure is effectively Free), a psychological claim (consumers love Free), a procedural claim (Free means never having to make a judgment), and a commercial claim (the market created by the technological Free and the psychological Free can make you a lot of money). The only problem is that in the middle of laying out what he sees as the new business model of the digital age Anderson is forced to admit that one of his main case studies, YouTube, “has so far failed to make any money for Google.”

Why is that? Because of the very principles of Free that Anderson so energetically celebrates. When you let people upload and download as many videos as they want, lots of them will take you up on the offer. That’s the magic of Free psychology: an estimated seventy-five billion videos will be served up by YouTube this year. Although the magic of Free technology means that the cost of serving up each video is “close enough to free to round down,” “close enough to free” multiplied by seventy-five billion is still a very large number. A recent report by Credit Suisse estimates that YouTube’s bandwidth costs in 2009 will be three hundred and sixty million dollars. In the case of YouTube, the effects of technological Free and psychological Free work against each other.
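Run the numbers and you can see both halves of the argument at once: $360 million divided by seventy-five billion videos works out to roughly half a cent ($0.0048) per video served – as close to free as makes no difference per view, and an enormous bill in aggregate.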

Note that Gladwell’s review is available free online, and Anderson’s book costs $17.19 via Amazon.

Open Architecture Products

Andrew Lippman on “open architecture products,” and viewing customers as partners who will contribute to the evolution and design of the products they buy and use.

The stability that we associated with products is gone. And so if you try to base your business on a product that you think will last a long time, then I suspect you’re likely to be in trouble — because society will change more rapidly….The young are not satisfied with products. They’re satisfied with things they can build into their own products. And so the challenge is to build those as open-architecture products. (From MIT Sloan Review)


Crowdsourcery in Austin

The Austin Web Crowd. Photo by John Anderson, Austin Chronicle

Several local web thinkers met recently with the IT staff from the City of Austin to discuss the best approach for redeveloping the city’s web site. The Austin Chronicle has a good article about the project, focusing especially on my pal whurley and his efforts to help crowdsource a more innovative approach.

To kick-start the redesign process, Hurley initiated OpenAustin, a website where users can vote ideas for the city website up or down and submit their own. They range from the pedestrian (the currently top-rated suggestion is to pay Austin Energy bills online, followed by a system to list road closures across the city) to transparency-related (being able to track applications for city contracts, putting City Council meetings and video online faster) to the more arcane techy (“machine-readable data feeds” for all city info: crime, restaurant health inspections, etc.; and similarly, publishing every piece of city data as RSS feeds).

Hurley acknowledges OpenAustin is currently in the idea-generation stage. The final form of OpenAustin’s assistance in the redesign is still very nebulous – and of course, entirely dependent on the city’s response. Still, Hurley foresees several ways it could take place. “Maybe we don’t take all the RFPs [requests for proposals] when we do it, and maybe we don’t take everything and outsource it to the community,” he says. But he thinks OpenAustin could “help coordinate the community effort” by conceiving the redesign so volunteers could create mash-ups of applications: for example, overlaying a map of all the city’s bike routes with a map of free Wi-Fi hot spots or early voting locations.
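To make the mash-up idea concrete, here’s a minimal sketch of what a volunteer could build once the city publishes machine-readable feeds. The feed URLs and field names below are hypothetical – stand-ins for whatever Austin actually exposes – but the logic is the whole trick: fetch two datasets and join them on location.

```python
import json
import math
from urllib.request import urlopen

# Hypothetical feed URLs -- placeholders for whatever machine-readable
# endpoints the city eventually publishes.
BIKE_ROUTES_URL = "https://data.example.gov/bike-routes.json"
WIFI_HOTSPOTS_URL = "https://data.example.gov/wifi-hotspots.json"

def fetch(url):
    """Download a JSON feed and return the parsed records."""
    with urlopen(url) as response:
        return json.load(response)

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def hotspots_near_routes(routes, hotspots, radius_km=0.25):
    """Return the hotspots that sit within radius_km of any bike-route point."""
    nearby = []
    for spot in hotspots:
        for route in routes:
            # Assumed layout: each route has a "points" list of (lat, lon) pairs.
            if any(distance_km(spot["lat"], spot["lon"], lat, lon) <= radius_km
                   for lat, lon in route["points"]):
                nearby.append(spot)
                break  # found a match; move on to the next hotspot
    return nearby

if __name__ == "__main__":
    routes = fetch(BIKE_ROUTES_URL)
    hotspots = fetch(WIFI_HOTSPOTS_URL)
    for spot in hotspots_near_routes(routes, hotspots):
        print(spot["name"], spot["lat"], spot["lon"])
```

The specifics don’t matter; the point is that once the data is out there in machine-readable form, this kind of overlay is an afternoon project for a volunteer rather than a line item in an RFP.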

My suggestion to the city was to scale the project to manageable chunks. Have an initial RFP for a framework so that technology and presentation are relatively coherent, but build the framework with the flexibility to allow City departments to RFP and manage their own subprojects.

Everyone in the conversation seems to agree that the city should make its data as accessible as possible, so that in addition to the city’s own site, innovative external applications could be developed that find useful ways to aggregate and analyze it… this is what the Obama administration is shooting for at the Federal level.

One other note: got this from Matt Esquibel at the City of Austin:

We also wanted to invite you to a public forum on Wednesday June 17, 2009 at the Carver Museum from 6:00 – 7:30 p.m. to discuss the AustinGO project moving forward. We hope to provide insight into the direction of the project and listen to the thoughts and ideas of the community in attendance. We plan to have more public forums in the coming months and will provide more information as dates, times and formats are determined.

Technoutopia socialism

Kevin Kelly talks about “social media” and social-ism, saying “the frantic global rush to connect everyone to everyone, all the time, is quietly giving rise to a revised version of [the s-word].” This is a new brand of socialism that “operates in the realm of culture and economics, rather than government—for now.”

Instead of gathering on collective farms, we gather in collective worlds. Instead of state factories, we have desktop factories connected to virtual co-ops. Instead of sharing drill bits, picks, and shovels, we share apps, scripts, and APIs. Instead of faceless politburos, we have faceless meritocracies, where the only thing that matters is getting things done. Instead of national production, we have peer production. Instead of government rations and subsidies, we have a bounty of free goods.

He uses the word socialism, he says, “because technically it is the best word to indicate a range of technologies that rely for their power on social interactions.”

Heralds of the transition:

How close to a noncapitalistic, open source, peer-production society can this movement take us? Every time that question has been asked, the answer has been: closer than we thought. Consider craigslist. Just classified ads, right? But the site amplified the handy community swap board to reach a regional audience, enhanced it with pictures and real-time updates, and suddenly became a national treasure. Operating without state funding or control, connecting citizens directly to citizens, this mostly free marketplace achieves social good at an efficiency that would stagger any government or traditional corporation. Sure, it undermines the business model of newspapers, but at the same time it makes an indisputable case that the sharing model is a viable alternative to both profit-seeking corporations and tax-supported civic institutions.

Who would have believed that poor farmers could secure $100 loans from perfect strangers on the other side of the planet—and pay them back? That is what Kiva does with peer-to-peer lending. Every public health care expert declared confidently that sharing was fine for photos, but no one would share their medical records. But PatientsLikeMe, where patients pool results of treatments to better their own care, prove that collective action can trump both doctors and privacy scares. The increasingly common habit of sharing what you’re thinking (Twitter), what you’re reading (StumbleUpon), your finances (Wesabe), your everything (the Web) is becoming a foundation of our culture. Doing it while collaboratively building encyclopedias, news agencies, video archives, and software in groups that span continents, with people you don’t know and whose class is irrelevant—that makes political socialism seem like the logical next step.

I don’t know that I would make that prediction, and while I’m swimming in all this, I’m feeling a bit circumspect about the future (which, incidentally, isn’t here yet and never will be, despite what you’ve heard.) We’re increasingly dependent on computers, for instance, and global energy shortages or outages could be problematic (better crank out a lot more thin-film photovoltaics). But it’s cool to feel a bit of utopian optimism, if only briefly, between newscasts.

Mindcasting

Jay Rosen made a rich Tumblr post about mindcasting and Twitter. Mindcasting is Jay’s term for his posting style, where his goal is a high signal-to-noise ratio… and he’s a very active conversation engine. This post has notes on the form… e.g.

The act of building an editorial presence in Twitter by filtering, processing and structuring the flow of information that moves through the medium using one’s follow list, journalistic sensibilities and individual right to publish updates.

Also “It’s true that mindcasting is a pretentious term. People have always told me that certain things I do are pretentious. Every occupation has its hazards, right? What saves mindcasting from being totally so is that it’s an alternative to an even more pretentious notion: lifecasting.” He ends with a great Julian Dibbel quote:

It may begin as just a seed of an idea — a thought about the future of online media, say — tossed out into the germinating medium of the twitterverse, passed along from one Twitter feed to another, critiqued or praised, reshaped and edited, then handed back for fleshing out on a blog, first, and then, perhaps, in a book. It’s not that tweet-size sparks of insight haven’t always been part of the media ecosystem, in other words. It’s just that Twitter now has given them a vastly more exciting social life.

Read Jay’s whole post – my excerpts here don’t do it justice. Just registering my affinity. I really like the idea of diving into the information flow and working it to accelerate its quality. (Wondering if I should add Tumblr as yet another venue for writing/blogging/conversation.)

Open Government on the Internet

Friday’s “Open Government on the Internet” conference at the LBJ Library opened with Bill Bradley, who discussed his (and President Obama’s) profound interest in making government more accessible. The conference explored various aspects of government openness and transparency, but at the core of the conversation is an intention – affirmed by Federal CIO Vivek Kundra – to put all data online and to open up government databases, making as much data as possible accessible via clear and usable application programming interfaces. Bradley talked about data and information as raw material for sculpting democratic outcomes. For example, he supports a searchable Federal budget with links from each budget item to the appropriations bill, the authorization bill, information about the committee that passed the bill, who testified, and whom they represented – a series of connections that would let you see, at a granular level, who did what to affect spending.
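As a thought experiment only – no such API exists yet, and the endpoint and field names below are invented for illustration – the granular, linked budget Bradley describes would let a citizen trace spending back to legislation and testimony with a few lines of code:

```python
import json
from urllib.request import urlopen

# Hypothetical open-data endpoint -- illustrative only.
BUDGET_API = "https://api.example.gov/budget/2009/items.json"

def fetch_budget_items(url=BUDGET_API):
    """Download the (hypothetical) machine-readable budget feed."""
    with urlopen(url) as response:
        return json.load(response)

def trace_item(item):
    """Follow a budget line back to the legislation and people behind it."""
    return {
        "program": item["program"],                        # assumed fields
        "amount": item["amount"],
        "appropriations_bill": item["appropriations_bill"],
        "authorization_bill": item["authorization_bill"],
        "committee": item["committee"],
        "witnesses": [w["name"] for w in item.get("witnesses", [])],
    }

if __name__ == "__main__":
    # Example question: which items over $1 billion came through a given committee?
    for item in fetch_budget_items():
        if item["amount"] > 1_000_000_000 and item["committee"] == "Appropriations":
            print(json.dumps(trace_item(item), indent=2))
```

That’s the “who did what, at a granular level” question, answerable by anyone with a laptop instead of a staff of researchers.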

Bradley said we have the opportunity to leverage the ideas of enormously talented people throughout the U.S. via crowdsourcing or “ideastorming.” Truly open government is not just about providing information from government sources to the people, but also about flowing ideas back from the people to the government.

Most people, he said, are not extremely ideological. Their political views may be more complex, not well summarized by categories like “right” or “left,” “Republican” or “Democrat.” He imagined individuals having personal political pages that include more detail about their views, and from the contents of these pages, you could build constituencies around specific issues (the kinds of “adhocracies” that I envisioned in 1997, when I wrote “Nodal Politics” as one chapter of a never-published book about the Internet’s potential as a platform for democracy and political activism).

Following Bradley, there was a keynote by Vivek Kundra, the new Federal CIO, who directs Federal technology policy and strategy. The Federal Government has a wealth of information, he says, that taxpayers paid for and have a right to access and use. Obama issued memoranda on transparency and open government as a first move after he took office – it was his highest priority. We have the Freedom of Information Act that LBJ signed when he was president, and we should assume that transparency and accessibility of information is the default, not an exception that requires a special request.

Note the Human Genome Project, which put the genome data in the public domain. This resulted in a global explosion of innovation in treatment development, with over 500 new drugs. Also consider the democratization of satellite information and its impact on navigation and mapping.

By opening up and making data available across disciplines, we can tap into the ingenuity of the people. The true value of technology and data lies at the intersection of multiple disciplines. Crowdsourcing is powerful – the crowd might see patterns the public sector lacks the resources or attention to spot. He’s also looking at innovation at the grassroots level and at the lower cost of technologies.

Kundra is looking at agencies that have led the way, and at new ways to leverage networks. It’s not enough to merely “webify” existing institutions; we should fundamentally change processes. There are 24,000 web sites within the Federal government, but we need to think about moving government and services where the people are – systems like Facebook, Twitter, Craigslist, eBay, etc., where there’s high adoption. How do we move our applications where the people are, and fit them to context? We need to provide services in the contexts people are most comfortable with.

Wayne Caswell asked about broadband objectives. Kundra says the intention is to aggressively ensure that we extend broadband access into rural and underserved communities. Services should exist across the entire spectrum, and solutions should work everywhere.

Dennis Mick asked about the possibility of intrusive surveillance. There are robust privacy committees within the CIO council and within the White House. The idea is to bake privacy protection into technologies as they’re developed. They’re working closely with the General Services Administration to negotiate model agreements and ensure privacy protection.

Sharron Rush asked about accessibility. The Feds have rules about accessibility of web sites, yet not all the Fed sites meet accessibility standards (e.g. recovery.org). Kundra says part of the problem is in failing to address accessibility up front and bake it into the procurement process and the architecture of solutions. This will be corrected to make sure no one is disenfranchised.

Gary Chapman, Director of the LBJ School’s 21st Century Project and an organizer of the event, spoke next, saying that the discourse and vocabulary of enterprise computing is being challenged by a new discourse and thinking about consumer technology. What is the bridge between enterprise computing and the new consumer model that is encroaching on institutions?

The Evolving Brain

The human brain is always evolving, and that evolution is accelerating. Consider “superplasticity,” described as “the ability of each mind to plug into the minds and experiences of countless others through culture or technology.”

The next stage of brainpower enhancement could be technological – through genetic engineering or brain prostheses. Because the gene variants pivotal to intellectual brilliance have yet to be discovered, boosting brainpower by altering genes may still be some way off, or even impossible. Prostheses are much closer, especially as the technology for wiring brains into computers is already being tested (see “Dawn of the cyborgs”). Indeed, futurist and inventor Ray Kurzweil believes the time when humans merge with machines will arrive as early as 2045 (New Scientist, 9 May, p 26).

In the future, will there be a sort of “class division” between those whose brains are enhanced, and those who don’t want or can’t afford enhancement?

The guiding principle, perhaps, could be to make sure the technology is cheap enough to be open to all, much as books, computers and cellphones are today, at least in richer countries. “If this stuff can be produced cheaply and resonates with what people want to do anyway, it could take off,” says Chris Gosden, an archaeologist at the University of Oxford.

John Dupré at the University of Exeter, UK says “There will be a lot of evolution, but it won’t be classic neo-Darwinist changes in the genome. It will be changes in the environment, in technology and in the availability of good education. I don’t think souping up people’s genomes is the way to go.” [Link]

Personal health records: the data’s not in (really)

A PHR (personal health record) system like Google Health supposedly “puts you in charge of your health information,” but where do you start? e-Patient Dave of e-patients.net decided to take the plunge and move his considerable (after bouts with cancer) health data to Google’s system. His hospital already supported easy upload of patient records to Google Health – a matter of specifying options and clicking a button at the patient portal.

The result? “…it transmitted everything I’ve ever had. With almost no dates attached.” So you couldn’t tell, for instance, that the diagnosis of anxiety was related to chemotherapy-induced nausea: “… the ‘anxiety’ diagnosis was when I was puking my guts out during my cancer treatment. I got medicated for that, justified by the intelligent observation (diagnosis) that I was anxious. But you wouldn’t know that from looking at this.”

Where there was supposed to be “more info” about conditions listed, the information wasn’t particularly robust, and some conditions were listed that Dave never had.

I’ve been discussing this with the docs in the back room here, and they quickly figured out what was going on before I confirmed it: the system transmitted insurance billing codes to Google Health, not doctors’ diagnoses. And as those in the know are well aware, in our system today, insurance billing codes bear no resemblance to reality.

All this raises the question, and the point of Dave’s post: do you know what’s in your medical records? Is it accurate information? If some physician down the line was reading it, would (s)he make an accurate assessment of your history?

Think about THAT. I mean, some EMR pontificators are saying “Online data in the hospital won’t do any good at the scene of a car crash.” Well, GOOD: you think I’d want the EMTs to think I have an aneurysm, anxiety, migraines and brain mets?? Yet if I hadn’t punched that button, I never would have known my data in the system was erroneous.

Dave realized that the records transmitted to Google Health were in some cases erroneous, and overall incomplete.

So I went back and looked at the boxes I’d checked for what data to send, and son of a gun, there were only three boxes: diagnoses, medications, and allergies. Nothing about lab data, nothing about vital signs.

Dave goes on to make a rather long and magnificent post, which you should read (here’s the link again). The bottom line is that patients need working, interoperable data; patients should be accessing and reviewing their records; and there should be methods for correcting factual inaccuracies.

We’re saying this having heard that most hospitals aren’t storing data digitally, anyway. This is new territory and we know we have to go there. Salient points:

  • Get the records online
  • Make sure they’re accurate
  • Have interoperable data standards and a way to show a complete and accurate history for any patient
  • Have clarity about who can change and who can annotate records

That’s just a first few thoughts – much more to consider. If you’re interested in this subject, read e-patients.net regularly.
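To make the “make sure they’re accurate” point concrete, here’s a minimal sketch – not any real PHR export format; the record layout and codes are made up for the example, with labels echoing Dave’s post – of the kind of sanity check a patient-facing tool could run: flag entries with no date attached, and flag billing codes that haven’t been confirmed as actual diagnoses.

```python
from datetime import date

# Toy records -- real PHR exports (CCR/CCD, etc.) are far richer; the codes
# here are invented, and the labels echo conditions mentioned in Dave's post.
records = [
    {"code": "BC-101", "label": "Nausea with vomiting", "date": date(2007, 4, 2), "confirmed": True},
    {"code": "BC-202", "label": "Anxiety", "date": None, "confirmed": False},
    {"code": "BC-303", "label": "Cerebral aneurysm", "date": None, "confirmed": False},
]

def audit(records):
    """Yield human-readable warnings about suspect entries."""
    for r in records:
        if r["date"] is None:
            yield f"{r['label']}: no date attached -- impossible to tell when or in what context this applied"
        if not r.get("confirmed", False):
            yield f"{r['label']} (code {r['code']}): billing code only -- not a clinician-confirmed diagnosis"

if __name__ == "__main__":
    for warning in audit(records):
        print(warning)
```

It’s deliberately trivial – the real work is in the interoperable standards above – but it illustrates why patient access matters: the person with the most at stake is also the one best positioned to catch the errors.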

National Broadband Plan (hooray)

Finally – and it’s a historic step – the FCC is working on a national broadband plan. “Why: President Obama considers broadband to be basic infrastructure, like electricity and water, and wants the FCC to do what it can to help drive adoption rates across the USA.” This is what the Freedom to Connect crowd (including yours truly) and the community networking activists have been saying for years. It was delayed because “under President Bush, broadband was considered a luxury, and received light government attention as a result.”

Algae Biofuels Summit

Here’s a conference for “communities in the algae biofuels value chain,” including “power plant operators, industrial carbon generators, algae technology developers, algae equipment suppliers, algae project developers, biofuels refiners, financiers, carbon market players, oil companies, airlines, aircraft and engine manufacturers.”

…the goal of the Summit is to provide a forum where the algae community can discuss and learn how to build the links within the value chain that are necessary to make the algae biofuels industry a reality.