Connectivism

Have you ever thought about how completely irrelevant structured learning is? Indeed. “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” – Alvin Toffler. The video below advocates a change in how we learn – network-centric, personal, based on your context rather than on some institution’s agenda. (Thanks to Judi Clark for sending me the link to this video.)

Transitional Internet

I continue to be focused on the future of the Internet and aware of divergent paths. In the late 2000s, following a period of digital and media convergence, and given broad adoption of high-speed (broadband) network connectivity, the Internet has become an environment for mixed media and marketing. It is increasingly centralized as a platform serving a global business engine: a mashup of business-to-business services and business-to-consumer connections, a mashup of all the forms of audio, text, and video communication and media in a new, more social and participatory context. The faceless consumer now has an avatar, an email address, and a feedback loop.

The sense of the Internet as a decentralized free and open space has changed, but there are still many advocates and strong arguments for approaches that are bottom-up, network-centric, free as in freedom (and sometimes as in beer), open, collaborative, decentralized. It’s tempting to see top-down corporate approaches vs bottom-up “free culture” approaches as mutually exclusive, but I think they can and will coexist. Rather than make value judgements about the different approaches, I want to support education and thinking about ethics, something I should discuss later.

Right now I want to point to a collaboration forming around the work of Venessa Miemis, who’s been curating trends, models, and projects associated with the decentralized Internet model. Venessa and her colleagues (including myself) have been discussing how to build a decentralized network that is broadly and cheaply accessible and that is more of a cooperative, serving the public interest rather than a narrower set of economic interests.

I’ll be focusing on these sorts of projects here and in my talks on the future of the Internet. Meanwhile, here are pointers to a couple of Venessa’s posts that are good overviews of what I’m talking about. I appreciate her clarity and focus.

There’s also the work of Michel Bauwens and the P2P Foundation, which I’ve followed for several years. The P2P Foundation wiki has a number of relevant pages.

4chan and anonymity

Marc Savlov interviewed me for this article in the Austin Chronicle, generally about anonymity on the Internet and specifically about 4chan. I hope I made the point that anonymity is a wicked problem (what is identity, anyway?), and that it’s sometimes a solution (as in police states, viva Tor). Coincidentally, I had interviewed the phenomenal Tom Jennings yesterday for Plutopia News Network, and when he saw the Chronicle article, he sent this link to a paper he’d written about 4chan: “The effect of the code mechanisms chosen by 4chan encloses a robust and stable culture of a form and shape not possible in more finely controlled environments, and that code is deceptively simple.” Christopher Poole, aka moot, creator of 4chan, will deliver a keynote at 2pm Sunday, March 13, at SXSW Interactive.

A note about “network neutrality”

This is something I posted in the “state of the world” conversation with Bruce Sterling on the WELL…

I give talks on the history and future of media, and on the history, evolution, and future of the Internet. I gave the talk this week to a small group gathered for lunch in a coworking space here in Austin, and after hearing it, a technologist I know, Gray Abbott, suggested that I say more about the coming balkanization of the network as the most likely scenario. The Internet is a network of networks that depends on cooperative peering agreements – I carry your traffic and you carry mine. The high-speed Internet is increasingly dependent on the networks of big providers, the telcos and cable companies like AT&T, Sprint, Verizon, Time Warner, and Comcast. They all see the substantial value supported by their networks and want to extract more of it for themselves. They talk about the high cost of bandwidth as a rationale for charging more for services – or metering services – but I think the real issue is value. When you see Google and Facebook and Netflix making bundles of money using your pipes, you want a cut. And if you’ve also tried to get into the business of providing content, it’s bothersome to see your network carrying other, competing content services, including guerrilla media distribution via BitTorrent.

However, higher costs could become a barrier. The value of the Internet is a network effect – it’s more valuable as more people use it to do more things; cost as a barrier to entry could reduce participation and diminish the Internet’s value. Killing the golden goose, so to speak. Low cost barriers also stimulate innovation. If I want to create a television series, aside from production costs, I also have to find a broadcast or cable network that will carry it – I have to get permission, in effect, because broadcast and cable channels are relatively scarce and relatively expensive to get into. Larry Lessig pointed out, in his review of The Social Network, the real story of Mark Zuckerberg: that he could build Facebook from nothing without asking anybody’s permission.

“Network neutrality” is about limiting restrictions on use and access, not necessarily about controlling cost, though it might militate against “toll roads” on the information superhighway. According to the Wikipedia article on net neutrality, “if a given user pays for a certain level of Internet access, and another user pays for the same level of access, then the two users should be able to connect to each other at the subscribed level of access.” That doesn’t really suggest a low cost of entry, and even with “neutral” networks (or, as we prefer to say these days, an Open Internet), the overall cost of access could increase, or there could be metering that would constrain some sorts of activities, like video transmission. Right now I have unmetered, flat-rate access, so I can watch all the Netflix and Hulu I want without additional cost.

Time Warner and AT&T U-verse customers are dropping their cable television services because they can download all the programs they want via the Internet service from the same company. I can imagine companies looking at the stats – more and more customers dropping the service, more and more bandwidth dedicated to streaming and BitTorrent. It’s no wonder these companies are feeling cranky, and it’s no wonder they’re talking about finding ways to charge more money. But this is what their customers want.

This isn’t really about the Internet as an information service or a platform for sharing and collaboration. This is about the Internet as a channel for media, an alternative to cable television. One fear many of us have had is that big network companies will push that interpretation. “It’s time for the Internet to grow up, we want to make a real network with real quality of service, we want to make it more like our cable networks.” Which are more tightly controlled, of course, and carry only the content the providers agree to carry.

Are there “master keys” to the Internet?

Interesting article in the New York Times, “How China meddled with the Internet,” based on a report to Congress by the United States-China Economic and Security Review Commission. The Times article describes an incident in which IDC China Telecommunication broadcast inaccurate Web traffic routes for about 18 minutes back in April. According to the Times, Chinese engineering managers said the incident was accidental but didn’t really explain what happened, and “the commission said it had no evidence that the misdirection was intentional.” So was it just a technical screwup, the kind that happens all the time, no big deal? Or should we be paranoid?
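
The mechanics of that kind of misdirection are easier to see with a toy sketch. This is my own illustration, not anything from the commission’s report: routers generally prefer the most specific prefix they’ve been told about, so a bogus, more-specific route announcement can pull traffic toward whoever announces it. The prefixes and AS numbers below are documentation and private-range examples, not details from the April incident.

```python
import ipaddress

# Toy routing table: advertised prefix -> origin network. The AS numbers and
# prefixes are made up (documentation and private ranges), purely illustrative.
# Real BGP route selection weighs many attributes; longest-prefix match is the
# piece that matters here.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
    # A misbehaving network announces a more-specific slice of the same space.
    ipaddress.ip_network("203.0.113.0/25"): "AS64511 (bogus origin)",
}

def select_route(destination: str) -> str:
    """Pick the most specific advertised prefix containing the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(select_route("203.0.113.10"))  # -> AS64511 (bogus origin)
```

Withdraw the bogus /25 and traffic falls back to the legitimate /24 – consistent with an 18-minute anomaly that ends as quietly as it began.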

No doubt there’s a lot to worry about in the world of cyber-security, but what makes the Times article interesting is this contention (not really attributed to any expert):

While sensitive data such as e-mails and commercial transactions are generally encrypted before being transmitted, the Chinese government holds a copy of an encryption master key, and there was speculation that China might have used it to break the encryption on some of the misdirected Internet traffic.

That does sound scary, right? China has an “encryption master key” for Internet traffic?

Except it doesn’t seem to be true. Experts tell me that there are no “master keys” associated with Internet traffic. In fact, conscientious engineers have avoided creating that sort of thing. They use public key encryption.
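
To make the “no master key” point concrete, here’s a minimal sketch of public-key encryption using Python’s cryptography package – my example, not anything from the Times or the commission report. Each party generates its own keypair; anyone can encrypt to the public key, but only the holder of the matching private key can decrypt. There is no central master key for any government to hold a copy of.

```python
# Minimal public-key encryption sketch (pip install cryptography). Illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each party generates its own keypair; no central authority holds a master key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# RSA-OAEP padding, a standard construction for RSA encryption.
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"sensitive traffic", oaep)

# Only the matching private key recovers the plaintext.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"sensitive traffic"
```

(In practice web traffic uses TLS, which layers symmetric session keys on top of public-key exchanges, but the point stands: keys are generated per party and per session, not issued from a master.)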

So why would the Times suggest that there’s a “master key”?

Tim Wu and the future of the Internet

Tim Wu explains the rise and fall of information monopolies in a conversation with New York Times blogger Nick Bilton. Author of The Master Switch: The Rise and Fall of Information Empires (Borzoi Books), Wu is known for the concept of “net neutrality.” He’s been thinking about this stuff for several years, and has as much clarity as anyone (which is still not much) about the future of the Internet.

I think the natural tendency would be for the system to move toward a monopoly control, but everything that’s natural isn’t necessarily inevitable. For years everyone thought that every republic would eventually turn into a dictatorship. So I think if people want to, we can maintain a greater openness, but it’s unclear if Americans really want that…. The question is whether there is something about the Internet that is fundamentally different, or about these times that is intrinsically more dynamic, that we don’t repeat the past. I know the Internet was designed to resist integration, designed to resist centralized control, and that design defeated firms like AOL and Time Warner. But firms today, like Apple, make it unclear if the Internet is something lasting or just another cycle.

Advocating for the Open Internet

“Net neutrality” and “freedom to connect” might be loaded or vague terms; the label “Open Internet” is clearer, more effective, and in no way misleading. A group of Internet experts and pioneers submitted a paper to the FCC that defines the Open Internet, explains how it differs from networks dedicated to specialized services, and why that distinction is important. The Internet is a general-purpose network for all, and it can’t be appreciated (or properly regulated) unless this point and its implications are well understood. I signed on (late) to the paper, which is freely available at Scribd, and which is worth reading and disseminating even among people who don’t completely get it. I think the meaning and relevance of the distinction will sink in, even with those who don’t have deep knowledge of the Internet and, more generally, computer networking. The key point is that “the Internet should be delineated from specialized services specifically based on whether network providers treat the transmission of packets in special ways according to the applications those packets support. Transmitting packets without regard for application, in a best efforts manner, is at the very core of how the Internet provides a general purpose platform that is open and conducive to innovation by all end users.”
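
To illustrate the distinction the paper draws – this is my own toy sketch, not the authors’ code or wording – a best-efforts network forwards packets without looking at what application they belong to, while a specialized service inspects the application and treats some packets specially.

```python
from collections import deque

# Toy "packets": (application, payload). Application names are illustrative only.
packets = [("video", "frame-1"), ("email", "msg-1"), ("voip", "sample-1"), ("web", "page-1")]

def best_effort(incoming):
    """Open Internet model: forward in arrival order, blind to application."""
    queue = deque(incoming)
    return [queue.popleft() for _ in range(len(queue))]

def specialized(incoming, favored=frozenset({"video", "voip"})):
    """Specialized service: packets from favored applications jump the queue."""
    fast = [p for p in incoming if p[0] in favored]
    slow = [p for p in incoming if p[0] not in favored]
    return fast + slow

print(best_effort(packets))   # arrival order preserved, application ignored
print(specialized(packets))   # video and voip forwarded ahead of everything else
```

The argument in the paper is that the first behavior is what makes the Internet a general-purpose platform open to innovation by all end users; the second belongs to specialized services and should be treated as such.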

Steven Berlin Johnson: good ideas

On October 20, I caught Steven Johnson’s talk at Book People in Austin. I’ve known Steven since the 90s – we met when he was operating Feed Magazine, one of the early web content sites. After Feed, Steven created a second content site, actually more of a web forum, called Plastic.com.

Starting with Interface Culture, Steven has mostly written books, and is generally thought of as a science writer, though I think of him as a writer about culture as well. His book Emergence: The Connected Lives of Ants, Brains, Cities, and Software was a major influence for those of us who were into social software and the percolation of “Web 2.0.” I related it to my earlier “nodal politics” thinking, and it influenced the collaborative paper created by Joi Ito et al., called “Emergent Democracy.” Steven wrote an analysis of the Howard Dean Presidential Campaign for the book I edited with Mitch Ratcliffe, Extreme Democracy.

When Steven wrote The Ghost Map, he came to realize that the story of how the 1854 London cholera epidemic was traced was more complicated than he had thought. John Snow is credited with identifying the source of the cholera (in water, not airborne as many thought), but he wasn’t working in a vacuum. Among others, Reverend Henry Whitehead assisted him, and it was Whitehead who located the index patient or “patient zero” for the outbreak, a baby in the Lewis House at 40 Broad Street. Ultimately the discovery that cholera was water-borne, and that the 1854 outbreak was associated with a specific water pump in London, was collaborative, a network affair. Realizing this, Steven wanted to know more about the origin of great ideas and the spaces that make them possible in both human and natural systems.

Before he got to his current book, Where Good Ideas Come From, Steven looked at the history of ecosystem science and found himself studying and writing about the life of Joseph Priestley, and publishing The Invention of Air. Ostensibly about Priestley, his discovery that plants produce oxygen, and his other contributions to science and nascent American democracy, the book is also about the conditions that contribute to innovation in science and elsewhere, including, per a review in The New Yorker, “the availability of coffee and the unfettered circulation of information through social networks.”

These books form a trilogy about world-changing ideas and the environments that make them possible. From what Steven learned in researching and writing them, he’s ready to dismantle the idea of the single scientist or thinker overturning common paradigms with a eureka moment or flash of insight. That flash is the culmination of a longer process, 10-20 years of fragments of ideas, hunches that percolate and collide with other hunches. And there’s usually no thought of the impact of an idea: Tim Berners-Lee didn’t set out to create the World Wide Web; he was just scratching his own itch.

Good or great ideas emerge from what Steven calls “liquid networks,” clusters of people hanging out and talking, sharing thoughts in informal settings, often in coffee houses. The people who innovate and produce good ideas tend to be eclectic in their associations – they don’t hang out only with people who are just like them; they’re exposed to diverse thinking.

This aligns with my own thinking that we should have idea factories that bring these diverse sets of people together… this is what I’ve seen as the real promise of coworking facilities and various other ways of bringing creative mixes of people to rub their brains together and produce sparks.

Here are three stray thoughts that came up in Q&A that I really liked:

  1. Error and noise are important parts of the process of discovery. You can’t advance without ’em.
  2. A startup is a search algorithm for a business model.
  3. There’s a thin line between saturation/overload and productive collision.

Photo by Jesús Gorriti

Central Texas World Future Society: Future of the Internet

Here’s a presentation I delivered to the Central Texas World Future Society, used as a framework for a discussion of scenarios for the network and for the application layer.

Digital Habitats/technology stewardship discussion

Nancy White, John D. Smith, and Etienne Wenger have written a thorough, clear, and compelling overview of the emerging role of technology stewardship for communities of practice (CoPs). They’re leaders in thinking about CoPs, they’re smart, and they’re great communicators. Their book is Digital Habitats: Stewarding Technology for Communities, and it’s a must-read if you’re involved with any kind of organization that uses technology for collaboration and knowledge management. And who isn’t?

It’s my privilege to lead a discussion with Nancy, John, and Etienne over the next two weeks at the WELL. The WELL, a seminal online community (where Nancy and I cohost discussions about virtual communities), is a great fit for this conversation. You don’t have to be a member of the WELL to ask questions or comment – just send an email to inkwell at well.com.

Information spill?

We’ve all zeroed in on a set of established platforms for interaction, primarily Facebook and Twitter. Icons linking to Facebook and Twitter pages are standard on many web sites now – suggesting a consensus about where people are hanging out. Many experience the Internet through one or both of these platforms, and a few scattered others (e.g., YouTube, Yelp, blogs, etc.). Increasingly we see world-views based on shared content and hyperlinks. As it becomes the new normal, social media is just media; no need to make the distinction. We can end the obsession with tools and forms on the production side and focus on content. On the consumption or demand side, we have a problem of abundance, of having more quality content than we can track and manage. Filters are crucial, but imperfect. Maybe we still have some work to do here.

How do we characterize the flow of media? In this context, we invoke the words “push” and “pull.” John Hagel describes pull as “creating platforms that help people to reach out, find and access appropriate resources when the need arises.” This morning I met with Evan Smith of The Texas Tribune, and he used the opposite word, talking about pushing media to readers where they are, rather than expecting them to come to you – “web site as destination” is obsolete in the world of social media.

I think they’re both correct. Is this a 21st Century media koan? I’d love to hear your thoughts.

Whatever the case, I don’t think we have a handle on the evolving flow of information online, any more than BP has a handle on the flow of oil from the MC252 spill (if you can call an explosive hemorrhage of oil a “spill”) in the Gulf of Mexico.

Fiber Fete: Google’s fiber testbed

Minnie Ingersoll of Google at Fiber Fete, talking about what Google is doing with its fiber testbed project.

What they want to do:
1) Next-generation applications.
2) Experiment with new and innovative ways to build out fiber networks.
3) Work with “open access” networks.

Not becoming a national ISP or cable TV provider. Google had suggested that the FCC create this kind of testbed, but realized the Commission had other priorities. Google decided the project would be within its own purview, based on its mission statement.

Application review process for proposals from cities wanting the testbed project has begun. Over 1,100 communities applied. Evaluating based on speed and efficiency of deployment. Understanding how the community will benefit. Much will depend on the conversation they have with the communities as they learn more about their needs.

Working now on developing the offering. Openness – is this a white-label or wholesale service? What products and service partnerships are possible? Google will also develop its own high-bandwidth offerings.

May choose more than one community with very different characteristics.

Applications are full of civic pride. You learn what makes the various locations/communities unique.

Will announce services as soon as possible.

Leverage the enthusiasm – Google to create a web site to help communities connect with other resources. They don’t want cities to feel excluded from getting higher-end broadband services.

What policies need to be in place to support broadband now?

Brough Turner asks about middle mile networks. Something Google looks at – where do they already have fiber? Sometimes communities farthest from the infrastructure, though, are the ones that would benefit most.

Bice Wilson: enthusiastic about the “leverage the enthusiasm” concept. All the people in the room represent communities that are inventing this new cultural process, and Google is helping drive it. Are you planning to make this useful in that way (as a model)?

Google is looking for specific ways to keep the applicant communities talking to each other. Is it an email list? A forum? A wiki? They’re definitely looking to open-source what they do – creating white papers and best practices so that others can benefit.

David Olsen from Portland: what type of testbed environment? Also thanks for what Google has done to raise consciousness of cities about significance of broadband.

Urban vs rural: not sure whether it will be 1, 2, 3, 4 communities. Might be in different communities, or neighborhoods or subsets of a community. Will probably be looking for more than one community, with differences. Probably a mix.

David Weinberger wonders how raw the data Google outputs about the project will be, and how immediate. Google hopes the amount and immediacy of the data will satisfy, and will be responsive to feedback – people can let them know whether they’re providing enough info.

Marc Canter brings up political issues around municipalities providing pipes. Have they heard from AT&T and Comcast, etc.?

Google is definitely inviting the other providers to use their pipes. There’s plenty of room in the broadband space, and no one company has a monopoly on innovation. Discussions are ongoing about partnerships.

How open is open? What rules will there be?

Google will advocate policies around net neutrality, e.g. no content discrimination.

Garrett Conrad asks about leveraging Google’s apps vs. apps the community might come up with.

The community aspect will be key, crucial. It would be wrong for Google to tell the community what they need… will be listening, but will also be prepared to offer guidance and applications.

Leslie Nulty asks what business structure concept lies behind this. It’s not completely clear. It appears that Google intends to build, own, and operate these networks – become a monopoly provider. What are the checks and balances? Will Google become an unregulated monopoly?

Some of the checks will be the published openness principles Google will be expected to be held to. Not a monopoly. Will offer reselling opportunities.

Canter: if you’re open, it’s not a monopoly.

The openness is of the service offering on top of our pipe. We’re not trying to force people into using Google apps.

Google does plan to build, own, and operate these networks in trial communities.

Nulty: price is the question for any community that might want to partner with Google.

Services will be competitively priced. Google will negotiate with the municipalities on a contract that both sides think is fair. Google will be as transparent as they can be, and if there’s something they’re missing, let them know.

Are state regulations preventing broadband deployment a barrier? Google wants to learn more about regulations and policies, and asks communities to explain the regulatory barriers they face as part of the RFI response.

Chris: Communities United for Broadband on Facebook.

Nancy Guerrera (sp?) wants to know what it’s like working with local communities. She refers to a previous project to set up muni wifi in San Francisco; Google ended up building in Mountain View after discussions with SF didn’t work out. Google learned from this, though each community is different.

Will Google’s transparency extend to documenting issues/discussions with policy organizations?

Yes – if the press doesn’t document it for us, we’ll do our best to document the legal and regulatory barriers we encounter.