Fires, storms, and the crisis of authority

Smoke from the Bastrop Fires

Of course we’ve been tracking the fires in the Austin area, especially the massive complex fire in Bastrop, and I’ve been thinking about how to make sense of the disaster. Marsha and I drove toward Bastrop, Texas, on Monday to get a better look, not expecting to get very close (we didn’t want to be in the way). We drove within ten miles – not close, but close enough to capture photos of the massive tower of smoke: http://www.flickr.com/photos/weblogsky/sets/72157627607062626/ Jasmina Tesanovic was there the same day, and posted her thoughts here.

The whole area is a tinderbox after an unprecedented drought. A great, now dangerous, feature of the Austin area is its pervasive greenspaces: we’ve built residences and other structures close to, and surrounded by, foliage that is now potentially explosive.

The current disastrous fires have a climate change signature; they’re products of the record Texas drought – at least exacerbated by, if not caused by, global warming. They were fanned by strong, oddly dry winds from Tropical Storm Lee, and while no single storm is specifically related to global warming, the increasing number and severity of such storms may be. While I’m not looking for a climate change debate here, it’s frustrating that the issue has been politicized on both left and right, and that leaders have ignored scientific consensus for so long that prevention is no longer an option. We should be thinking about adaptation, but that’s not happening, either.

In fact, we’re not prepared for disaster. Marsha and Jasmina returned to Bastrop Tuesday hoping to volunteer, and Marsha spent much of Wednesday as a volunteer at one of the evacuee shelters. So much is happening so quickly that it’s hard to manage – and there’s no clear leadership or structure. The fire has destroyed 1,386 homes, and it’s still burning. Much of the attention and energy is focused on core concerns. On the periphery of the disaster, there are too few leaders or managers and too many details to manage.

This is a metaphor for global crisis. Economies are challenged and systems are breaking down; at the same time, we have real crises of authority. At a time that demands great leadership, we have no great leaders. Politicians left and right are stumbling. In Texas, which has needed great, insightful leadership for some time now, the governor dismisses science and leads rallies to pray for rain.

In difficult times past, great leaders have emerged. Where are they now?

Tracks on the Moon

NASA’s Lunar Reconnaissance Orbiter caught nice shots of the Apollo landing sites, including tracks left by astronauts in 1969–72. The moon’s such a bleak landscape; odd that it so captured our imagination back then. Article in the Guardian UK.

Then again, there’s Gurdjieff’s view: “Everything living on the Earth, people, animals, plants, is food for the moon…. All movements, actions, and manifestations of people, animals, and plants depend upon the moon and are controlled by the moon…. The mechanical part of our life depends upon the moon, is subject to the moon. If we develop in ourselves consciousness and will, and subject our mechanical life and all our mechanical manifestations to them, we shall escape from the power of the moon.”

More from Richard Myers:

Within the polytheistic world there is a partial correlation with The Fourth Way’s teaching regarding man as a food for the moon. In the mythology and the teachings of several of these polytheistic religions is found the belief in the moon as the repository of the finer bodies of man. In Etruscan mythology, the moon or “Luna” is the underworld, where souls go to rest and the production of new souls begins. In Greek mythology, upon death the soul and psyche first go to the moon and then go to the underworld where there is a second death and a separation. The soul then goes to the moon and the psyche to the sun. The Bhagavad-Gita describes two paths souls travel after physical death; one is the path of the sun, also known as the bright path, and the other is the path of the moon, known as the dark path. Gurdjieff states that man is a food for the moon and these myths and beliefs to a degree correlate with his statement. Gurdjieff also states that, “We are like the moon’s sheep, which it cleans, feeds and sheers, and keeps for its own purposes.” Though pantheistic religions and mythology put man under the sway of the gods they do not equate man to the status of domesticated sheep. This degree of mechanical control by the moon over organic life on Earth and man in particular is probably unique to Fourth Way teaching. Gurdjieff’s statement also implies that the moon is somehow feeding man. There is indeed some basis in Hindu beliefs that man does, at least indirectly, receive something from the moon in the form of soma. Soma in Hindu mythology is an elixir of immortality that only the gods can drink; the moon is said to be the storehouse or cup of soma. Though soma is believed by some to be a plant-derived intoxicant or hallucinogen, this may be a distraction from its real meaning. A verse from the Bhagavad-Gita speaks to this: “Permeating throughout the planetary system I maintain all moving and stationary beings by my potency and having become the essence of the moon, I nourish all plant life.”

Consciousness in a Box

Stumbled onto this piece I wrote in 1994 for FringeWare Review, triggered by a meeting with Hans Moravec, as I recall.

Robotics has two sides — real-world practical application and
development, and scifi mythopoetic phantasy construction — and like
most real/surreal dichotomies of the Information Age, these two sides
are blurred and indistinct within human consciousness, whatever that
might be….

A good question in this context: What is consciousness? This is hard
to answer because of the obvious blind spot inherent in self-definition
(conscious process defining consciousness): you can’t see the forest for
the trees or the neurons for the nerves, as the case may be. Because
the “conscious” part of me is as deep as I usually go, or as I need to go
in order to play the various survival games, I tend to mistake
consciousness, an interface between the internal me and the external
“thou,” for the totality of my being, for a real thing rather than a
conveniently real-seeming process. (Then again, if consciousness
defines reality, what’s real is what consciousness says is real, but that’s
a digression….)

The sages tell me I’m delusional (attached to the delusion of samsara,
of the world, in the Buddhist view), but I can’t quite figure out what
this means. That’s because “I” am as much the noun, delusion, as the
adjective, delusional. So much of what I am is filtered out,
inaccessible to the ego-interface.

But wait. The delusional “I am” is a convenience that facilitates
individual survival-stuff, so I’m not dissin’ it. The purpose of this rant is
to make a point, not about ego or delusion (I’ll let the sages stew in
those juices), but about robotics and AI research and the belief, often
expressed in both scifi and real-world contexts, that you, or more
precisely “your consciousness,” can be stored digitally. In most scifi
depictions of “consciousness in a box,” the object is immortality: you
store what’s essentially you, and it “lives” forever, or until the plug’s
pulled, whichever comes first (I know where I’m putting my money).
In scifi, this is just another device for exploring the question of
immortality, which has fascinated scifi authors and the mythmakers that
preceded them as a way to come to terms with the death thing. Trying
to rationalize the inescapable. But you find other optimistic folks
(Hans Moravec, the Extropians) who are quite serious about the
potential for immortality and who consider the consciousness-in-a-box
scenario a viable means to that end.

I have a couple of problems with the scenario, myself, the first being
that, even if you digitized your consciousness and stored it in a
psychoelectronic device of some kind, it would not be you. Your
awareness would still fold when you discorporate; the thing that’s
stored might emulate your thinking or even your behavior, but it would
be a simulacrum, like you but not you.

The other problem I have is best expressed in the form of a question:
What are we storing? There seems to be a confusion between process
and object. If consciousness is indeed only a shallow process handling
the various negotiations between what we call subconscious and
external reality, what is the character of the data you’re uploading and
defining as you? Rules, implementations, stored memories —
consciousness is really a hash consisting of no single storable entity.
It’s like trying to package a tornado — what do you put in the package?
Do you include all the chaotic elements of weather formation and all
the applied physical rules that are manifest in the tornado’s brief life
span as a process event?

The bottom line here is that you can’t really isolate a single entity
“consciousness” and divorce it from its generative context.

Can you even simulate consciousness? Or intelligence, which
probably has a clearer rule base than the vaguer concept of
consciousness but is still elusive? An “artificial” intelligence with
sufficient density and complexity to mimic human consciousness is the
very real goal of a particular thread of applied research, but so far no
digital simulacrum has been constructed that “thinks” as we know
thinking. The problem here resonates with the earlier argument about
stored consciousness: we don’t have clarity about the definition and
composition of human consciousness, so how can we copy it? It’s
hard enough to copy something we know.

The mythic representations of scifi robots like Robbie or Gort or
Hal9000 are like consciousness in a black box, deus-ex-machina stuff
that might serve to carry a plot forward but, to those who punch code
into dumb processors day after day, doesn’t ring any more true than a
fairy tale or myth, which is to say that it’s more about wishes and fears
than about any current or projected reality. It’s one thing to load a few
rules, even with algorithms to simulate heuristic process, into the CPUs
of this world, but it’s a real stretch to conceptualize silicon-based
thinking or awareness.

Human and animal consciousness are products of code generations
and modifications that reach ’way back, perhaps to the inception of the
universe, and are driven by an unfathomable creative force compared
to which our efforts to construct artificial minds seem short-sighted
and pitiful. Then again, I suppose in our efforts to mimic
“the gods” we’re channeling that creative force, whatever its true
origins, because it must be inherent in the code structure of the human
genome. And if that’s so, perhaps we’re destined to coevolve with our
own creations, which have themselves evolved from basic practical
and conceptual tools to today’s ubiquitous computing systems. This
coevolution may produce cyborganic life forms which, though not
created entirely by our hands, may be seen as products of an obsessive
desire to be as we imagine gods to be, creatively self-perpetuating and
therefore, as a race if not individually, immortal.

Time and the brain

Burkhard Bilger in The New Yorker profiles David Eagleman, a brilliant researcher who’s studying the brain, consciousness, and the perception of time. At a personal level, I’ve spent a lot of time in recent years studying and trying to comprehend my own degrees and levels of consciousness and perception. We think of our “conscious experience” as a constant and our unconscious as inaccessible… but through attention we learn that there are gradations in the range from conscious to “un-” or “sub-” conscious experience; that perceptions can vary with context; that memory is selective and undependable; that our perception of the world is generally incomplete, though we do a good job of filling the gaps. When David Eagleman was a child he fell from a roof and realized that his perception of time had changed as he was falling. Now he’s doing evidence-based research into how people experience the world, how that experience varies, and how the brain and mind work. Read about it here. If you know of similar studies and writings, please post in comments.

Supermoon

Tonight we’ll have a super perigee moon, “a full moon of rare size and beauty.”

“The best time to look is when the Moon is near the horizon. That is when illusion mixes with reality to produce a truly stunning view. For reasons not fully understood by astronomers or psychologists, low-hanging Moons look unnaturally large when they beam through trees, buildings and other foreground objects. On March 19th, why not let the ‘Moon illusion’ amplify a full Moon that’s extra-big to begin with? The swollen orb rising in the east at sunset may seem so nearby, you can almost reach out and touch it.”

[Link]

Self-organizing solar panels

“Scientists at MIT have discovered molecules that spontaneously assemble themselves into a pattern that can turn light into electricity — essentially a self-creating solar panel. In a petri dish.” [Link]

I was wondering whether this discovery has a practical application. A commenter asks the same question; someone else answers:

The implication of the addition of an ‘additive’ to disassemble into a liquid ‘soup’ is that the stuff can be sprayed/painted onto a surface. It also means that it can be mixed with polymers and woven into materials etc.

Paint or spray your house/car/boat/aircraft with it, and decide you want a different colour? No problem, spray the additive/solvent and it comes off.

(Thanks to Audrey Thompson for the pointer.)

Look like a winner

Yesterday I had the privilege of attending an informative talk about effective communication by my friend and colleague Kevin Leahy, aka Knowledge Advocate. One point among many in Kevin’s talk: the content of a communication doesn’t matter as much as we think it does. Kevin, an attorney, said that post-trial conversations with jurors find that they often recall little about what was said, but much about how they felt about witnesses, based quite a bit on their perception of body language. Coincidentally, this morning I found an article about research, conducted by MIT political scientists, showing that the appearance of politicians strongly influences voters, and that people around the world have similar ideas about what a good politician looks like. [Link to the paper “Looking Like a Winner” (pdf)]

Sounds like you can take this to the bank: how you LOOK is important, and your BODY LANGUAGE is also important. What you think and what you say? Not such a big deal.

Another point, reading between the lines of the MIT Study: you’re better off if how you look is congruent with people’s perception of your role – there are definite stereotypes. If you don’t look like a politician but you have political ambitions, it’s better to work behind the scenes. (I think politicians already know this).

A difference image

Ann Corwin created this terrific painting of Charles Babbage’s Difference Engine and posted it at her site, Existence is Wonderful. She’s using a color palette that fits the tones in her living room – I think it’s an effective interpretation.

More on Babbage and his machine here:

UTeach

I spent today at the 2010 UTeach Conference here in Austin. UTeach is an acclaimed teacher prep program at the University of Texas. Attendees were mostly K-12 teachers and university professors from across the U.S. I heard about UTeach’s STEM focus (Science, Technology, Engineering, and Math), New Technology High Schools in Napa and Manor, project-based learning, Knowing and Learning in Math and Science, etc. I was primarily interested in the possibility of collaborative projects and learning involving multiple classrooms and disciplines, mediated by social technology. I was live tweeting the event. There were multiple sessions per time slot, so I only got a slice of it. (I also missed the events on Tuesday, and probably can’t make it tomorrow – so much more to learn about learning.)

Foraging and surfing

I’ve often said that we don’t know enough about how people behave online – e.g. how they read blogs or other web sites. Do we visit the same sites over and over again? Or do we surf, following links we stumble across as we wander, and now, with pervasive social media, those that are posted on Twitter, Facebook, LinkedIn, etc.? More likely both – we have some sites we visit regularly, but we also bounce around a lot.

Behaviors are probably more complex than we think. Seth Godin writes that he learned, from Clay Shirky, of something called a Lévy flight. Example: “an animal that forages will hang out in a small area, looking for nuts or berries, then will realize it has used up all the likely sources in this spot. It will then head off in a random direction, walk many paces, and start foraging again.” The online version:

Someone discovers your site. They poke and prod and join and return and return again. Then they feel as though there’s no more benefit and they move on, surfing until they find another place to forage.

Godin calls this “a much more nuanced representation of consumer behavior than solely thinking about the ideas of brand loyalty or random web surfing.” But I’m enough of a nimrod to want to substitute the word human for consumer.
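For the curious: a Lévy flight is easy to play with in code. Below is a minimal Python sketch – my own illustration, not anything from Godin or Shirky – of a random walk whose step lengths are drawn from a heavy-tailed Pareto distribution, so most moves are short foraging steps and a few are long jumps to a new patch.

```python
import math
import random

def levy_flight(steps=1000, alpha=1.5, scale=1.0):
    """2-D random walk with heavy-tailed (Pareto) step lengths:
    mostly short 'foraging' moves, occasionally a long relocation jump."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        step = scale * random.paretovariate(alpha)   # heavy-tailed step length
        angle = random.uniform(0.0, 2.0 * math.pi)   # uniformly random direction
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

if __name__ == "__main__":
    path = levy_flight()
    lengths = sorted(math.dist(a, b) for a, b in zip(path, path[1:]))
    print("median step:", round(lengths[len(lengths) // 2], 2))
    print("longest step:", round(lengths[-1], 2))
```

Plot the resulting path and you get the familiar picture: tight clusters of short hops connected by occasional long jumps – roughly “forage here, then surf off to a new spot.”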

Information/culture wars

Dr. Spencer Weart has written a history of the “climate fight” that makes interesting points about the democratization of knowledge. [Link] He talks about a decline in the prestige of all authorities, expansion of the scientific community with greater interdisciplinarity, and a decline of science journalism.

These trends had been exacerbated since the 1990s by the fragmentation of media (Internet, talk radio), which promoted counter-scientific beliefs such as fear of vaccines among even educated people, by providing facile elaborations of false arguments and a ceaseless repetition of allegations.

Mike Hulme’s response:

I think Spencer is helpful by suggesting there is a much bigger story happening in the world of science, knowledge and cultural authority of which the climate change incidents of this moment are just part. These are going to be increasingly difficult challenges for many areas of science in the future – how is scientific knowledge recognized, how is it spoken and who speaks for it, and how does scientific knowledge relate to other forms of cultural authority. It’s not just about the politicization of public knowledge, but also about its fragmentation, privatization and/or democratization.

In comments, Bob Potter says

The key phrase is “expert public relations apparatus”. In the mid 20th century scientists had the luxury of public respect. People believed what they said. As public confidence in authority figures of all types waned, scientists took no notice. When global climate change became a serious issue scientists still assumed that a “word from the wise” would be sufficient, and that is all they brought to the fight. They lost the war because industry had a public relations army and they did not.

All great points: we’re in the midst of culture and information wars, and the concept of “authoritative voice” is less meaningful, if not lost. We can’t fix this by going backwards… as so many of us have said before, we have to focus more than ever on media literacy. Should be right up there with reading, writing, and arithmetic.

Finding the forks in the road

Joel Makower considers four studies that explore the impact on business of climate change and related issues – the need for water management, and uncertainty about energy sources. Says Joel, “our world these days seems to be a succession of forks in the road, points at which decisions need to be made about which pathway we collectively must take.” This reminds me of something Rod Bell used to say, repeatedly: “To solve big problems, you have to go through big confusion.” [Link]

Another case where size doesn’t matter

I’ve often wondered whether insects are more intelligent than we think. A Science Daily article suggests that “tiny insects could be as intelligent as much bigger animals, despite only having a brain the size of a pinhead.” The article goes on to say that brain size is not predictive of intelligent behavior, that “bigger animals may need bigger brains simply because there is more to control.” Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary’s Research Centre for Psychology, says “In bigger brains we often don’t find more complexity, just an endless repetition of the same neural circuits over and over. This might add detail to remembered images or sounds, but not add any degree of complexity. To use a computer analogy, bigger brains might in many cases be bigger hard drives, not necessarily better processors.”