
Wednesday, September 29, 2010

DEATH BY DATA


I just finished listening to the complete symphonies of Franz Josef Haydn, who is widely recognized as “the father of the symphony.” His achievement is incredible if only for its sizeable output—some 107 works in all. The reason I bring it up is not out of any odd feeling of accomplishment, though the experience was filled with musical wonders, but because it’s made me think that before digital media came on the scene, it would not have been possible to listen to them all—unless, of course, I had been able to sit through the four years of concerts that it took for the Stuttgarter Kammerorchester to record the 37-CD set.

The digital compact disc made available comprehensive box sets of individual artists, composers, and bands in diverse collections that span the history of music genres, including the arcane. It may be daunting to confront an artist’s complete works when they exhibit the scale of a Haydn, for example. The Internet has also made it possible to expand one’s reach exponentially into the world of the consequential, in addition to burying us in minutiae and trivia. The question becomes—how do we go about discovery and finding meaning in this mirror maze of data?

It’s nothing new to say that we are suffering under the weight of information and the grip of technology. Jaron Lanier’s recent book, You Are Not a Gadget, is as good as any in the list of jeremiads warning us about giving up our souls to silicon-based lifeforms. Personally, I experienced a tipping point this past summer, with my inbox groaning for the mercy of the delete button and unsubscribe links, which became my truest online friends.

The data available through search at a mere mouse click is imposing as well. Recently, my ten-year-old son expressed an interest in movies about World War II. He came to me frustrated by the wide range of choices offered by Netflix. It became apparent to me that his desire for discovery needed human intercession—and not the kind offered by several engines that pride themselves on non-robotic crawler solutions and even so-called "human search." Collaborative filtering and recommendation engines be damned, what he was asking for was curation.
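For readers curious what the machinery I’m dismissing actually does, here is a minimal sketch of item-based collaborative filtering—my own illustration, not Netflix’s system, with made-up ratings and hypothetical titles. It predicts a score for an unseen film purely from how users’ ratings co-vary; no human judgment about which war picture is actually worth a ten-year-old’s Saturday ever enters the math.

```python
# Toy item-based collaborative filtering (illustrative only; not any real service's algorithm).
# Ratings are stored as user -> {title: score}.
from math import sqrt

ratings = {
    "alice": {"The Longest Day": 5, "Tora! Tora! Tora!": 4, "Space Comedy": 1},
    "bob":   {"The Longest Day": 4, "Tora! Tora! Tora!": 5, "Space Comedy": 2},
    "carol": {"Space Comedy": 5, "The Longest Day": 2},
}

def cosine_similarity(title_a, title_b):
    """How similarly two titles are rated by the users who rated both."""
    common = [u for u in ratings if title_a in ratings[u] and title_b in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][title_a] * ratings[u][title_b] for u in common)
    norm_a = sqrt(sum(ratings[u][title_a] ** 2 for u in common))
    norm_b = sqrt(sum(ratings[u][title_b] ** 2 for u in common))
    return dot / (norm_a * norm_b)

def predict(user, candidate):
    """Score an unrated title as a similarity-weighted average of the user's own ratings."""
    seen = ratings[user]
    weighted = [(cosine_similarity(candidate, t), r) for t, r in seen.items()]
    total = sum(w for w, _ in weighted)
    return sum(w * r for w, r in weighted) / total if total else 0.0

print(predict("carol", "Tora! Tora! Tora!"))  # a number, not a reason
```

The output is a number with no context attached—which is exactly the gap a curator fills.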

The future of search is curation. I am convinced it will be at the foundation of many successful business enterprises, and for individuals who can provide an editorial perspective on qualifying information. It’s not enough just to make the information available, as we have been finding out. Say you were new to rock and roll—or Haydn, for that matter. Where would you start? Google? Wikipedia? iTunes? And if so, how reliable are these methods? Google’s acquisition of Metaweb last July speaks to emergent search methodologies that attempt to provide a layer of contextualization. Wolfram|Alpha is another that steps up the visual component of search.

Frank Zappa once pointed out to me in conversation that the binary mind behind modern computer technology is more limited than we think, particularly when taking into consideration the nature of time. He saw the conventional perspective of past, present, and future augmented by “never” and “eternity,” and offered a vision of time as spherical and non-linear. He suggested that a computer that added these two features to the conventions of "on" and "off" switching would yield results more in keeping with the way our brains actually live in time—radially. Before he died in 1993, Frank joked that the Japanese “had probably already been working on it.”
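Purely as a whimsical sketch of my own—Zappa specified nothing of the sort—one could imagine what a four-state machine might look like, with “never” and “eternity” sitting alongside the familiar on and off. The state names and the toy “and” rule below are entirely my invention.

```python
# A playful four-valued logic inspired by the conversation above; not a real design.
from enum import Enum

class TimeState(Enum):
    OFF = 0        # conventional binary false
    ON = 1         # conventional binary true
    NEVER = 2      # excluded from time altogether
    ETERNITY = 3   # present at every moment

def conjoin(a: TimeState, b: TimeState) -> TimeState:
    """A toy 'and': NEVER dominates everything, ETERNITY defers to the other operand."""
    if TimeState.NEVER in (a, b):
        return TimeState.NEVER
    if a is TimeState.ETERNITY:
        return b
    if b is TimeState.ETERNITY:
        return a
    return TimeState.ON if (a is TimeState.ON and b is TimeState.ON) else TimeState.OFF

print(conjoin(TimeState.ON, TimeState.ETERNITY))  # TimeState.ON
print(conjoin(TimeState.ON, TimeState.NEVER))     # TimeState.NEVER
```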

The religious scholar Mircea Eliade once pointed out that the end of an era or great age often generates a popular belief that if all information were made available, the Answer would then present itself. Of course, if Google were a religion, this idea would be the central tenet of the digital faith—and any entity whose corporate philosophy is “You can make money without doing evil” might arouse suspicions. Its mega-initiatives like Google Earth and Google Books should raise an eyebrow at least. Who knows, maybe Google has already discovered the Answer to the Answer.

But, on the whole, I prefer to look for the answer in music, say in one of Bach’s inventions or in a John Coltrane solo, rather than in any old text-based search. It is here that we are presented with the age-old battle of what came first at the Creation—a subject of one of Haydn’s masterworks as well—did the universe start with light, as in a very special visual effect, or was it born of sound, mantra, or "the Word"? I’ll place my bet on the sound of music any day, because a Google search I just did yields 146,000,000 results for “Let there be light” versus 203,000,000 for “The Big Note," which wins—so it must be true...

Saturday, September 5, 2009

THE DAY THE WORLD ENDED


Last month, while American consumers were being prepped and primed by media outlets and marketers for the 40th anniversary of the Woodstock Festival, another anniversary was taking place at the beginning of August. I don’t want to be some kind of killjoy and take away from the nostalgic image blitz of stoned-out youth frolicking in the upstate New York mud, bathed in the electric rain of rock gods and demigods, many of whom I still worship. But, as Frank Zappa once said to me, “The world will end in nostalgia,” so it seems logical that I can’t get thoughts of August 6, 1945 out of my mind. Maybe it’s because the hills above my house have been on fire for the last several weeks, raining down ash and producing atmospheric conditions around us that resemble the smoky, yellow, eclipse-lit haze of some other planet. Driving back home from San Diego, I could see the giant mushroom cloud pluming over Pasadena from over a hundred miles away.

At 8:15 AM on that August morning 64 years ago, a B-29 bomber dropped a single bomb with the charming nickname “Little Boy” over the Japanese city of Hiroshima. On August 9, a second bomb called “Fat Man” was dropped on the city of Nagasaki. These particular targets were chosen, according to Stephen M. Younger in his book, "The Bomb: A New History", because they “had not suffered from the devastating bombing raids that had reduced Tokyo and other cities to little more than smoldering ruins. The hills that surrounded Hiroshima and Nagasaki would focus the effect of the blast, further increasing the destruction caused by the bombs.”

“Little Boy” fell for forty-three seconds before exploding nineteen hundred feet above Hiroshima. The height had been chosen “to maximize the damage produced by the expanding nuclear fireball.” Its detonation created an intense flash that was called “brighter than a thousand suns.” Within seconds, an immense shock wave and firestorm swept the city, destroying everything in its path, including some 68,000 buildings. Three days later, as Younger reports, “the United States demonstrated to Japan and the world that Hiroshima was not a one-off event” when it completely destroyed Nagasaki with a second atom bomb.

I heard a recent, breathless radio promo for a show called “Surviving Disaster” on Spike TV that described it this way—you have 20 seconds to cover your eyes and about 20 minutes to take cover from radioactive fallout. The promo ponderously warned, “It’s not a question of whether it will happen, but when.”

Just how the show arrives at the conclusion of inevitability is unclear, but the leap from the catastrophic events of Hiroshima and Nagasaki to nuclear attack as entertainment value is a mind-boggling, though not exclusively American, impulse. Godzilla is not only an iconic film monster, but is regarded in Japan with an almost religious reverence. One reason why is that Godzilla, Rodan, and other Japanese monster movies have been seen as a symbolic, subliminal response inspired by the Japanese experience of the atomic attacks. Godzilla awakes in the film as the result of hydrogen bomb tests in the Pacific. Despite its grave delivery, the Spike promo is completely removed from the terrible reality of what actually occurred in August 1945.

The death toll from both bombings has been estimated at well over one hundred times the casualties of the 9/11 attacks. This figure includes an estimated sixty-six thousand to one hundred forty thousand instant deaths in Hiroshima and an estimated forty thousand in Nagasaki. We know that in the five years immediately following, a quarter of a million more died, with untold hundreds of thousands more dying in the decades after the bombings from radiation-related diseases. But statistics remove us from the human factor of disaster, and the horror of Hiroshima and Nagasaki is beyond human imagination.

Hiroshima has been called “the exclamation point of the twentieth century”, but two perspectives from survivors are more than enough to tattoo the pictures forever in one’s brain. Stephanie Cooke tells of one in her recent book, "In Mortal Hands: A Cautionary History of the Nuclear Age": “…a nineteen-year-old girl who survived reported a remarkable sight near a public garden. Amid the bodily remains, burned black and immobilized at the moment of impact, there was, she said, ‘a charred body of a woman standing frozen in a running posture with one leg lifted and her baby tightly clutched in her arms.’”

In "Uranium: War, Energy and the Rock That Shaped the World", Tom Zoellner describes how Japanese writer, Yoko Ota, remembers the white flash as “the collapse of the earth which it was said would take place at the end of the world.”

Zoellner continues, “Even President Truman, who was famously coolheaded about the decision to use the weapon on Japan, wondered in his diary if the act he would soon authorize was ‘the fire destruction prophesied in the Euphrates Valley Era, after Noah and his fabulous ark.’"

When news of the successful atom bombing of Hiroshima reached the team of scientists behind its invention in Los Alamos, New Mexico, “there was a general excitement, and scientists rushed to book tables at Santa Fe’s best restaurant to celebrate the achievement. But that night’s party on the mesa was a grim affair. Almost nobody danced, and people sat in quiet conversation, discussing the damage reports on the other side of the world. When J. Robert Oppenheimer left the party, he saw one of his colleagues—cold sober—vomiting in the bushes.”

The decision to drop the atom bomb is a controversy that will remain unsettled and is examined at length by Richard Rhodes in his books about the nuclear age. One school of thought is that the Japanese doctrine of "defense at all costs" was a bluff; another indicates that they had already expressed a willingness to negotiate a cease-fire through Russian back channels. According to the Russians, the atom bomb was secondary, and it was Moscow’s declaration of war against Japan that was the deciding factor in ending the war.

In her illuminating book, "Troubled Apologies: Among Japan, Korea, and the United States", Alexis Dudden describes both US media censorship and outright fabrication about the bombing of Japan as propelling “the basic story line for Hiroshima and Nagasaki that Americans would come to cling to as history at the cost of learning what was actually going on: ‘the bombs saved lives.’…the US government and its officially placed mouthpiece at the New York Times established as a fact that no one in Hiroshima had died from radiation and that only foreign lies (British or Japanese) suggested otherwise.”

The New York Times’ science writer who was an eyewitness over Nagasaki, William “Atomic Bill” Laurence, won a Pulitzer Prize for his early, evangelical coverage of atomic weapons. His account of the event demonstrates that he was not only distant from the event by mere altitude, but close to some kind of atomic rapture:

“Being close to it and watching it as it was being fashioned into a living thing so exquisitely shaped that any sculptor would be proud to have created it, one felt oneself in the presence of the supernatural…Awe-struck, we watched it shoot upward like a meteor coming from earth instead of from outer space, becoming ever more alive as it climbed skyward through the white clouds. It was no longer smoke, or dust, or even a cloud of fire. It was a living thing, a new species of being, born right before our incredulous eyes.”

Historically, not everyone was sold. Harvard physicist George B. Kistiakowsky witnessed the Trinity test in July 1945, only weeks before the atomic bombing raids on Japan, and called it “the nearest thing to Doomsday that one could possibly imagine. I am sure that at the end of the world—in the last millisecond—the last man will see what we have just seen.”

John Hersey was the first to write of the human factor, in his long August 1946 New Yorker essay profiling ordinary people on the ground in Hiroshima. The day after the Trinity test, sixty-eight scientists at the University of Chicago signed a confidential letter to Harry Truman urging him not to use the device. They wrote presciently: “If after the war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of this new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation.”

Leo Szilard, the scientist who persuaded his colleagues to write the letter and the man who conceived of the chain reaction and worked on the Manhattan Project, later referred to himself and other atomic scientists as “mass murderers.”

“Why did they have to go and drop another?” the wife of one of the atomic scientists asked upon hearing the news of Nagasaki. “The first one would have finished the war off.” Short of an apology, this kind of self-reflection on the part of civilians as well as the scientists—including Einstein—who were behind the creation of the Atomic Era leads one to wonder what we can do to make amends today.

There is a long tradition—if not ritual—of apology in Asian cultures. It is one that seems to have been adopted for some time by Americans, who are now accustomed to press conference scenes where morally straying politicians apologize to the nation, their constituents, wives, and families for errant behavior. More recently, other kinds of less predictable apologies have appeared.

Last February, the Senate apologized to Native Americans for atrocities committed during the opening and seizing of their lands. On July 29, the US House of Representatives issued a resolution formally apologizing to black Americans for slavery, one hundred forty years after its abolition. And on August 21, after forty years of silence, Lt. William Calley (the only Army officer convicted for the 1968 My Lai massacre) offered an extraordinary and unexpected apology at a local Columbus, Georgia Kiwanis Club, expressing his “remorse for the Vietnamese who were killed, for their families, for the American soldiers involved and their families. I am very sorry."

Twenty years ago, Congress apologized for the World War II internment of Japanese Americans in concentration camps. Why not Hiroshima? None other than the author of "Giving: How Each of Us Can Change the World" said the following in 1995: “The United States owes no apology to Japan for having dropped the atomic bombs on Hiroshima and Nagasaki.” Bill Clinton was just toeing a bipartisan line.

Alexis Dudden traveled with President George W. Bush on a trip to Tokyo during 2002 on a mission, among other things, to thank the Japanese government for its support of the War on Terror and to launch plans to celebrate the 150-year anniversary of Japanese-American relations. Dudden relates that she “had an unexpected, theatrical education in one of the trajectories of Hiroshima’s history during (a) routine walk.” As she passed down the main boulevard near the Japanese Parliament and National Library, “several of the notorious black trucks popular with the country’s extreme right wing passed…with the lead van blaring the customary martial songs. This was not unusual, but the message pouring from the loud speakers stopped me flat—‘Welcome to Japan, President Bush of the United States of America! Apologize for Hiroshima and enjoy your stay!’”

She goes on to say, “Throughout the recent era of apologies all around—or maybe in spite of it—there has remained one matter on which Washington holds firm, regardless of who is in office—there will be no apology for Hiroshima or Nagasaki.”

Originally, the lowest projection of how many American lives the bomb would save by avoiding a costly land invasion of Japan was twenty-six thousand. Dudden observes, “Americans transferred what happened—the destruction of Hiroshima and Nagasaki—for an event that never took place—the proposed land invasion of Japan—to stand in for history. By the early 1950s, the imagined truth was American myth, and in 1959, President Truman wrote for the record that the bombs spared 'half a million' American lives, and that he 'never lost any sleep over the decision.' Over the years…American storytelling has come to count the number of ‘saved’ Americans as high as 1 million. (This number appeared squarely in David McCullough’s 1993 Pulitzer Prize winning biography, Truman, despite abundant evidence to the contrary at the time.)”

Apologies are more complicated than they appear. Dudden’s book details the way that formal apologies can be used to cloak deeper strategy to avoid restitution and financial penalties. As to the US government’s obdurate stance, Dudden concludes, “The chronic inability to confront how America’s use of nuclear weapons against Japanese people in 1945 might constitute the kind of history for which survivors would seek an apology, let alone why the use of such weapons might represent a crime against humanity, is sustained by Washington’s determination to maintain these weapons as the once and future legitimate tools of the national arsenal. It is not at all by chance that among weapons of mass destruction—nuclear, chemical, and biological—only nuclear weapons are not prohibited by international law. Were it otherwise, the likelihood that the history of America’s use of them on Japan would generate charges of attempted genocide against the United States or Harry Truman would increase exponentially.”

Last April, President Obama made strong statements during a visit to Prague about his commitment to abolish nuclear weapons. His speech called for an international summit on the subject by the end of the year. The mayor of Hiroshima, Tadatoshi Akiba, said, “As the only nuclear power to have used a nuclear weapon, the United States has a moral responsibility to act,” defining U.S. responsibility in a historical context. Akiba asked Obama to hold the summit in Hiroshima.

Our family had a Japanese exchange student staying with us for a year. After she had been living with us for a while, I felt compelled to speak with her about her hometown of Hiroshima and to apologize in my own way for what had happened before either of us had been born. She seemed surprised at my gesture, and we spoke about the event in the abstract—her parents had been children at the time and spoke little to her, if at all, about their memories. I suppose that the American generation preceding mine, who experienced the direct consequences of World War II, might argue with my stance on apology, citing my distance from the events that defined them, in many cases, for the rest of their lives.

I find it interesting that the words “apology” and “apocalypse” have the same prefix. Apology is rooted in words originally meaning “regret, defense, or justification” and giving an account or story of oneself. Apocalypse comes through Latin from the Ancient Greek meaning “to uncover,” as in to lift a veil, and hence “revelation.” The prefix “apo” means “from, away, off”. Perhaps there is a connection between the act of apologizing and the avoiding of apocalypse—by this logic, if we lift the veil that hides our own truth, then revelation might follow. A year after the bombing of Hiroshima and Nagasaki, the esteemed Indian yogi Paramahansa Yogananda reflected on the discovery of uranium: “The human mind can and must liberate within itself energies greater than those within stones and metals, lest the material atomic giant, newly unleashed, turn on the world in mindless destruction.”

With the first new ruling party now established in Japan in over fifty years, an appropriate overture to the new government from the American President whose campaign mantra was “change” should be to agree to hold the Nuclear Non-Proliferation Treaty Review Conference in Hiroshima and take the world stage by opening his remarks with an apology to the people of Japan. Think about it the next time you are being served sushi—these people were our enemies? Or as Allen Ginsberg might say, “We are the Japan.”

Sunday, March 22, 2009

THE BIG NOTE OR HOW TO APPEASE YOUR LOCAL ANCESTRAL SPIRITS


Words have a strange power over us. Anyone who has been to a Death Metal concert, an opera, or a major sporting event, heard an orator like Martin Luther King, Jr., or witnessed Tibetan monks chanting “Om Mani Padme Hum” can appreciate this. Though most religious beliefs and scientific theories about the origins of the universe favor light at the beginning, there are some that don’t see, but hear, the beginning in the form of the First Sound.

The science of Hindu “mantras”, or sounds of power, is one living example from the Sanskrit tradition that is thousands of years old. This tradition, which also figures in Buddhism, is based on the ancient development of certain words, syllables, sounds, or groups of words distinguished by their ability to create transformation as tools of power. The respect accorded to the specific use of sound in these Eastern traditions is nowhere better attested than in the early use of music and sound as a weapon regarded as capable of being lethal—a feature, perhaps, that some Death Metal fans secretly wish for.

Marshall McLuhan observed that Hitler’s rise can be attributed in large part not only to the dire economic circumstances of 1930s Germany, but to his absolute command of the radio. Had he lived during the Age of Television, McLuhan points out, his frenzied, frothing-at-the-mouth, mad-eyed delivery wouldn’t have lasted a second.

Given the unique power of sound, it’s no wonder that the notion of its connection to the origin of the cosmos entered the realm of modern music as well. Pete Townshend incorporated the idea at the core of The Who’s “Lifehouse” project with the song “Pure and Easy”, which proclaimed, “There once was a note, pure and easy…” Maestro and 20th-century classical composer Frank Zappa was more expansive in expounding “The Big Note” in his early masterwork, “Lumpy Gravy”: "Everything in the universe is ... is ... is made of one element, which is a note, a single note. Atoms are really vibrations, you know, which are extensions of THE BIG NOTE...Everything's one note. Everything—even the ponies. The Note, however, is the ultimate power, but see, the pigs don't know that, the ponies don't know that..."

Zappa continued the dialogue in “A Different Octave” in his “Civilization Phaze III”:

Spider: We are ... actually the same note, but ...
John: But different octave.
Spider: Right. We are 4,928 octaves below the big note.
Monica: Are ya ... are you trying to tell me that ... that this whole universe revolves around one note?
Spider: No, it doesn't revolve around it; that's what it is. It's one note.
Spider: Everybody knows that lights are notes. Light, light, is just a vibration of the note, too. Everything is.
Monica: That one note makes everything else so insignificant.

Zappa was a great American satirist and sociologist in the tradition of Lenny Bruce and Robert Crumb, and it’s hard to separate his work from its milieu and the hippie scene that was often the subject of his mordant critique. Still, he also famously said: “Remember, Information is not knowledge; Knowledge is not Wisdom; Wisdom is not Beauty; Beauty is not Love; Love is not Music; Music is the best.”

Or as Germaine Greer said in a 2005 article in The Guardian, “In Frank's world, every sound had a value, and every action was part of the universal diapason, a colossal vibration that made energy rather than reflecting it.”

In our personal lives—as well as in the world of branding—sound is nowhere more refined and focused than in names. "What's in a name? That which we call a rose, by any other name would smell as sweet," says Juliet in Shakespeare’s immortal tragedy of “star-crossed lovers”. Unfortunately, these teenagers are from warring families, and she is telling poor Romeo that their family names don’t amount to a hill of beans compared with their love for one another. But names do matter, and harshly, as they were to find out by the play’s end. Fast-forwarding to the 21st century, history surrounds us, no matter where we live, in the place names that we often ignore or are oblivious to. Names are key to understanding not only because they provide a clue to our own identity as individuals, but because place names can give us a better sense of who we are by locating us in time and space.

After I first moved to LA, one of the things that amazed me was how close history was, just in the passing freeway signs that beckoned in English, Spanish, and sometimes odd names like Malibu, Cahuenga, Azusa, and Cucamonga. Curious, I started trying to locate old maps and records to trace names that had this distinctly non-European flavor to them. I mean, growing up in New York, I knew that Manhattan was named after a tribe called the Mannahatta and that Wall Street was named after…well, the wall of a fort. But maybe it’s just all the tall buildings and pavement that seem to have removed the past so entirely from the landscape—though there is now an interesting project called, surprisingly enough, The Mannahatta Project, recently profiled in the New Yorker, that is recreating what the New York island looked like prior to contact by Europeans.

But it was so easy to squint my eyes somehow in the San Fernando Valley and look at the mountains, washes, and hillsides still intermittently decorated by chaparral and imagine what it was like only a century or less ago. History is close here relative to the Old World. The last full-blooded Chumash Indians died in the Santa Barbara area around 1900. I have a photo of an Indian village in Riverside (which is about half an hour from my house) that dates as recently as 1920. I say “recently” even though it’s almost 90 years ago, but the past seems more present here to me for some reason. Perhaps it’s just that Hollywood has made its mark by adding more ghosts, even if they are frozen by artificial light, to the ancestral spirits and ghosts of the conquistadors, padres, prospectors, cattlemen, and railroad barons that seem to lie rustling just under the cover of the Santa Ana winds and coastal morning fogs. And I guess it was one of those coastal names that started a fascination and study with the Chumash people who lived here that has lasted to this day. I got hold of a Rosetta stone of sorts in the form of notes from an anthropologist named John Peabody Harrington, who logged place names and other priceless ethnographic data from the few surviving Chumash during the early part of the 20th century. One such location was “Humaliwo”, which in the original language described the place “where the surf crashes loudly”, a fact that would be appreciated by surfers at approximately the same location, known today as “Malibu”, where some 3,000 villagers once lived.

The Chumash Indians of Southern California were extraordinary in many ways. They had the only ocean-going canoes in the Western Hemisphere outside of the peoples of the Northwest Coast of the continent. They had a working knowledge of astronomy, a complex social system, and a trading culture, and they were world-class artists, as evidenced by the rock art that still lies hidden to amaze the lucky beholder in sandstone outcrops, caves, and other secret places in their territory between Malibu and Morro Bay.

The Chumash believed that dolphins, a close aquatic relation of the porpoise, were guardian spirits who literally served to hold up the world. It’s no wonder that the Hollywood film industry found its center here, for what better description of the artist is there? The dolphins accomplished this feat by swimming around the earth and weaving a web of salty spray in their looping up-and-down motion between the worlds of ocean and air.

According to local Indian legend, sympathetic spirit beings took pity on the lonely dwellers of the ancient Channel Islands and built a rainbow bridge to the mainland, thereby making way for the first pilgrimage of humans. Unfortunately, not all the humans in this exodus were able to maintain their balance on the bridge of colors; those were the “unlucky ones” who tumbled off before reaching land and transformed into dolphins when they hit the wild currents below.

Today, a curious vernacular adoption of the word “porpoise” employs the term as a verb to describe an up-and-down motion similar to the movement of these aquatic mammals. This particular usage is meant to describe periodicity in the life cycle of human beings, societies, and even corporations. In this perspective, one’s life “porpoises” as we navigate the highs of comedy and the lows, or “slings and arrows”, of tragedy. One role of drama and the arts is to provide human beings with a method to penetrate this mysterious cycle of time, and to create the absolute that is possible in the moment that time stops and we are part of the whole. “Porpoise” can then be twisted, as a sound-alike in a Marx Bros. routine, to suit our agenda here and mean “purpose” at this intersection where the creative act defines space and time.

It is not recorded whether dolphins or porpoises accompanied the first European explorers to sight the area now known as Greater Los Angeles. What is known is that these earliest explorers, who were part of the Spanish Cabrillo Expedition of 1542, recorded on Sunday, October 8, their arrival at “the mainland in a large bay” (most likely Los Alamitos or San Pedro Bay), which they named “Baia de los Fumos”, or “the Bay of Smokes”. The name was given to commemorate the haze that even then covered the landscape in an unreal, mysterious curtain of vapors from the campfires of the several dozen Indian villages that dotted the region as far as the eye could see. On the approximate site of one of these villages (belonging to the Gabrielino or, as they called themselves, the Tongva or “people”), immortalized in old maps of the desert padres as the town of “Puvungna”, now sits the sprawling campus of California State University at Long Beach.

Though the record of what the village name means is shrouded in haze like the Bay of Smokes, it is said to have an association with the word for “crowd”. We do know that the suffix “gna” means “place of”, as used in other Los Angeles area place names like “Cahuenga”—“place of the mountain”—or “Tujunga”—“place of the owl.” Even so, if Puvungna, or “the place where crowds gather”, is of uncertain provenance, it is still regarded as the most significant village documented by one of the most famous missionaries, Father Boscana.

The village of “Puvu” or Puvungna was known as the place where, according to Tongva legend, a great gathering took place to commemorate the creation myth of these local Coastal Shoshonean people. It is said that so many people would show up to the council from afar, that they would have to sleep outside the village limits, keeping warm by crowding together “in a ball.”

Father Boscana documented the shamanic religion of the local Indians in dramatic detail and drew parallels between the Tongva creation myth and Genesis, citing similarities in their descriptions of the formation of the elements. Other sources have spoken about links between the Tongva narrative and the Greek creation story as related by Hesiod, where the archaic Greeks, like the Tongva ancestors, were “acorn eaters”, and animals could talk and came out of the darkness. So, in the same place where ancient ancestors celebrated their stories of creation, the Academy now stands.

California poet Robinson Jeffers’ poem “Hands” memorializes an Indian rock art site deep in the Ventana Wilderness near Big Sur where a ceremonial cave is decorated with several hundred white handprints. He describes the aboriginal artists as speaking to us through time: “Look: we also are human; we had hands, not paws. All hail you people with the cleverer hands, our supplanters in the beautiful country; enjoy her a season, her beauty, and come down and be supplanted, for you also are human.”

Tribal peoples recount that the ancestral spirits are pleased when they hear song and witness the dancing and dramatic performances of human beings who still wear bodies. Our hands differentiate us from the four-leggeds. We are described in dated anthropological texts as “man the tool maker.” When we use our hands to create art, music, dance, and poetry, we recreate the world. We also have the opportunity to consciously pay homage to those who came before. If our lives are creative, then we can be driven by a purpose that honors the spirits of Puvungna, now known as Cal State Long Beach, or of Yangna, as the place now known as the City of Los Angeles was called by the people who lived here first. For in all of these places, the creation story is ongoing.

According to ethnographic sources, there is now a worldwide crisis of native languages going extinct. Linguist Michael Krauss, who has dedicated his career to making the public aware of how language is threatened, estimates that the number of oral languages assured of still being spoken by 2100 is 600, or just 10% of the present number. He further notes that about half of the 6,000 languages spoken on earth today are “moribund”, meaning “they are spoken only by adults who no longer teach them to the next generation.” The loss of a language is as catastrophic as the disappearance of a species. With it, we lose a piece of the whole.

Like the ancient Indian languages that are disappearing with the death of elders who are often the only remaining speakers, there are many place names whose meanings are now completely lost in time. One purpose we can find in our lives, then, is to define meaning once again for these places by making sure that we amuse, engage, and serve the spirits who were here first. Only then is there the possibility that they will take pity on us and return the gesture by answering with inspiration to light our unique moment in time.

Saturday, March 7, 2009

IT'S A SHORT FORM WORLD AFTER ALL


My eight-year-old son recently asked me when I started seeing in color. We were watching a black and white TV show on cable and had been talking about what some of my favorite shows were when I was growing up. This wasn’t my first close encounter with my children’s incredulity at my media shortcomings. Past incidents have included their disbelief that I grew up without videotape and DVDs. Vinyl recordings were also a revelation when I pulled some albums from my secret stash out of the garage and gently placed them on the altar of a new turntable.

Artifactual media can be a curio, or even hold a talismanic power, for newcomers. Sometimes new generations are beaten into submission through accidents of discovery or inter-generational wars of attrition. A major victory in my personal campaign in support of archaic media occurred last week when my teenager asked for advice on how to properly handle her new vinyl acquisition—an MGMT record. It was almost a cultural breakthrough, until it was marred when I had to transfer the record to a digital file because my son had used my new record player to do some scratching—only without the benefit of having a disc on the turntable, thus shredding another hard-to-find needle and the rubber platter mat.

When generational media worlds collide, minds are blown. In my case, I was captivated by my son’s perspective that before the advent of color televisions and what NBC called “living color”, we would all obviously have been seeing the world only in black and white. Looking at the Wall Street quants’ maze of arcane derivatives and other financial instruments, I sometimes wish the world could still be deciphered in black and white. But what is interesting about my son’s comment is that we all seem to take the media we grow up with for granted.

There is now a generation that has never known life without the Internet and mass game changers like the iPhone and Wii. More important, it seems, than changes in technology and distribution are the generational shifts that change the way consumers use media. This also leads to questions about where the mass market and Main Street have gone, and to a conversation I had last week with the most brilliant marketer I know.

Fred Seibert is a self-proclaimed “serial entrepreneur” who, among other things, was largely responsible for branding MTV and currently has several of the top-rated animated TV shows. But I don’t hold any of that against him, especially since these accomplishments don’t mean that he’s always right—even though visitors to his old office were warned by a large sign that they had best leave their opinions outside the door, because the person they would find inside was infallible.

Still, like the agent provocateur he is, Fred said, “The methodology to reach the mass market no longer exists.” Now, maybe I’m taking his observation out of context for the sake of this post, so I duly note that his comment originated with respect to the state of the music industry. But, we were also talking about how the television business was bound to follow suit sooner or later.

When I was watching an old episode of “The Honeymooners” on TV Land recently, the difference between the long form, mass market universe of yesteryear and today’s short form, micro media markets was thrown into high relief. A scene featuring a typical argument between Jackie Gleason and Audrey Meadows lasted for almost two minutes without interruption and used only one wide shot. The relationship of early television to stage performance is clear when watching this series, as well as other fifties classics like “The Jack Benny Program” and “Amos ’n’ Andy”. It’s no accident that live drama like CBS’s “Playhouse 90” made up a lot of ’50s TV fare.

In the ’60s, television scenes got shorter, influenced most likely by the tempo of rock and roll. With the introduction of MTV in the early ’80s, quick cutting and handheld techniques became the order of the day, and “scenes” lasted a matter of seconds, serving up music cuts instead of video edits, in turn influencing highly stylized network TV series like “Miami Vice”. Media critic and sci-fi writer Paul Levinson has offered a granular look, in Digital McLuhan, at the dwindling length of scenes on the small screen from earliest television through the ’90s. He also notes that, in a reversal of fortunes that Marshall McLuhan would have appreciated, many movies in the last two decades are remakes of classic TV shows—so many, in my view, that one wonders how many are left to dredge up in the archives. As Frank Zappa once said to me, “The world will end in nostalgia”.

ABC’s introduction last year of long-form downloads of its primetime hour dramas yielded a fascinating metric—Nielsen Digital measured some 40 million total downloads. But the average time viewed was—guess what? Three minutes. The consummation of this sea change toward short form was realized with the one-second Miller High Life commercial in this year’s Super Bowl. At $3 million per 30-second spot, it was also a relative bargain.

Still, Fox’s American Idol is reaching what is undeniably a huge mass audience, even when compared with the former power of top-rated shows from broadcast TV’s height such as “M*A*S*H”, “The Cosby Show”, and “Seinfeld”, which characteristically reached scores of millions of TV viewers. According to Entertainment Weekly, last Thursday’s Idol show attracted 21.2 million viewers, beating Survivor’s 12 million. If I’m a consumer brand trying to reach a mass market, then even a portion of the total TV universe on any given night still represents a viable methodology compared to the short form universe of the Internet. However, television advertising has never been proven to have a direct correspondence between commercials and purchase. In the television business, it’s all about growing brand awareness. Even so, Short Attention Span Theater has arrived, even if only as a relatively unmonetized consumer trend. While YouTube’s valuation is $1.5 billion, its 2008 revenues were $150 million, a paltry sum compared with the $65 billion TV ad business.

Despite this stark earnings contrast, Cynthia Turner reports that the overall Internet video audience is now 135 million strong. But a growing share of audience isn’t necessarily market share. It isn’t a question of size that matters, but of how this new online video medium works as distinct from others. Largely as a result of the Obama inaugural, YouTube was up after a flat December to 5.86 billion video streams in January, with over 100 million uniques. Paidcontent.org reported a week ago that Yahoo, MySpace, MTV.com, and YouTube are all considering eventual upgrades to HD as a way to keep up with broadcast. But the question presented by mass media is not a matter of how many streams, but of where the mainstream is. And what matters is not necessarily how many people are watching at any given time, nor even what they are watching, but how and for how long.

Appointment viewing on a fixed schedule was the original standard for broadcast television. Video and cable chipped away at this model, but it was the Internet and personalization that finally did it in. TV is literally background to my daughter’s generation, a complement to other multitasked media input. In the on-demand, VOD, PVR, short form universe, video consumption is not tied to time in the same way that hit TV shows once defined an evening, when families had to sit down together in front of the pixel campfire to catch their favorite show—or else miss it entirely.

Even though CBS’s March Madness is nearly sold out for online ads, it is unclear how the short form universe is reaching users in a meaningful way. Short form video may have the eventual power of narrowing the focus to very specific demographics. Consumer viewing habits will continue to morph. In a recent piece, Phil Swann asks whether Blockbuster will go away. Maybe, but my answer to Fred’s provocation is that TV is still the methodology to reach the mass market.

Audience share is transformed with the introduction of every new visual medium. But each medium has its own value proposition and feature set, differentiated from the others in both process and content. Movies didn’t replace radio, TV didn’t replace movies—and the Internet didn’t replace TV. The introduction of a new medium doesn’t replace extant forms, but displaces them by defining new audiences as well as cannibalizing old ones—and their power to do so is always based on how they increase value for the consumer.

The bigger question is what impact the generational shift of video consumers who have grown up in the short form universe will have on making the video stream the standard, with long form an occasional luxury seen at the movies or as PVR-saved fare of five- or ten-minute shows on future integrated online and offline "broadcast" networks. But in concentrating on the expanding video web, we are looking in the wrong direction. My prediction is that the mobile video web will be the definitive, disruptive platform to watch. Whatever happens, one thing is sure—it’s already a short form world after all, and our children will inevitably be faced with tough questions from their own kids, who won’t believe them when they roll out their saved iTunes playlists and talk about how cool HD and iPhones were.