Article from the book Change: 19 Key Essays on How the Internet Is Changing Our Lives

First the Media, Then Us: How the Internet Changed the Fundamental Nature of Communication and Its Relationship with the Audience


In just one generation the Internet changed the way we make and experience nearly all of media. Today the very act of consuming media creates an entirely new form of it: the social data layer that tells the story of what we like, what we watch, who and what we pay attention to, and our location when doing so.

The audience, once passive, is now cast in a more central and influential role than ever before. And like anyone suddenly thrust into the spotlight, we’ve been learning a lot, and fast.

This social data layer reveals so much about our behavior that it programs programmers as much as they program us. Writers for the blog Gawker watch real-time web consumption statistics on all of their posts—and they instantly learn how to craft content to best command an audience. The head programmer for the Fox television network similarly has a readout that gives an in-depth analysis of audience behavior, interest, and sentiment. In the run-up to the final episode of the American television drama Breaking Bad, the series was drawing up to 100,000 tweets a day, a clear indication that the audience was as interested in what it had to say as in what the producers were creating.

All this connected conversation is changing audiences as well. Like Narcissus, we are drawn to ourselves online and to the siren of ever-more social connections. In her book Alone Together, Sherry Turkle (2011) points out that at this time of maximum social connection, we may be experiencing fewer genuine connections than ever before. The renowned media theorist Marshall McLuhan (1968, 73) saw the potential for this more than 40 years ago when he observed that augmentation leads to amputation. In other words, in a car we don’t use our feet—we hit the road and our limbs go into limbo. With cell phones and social devices, we are connected to screens and virtually to friends worldwide, but we may forfeit an authentic connection to the world. Essentially, we arrive at Turkle’s “alone together” state.

In the past, one could turn the media off—put it down, go offline. Now that’s becoming the exception, and for many, an uncomfortable one. Suggest to a young person today that she go offline and she’ll ask, “Offline, what’s that?” or “Why am I being punished?” We are almost always connected to an Internet-enabled device, whether in the form of a smartphone, fitness monitor, car, or screen. We are augmented by sensors, signals, and servers that record vast amounts of data about how we lead our everyday lives, the people we know, the media we consume, and the information we seek. The media, in effect, follows us everywhere, and we’re becoming anesthetized to its presence.

It is jarring to realize that the implication of this total media environment was also anticipated more than 40 years ago by McLuhan. When he spoke of the “global village,” his point was not just that we’d be connected to one another. He was concerned that we’d all know each other’s business, that we’d lose a measure of privacy as a result of living in a world of such intimate awareness. McLuhan (1969) called this “retribalizing,” in the sense that modern media would lead us to mimic the behavior of tribal villages. Today, the effects of this phenomenon help define the media environment: we consciously manage ourselves as brands online; we are more concerned than ever with each other’s business; and we are more easily called out or shamed than in the bygone (and more anonymous) mass communication era.

We maintain deeply intimate relationships with our connected devices. Within minutes of waking up, most of us reach for a smartphone. We go on to check them 150 or more times throughout the day, spending all but two waking hours with a mobile device nearby (IDC 2013). As these devices become omnipresent, more and more data about our lives is nearly permanently stored on servers and made searchable by others (including private corporations and government agencies).

This idea that everything we do can be measured, quantified, and stored is a fundamental shift in the human condition. For thousands of years we’ve had the notion of accountability to an all-seeing, all-knowing God. He kept tabs on us, for our own salvation. It’s one of the things that made religion effective. Now, in just a few thousand days, we’ve deployed the actual all-seeing, all-knowing network here on earth—for purposes less lofty than His, and perhaps even more effective.

We are also in the midst of an unprecedented era of media invention. We’ve passed from the first web-based Internet to the always-connected post-PC world. We will soon find ourselves in an age of pervasive computing, where all devices and things in our built world will be connected and responsive, with the ability to collect and emit data. This has been called the Internet of Things.

The pace of technological change has been rapid in the recent past—and it is still accelerating. One set of numbers tells the story. In 1995, the Internet connected about 50 million devices. In 2011, the number of connections exceeded 4.3 billion (at the time roughly half of these were people and half were machines). We ran out of Internet addresses that year and are now adopting a new address mechanism called IPv6. This scheme will allow for about 340 billion billion billion billion unique IP addresses. That’s probably the largest number ever seriously used by mankind in the design of anything. (The universe has roughly 40 orders of magnitude more atoms than we have Internet addresses, but man didn’t invent the universe and for the purpose of this chapter it is not a communication medium, so we’ll move on.)
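To make that scale concrete, here is a quick back-of-the-envelope check (an illustrative sketch added here, not part of the original essay): IPv6 addresses are 128 bits long, so the address space holds 2^128 values, which is where the "340 billion billion billion billion" figure comes from.

```python
import math

# Back-of-the-envelope check of the address-space figures quoted above.
ipv4_addresses = 2 ** 32    # the exhausted 32-bit IPv4 space: ~4.3 billion addresses
ipv6_addresses = 2 ** 128   # the 128-bit IPv6 space

print(f"IPv4 addresses: {ipv4_addresses:.2e}")   # ~4.29e+09
print(f"IPv6 addresses: {ipv6_addresses:.2e}")   # ~3.40e+38, i.e. 340 billion billion billion billion

# The observable universe is commonly estimated to contain ~1e80 atoms,
# roughly 40 orders of magnitude more than there are IPv6 addresses.
print(f"Orders of magnitude from IPv6 space to atom count: {80 - math.log10(ipv6_addresses):.0f}")
```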

Here is a big number we will contend with, and soon: there will likely be one trillion Internet-connected devices in about 15 years. Nothing on earth will grow faster than this medium or the number of connected devices and the data they emit. Most of these devices will not be people, of course, but the impact of a trillion devices emitting signals and telling stories on our mediated world cannot be overstated.

To visualize the size of all this, imagine the volume of Internet connections in 1995 as the size of the Moon. The Internet of today would be the size of Earth. And the Internet in 15 years would be the size of giant Jupiter!

Exponential change like this matters because it points out how unreliable it is to predict how media will be used tomorrow. Examining the spotty record of past predictions is humbling and helps open our minds to the future.

In 1878, the year after he invented the phonograph, Thomas Edison had no idea how it would be used; or rather, he had scores of ideas—but he could not come up a priori with the killer application of his hardware. Edison was a shrewd inventor who kept meticulous notes. Here were his top 10 ideas for the use of the phonograph:

  1. Letter writing, and all kinds of dictation without the aid of a stenographer.
  2. Phonographic books, which will speak to blind people without effort on their part.
  3. The teaching of elocution.
  4. Music—the phonograph will undoubtedly be liberally devoted to music.
  5. The family record; preserving the sayings, the voices, and the last words of the dying members of the family, as of great men.
  6. Music boxes, toys, etc.—A doll which may speak, sing, cry or laugh may be promised our children for the Christmas holidays ensuing.
  7. Clocks, that should announce in speech the hour of the day, call you to lunch, send your lover home at ten, etc.
  8. The preservation of language by reproduction of our Washingtons, our Lincolns, our Gladstones.
  9. Educational purposes; such as preserving the instructions of a teacher so that the pupil can refer to them at any moment; or learn spelling lessons.
  10. The perfection or advancement of the telephone’s art by the phonograph, making that instrument an auxiliary in the transmission of permanent records.

He first attempted a business centered on stenographer-free letter writing. That failed, largely because it was a big threat to the incumbent player—stenographers. It would be years (and a few recapitalizations) later that music would emerge as the business of phonographs. And this was a business that survived for well over 100 years before cratering.

When I reflect on my own career, I see this pattern of trying to understand—“Exactly what is this anyway?”—constantly repeat itself. In 1993, I collaborated with Bill Gates (1995) as he wrote The Road Ahead. The book outlined what Gates believed would be implications of the personal computing revolution and envisioned a future profoundly impacted by the advent of what would become the Internet. At the time, we called this a “global information superhighway.”

I was working with Gates on envisioning the future of television. This was one year before the launch of the Netscape (then Mosaic) browser brought the World Wide Web to the masses. In 1993, we knew that in the coming years there would be broadband and new distribution channels to connected homes. But the idea that this would all be based on an open Internet eluded us completely. We understood what technology was coming down the pike. But we could not predict how it would be used, or that it would look so different from what we had grown accustomed to, which was centralized media companies delivering mass media content from the top down. In 1993 what we (and Al Gore) imagined was an “information superhighway”—Gates and I believed that this would be a means to deliver Hollywood content to the homes of connected people.

We understood that the Internet would be a means to pipe content to connected homes and to share information. But here’s what we missed:

  • User-Generated Anything. The idea that the audience, whom we had treated as mere consumers, would make their own content and fascinate one another with their own ideas, pictures, videos, feeds, and taste preferences (Likes) was fantastical. We knew people would publish content—this had been taking place on online bulletin boards and other services for years. But the idea that the public would be such a big part of the media equation simply did not make sense.
  • The Audience As Distributor, Curator, Arbiter. We assumed we’d all be able to find content because someone big like Microsoft would publish it. The idea that what the audience liked or paid attention to would itself be a key factor in distribution was similarly unfathomable. It would take the invention of Google and its PageRank algorithm (a toy sketch of the idea appears after this list) to make clear that what everyone was paying attention to was one of the most important (and disruptive) tools in all of media. In the early 2000s, the rise of social media and then social networks would make this idea central.
  • The Long Tail. In retrospect, it seems obvious: in a world of record shops and video rental stores it cost money to stock physical merchandise. Those economics meant stocking hits was more cost-effective than keeping less popular content on the shelves. But online, where the entire world’s content can be kept on servers, the economics flip: unpopular content is no more expensive to provision than a blockbuster movie. As a result, audiences would fracture and find even the most obscure content online more easily than they could at Blockbuster or Borders. This idea was first floated by Clay Shirky in 2003, and then popularized by Wired’s Chris Anderson in 2004. Amazon is arguably the company that has capitalized on the trend most. The long tail has been one of the most pervasive and disruptive impacts of the Internet. For not only has it made anything available, but in disintermediating traditional distribution channels it has concentrated power in the hands of the new media giants of today: Apple, Amazon, Google, and Facebook. (And Microsoft is still struggling to be a relevant actor in this arena.)
  • The Open Internet. We missed that the architecture of the Internet would be open and power would be distributed. That any one node could be a server or a directory was not how industry or the media business, both hierarchical, had worked. The Internet was crafted for military and academic purposes, and coded into it was a very specific value set about openness with no central point of control. This openness has been central to the rapid growth of all forms of new media. Both diversity and openness have defined the media environment for the last generation. This was no accident—it was an act of willful design, not technological determinism. Bob Kahn at DARPA and the team at BBN that crafted the Internet had in mind a specific and radical design. In fact, they first approached AT&T to help create the precursor of the Internet and the American communication giant refused—they wanted no part in building a massive network that they couldn’t control. They were right: not only was it nearly impossible to control, but it devoured the telephony business. But as today’s net neutrality battles point out, the effort to reassert control over the Internet is very real. For 50 years the Cold War was the major ideological battle between the free world and the totalitarian world. Today, it’s a battle for openness on the Internet. The issues—political and economic at their core—continue to underpin the nature of media on the Internet.
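As promised above, here is a toy illustration of the PageRank idea: a page matters because pages that matter link to it. This is only a sketch over an invented four-page web, not Google’s actual implementation; the page names, damping factor, and iteration count are arbitrary choices for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank. `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}              # start everyone with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                        # a dangling page spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share       # pass a share of rank along each link
        rank = new_rank
    return rank

# Hypothetical mini-web: the page everyone points to ends up ranked highest.
toy_web = {
    "news": ["blog"],
    "blog": ["news", "shop"],
    "shop": ["news"],
    "wiki": ["news", "blog"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page:5s} {score:.3f}")
```

The point of the sketch is the disruption the bullet describes: the ranking is computed from the audience’s own linking and attention behavior, not handed down by a publisher.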

The Internet Gives Television a Second Act

New media always change the media that came before them, though often in unexpected ways. When television was born, pundits predicted it would be the death of the book. (It wasn’t.) The death of television was a widely predicted outcome of Internet distribution, the long tail, new content creators, and user-generated media. This caused fear in Hollywood and a certain delight, even schadenfreude, in Silicon Valley. At conferences, technology executives took great pleasure in taunting old media with the Internet’s novel forms and reminding the establishment that “it is only a matter of time.” New media would fracture audiences, and social media would hijack the public’s attention. The Internet was set to unleash an attention-deficit-disorder epidemic, leading viewers away from traditional television programming en masse. Yet television is doing better than ever. What happened?

As it turns out, the most widely discussed topic on social media is television. One third of Twitter users in the United States post about television (Bauder 2012), and more than 10 percent of all tweets are directly related to television programming (Thornton 2013). New forms of content (as well as new distribution methods) have increased the primacy of great programming, not diminished it. Competing platforms from Google, Apple, Amazon, Netflix, and others have meant more competition for both broadcast and cable television networks—and more power for the program creators whose content all the new distributors are fighting over.

Despite the volume of content accessible via online platforms—100 hours of video are uploaded to YouTube every minute—people still spend much of their time watching television, and television programming continues to reach a large majority of the population in developed countries. In the United States, people consume an average of 4 hours and 39 minutes of television every day (Stelter 2012). In the United Kingdom, nearly 54.2 million people (or about 95 percent of the population above the age of four) watch television in a given week (Deloitte 2012). Thus, it appears that the “demise of television” is far from imminent (Khurana 2012).

In fact, television is better than it has ever been. Few predicted, even five years ago, that we would find ourselves in the middle of a new golden age in television. There is more content vying for our attention than ever before, and yet a number of rich, complex, and critically acclaimed series have emerged. Shows like Heroes, Mad Men, Breaking Bad, Game of Thrones, and Homeland are a testament to the success with which television has adapted to a new and challenging climate.

Networks are now developing niche shows for smaller audiences, and they thrive on distribution and redistribution through new platforms. Hulu, Netflix, YouTube, and HBO GO have pioneered new forms of viewing and served as the catalyst for innovative business deals. The practice of binge viewing, in which we watch an entire season (or more) of a program in a short amount of time, is a product of on-demand streaming sites and social media. Before, viewers had to consume television episodes as they aired or wait for syndication. Boxed DVD seasons were another way that audiences could consume many episodes at once, but this often meant waiting for networks to trickle out seasons spaced over time. Now, networks are pushing whole seasons to platforms such as Netflix at once. With enough spare time, one can digest a whole series in an extremely condensed time frame.

This has changed not only our viewing habits, but also the nature of television content. Screenwriters are now able to develop deeper and more complex storylines than ever before. Where once lengthy, complex, and involved storylines were the domain of video games, we now see this type of storytelling in drama series with some regularity. In addition, television shows are now constructed differently. As audiences become more conscious of the media and media creators, we find that programming is much more self-referential. Jokes on shows like The Simpsons, Family Guy, 30 Rock, and The Daily Show are often jokes about the media.

On-demand streaming is not the only significant change in how we consume television. There has also been a tremendous shift in how we engage with television programming and how we interact with one another around it.

During the early decades of television, television viewing was a scheduled activity that drew groups of people together in both private homes and public spaces. The programming served as the impetus for such gatherings, and television watching was the primary activity of those who were seated in living rooms or stood before television sets in department stores or bars. Television continued to serve as a group medium through the 1960s and 1970s, but technological innovations ultimately transformed viewer behavior. The remote control, the videotape, the DVR, and mobile devices have led people to consume television content in greater quantities, but they do so increasingly in isolation. Once a highly anticipated social event, television programming is now an omnipresent environmental factor.

As television moved from a communal appointment medium to an individual activity initiated on demand, the community aspect of television has moved to the Internet. We have recreated the social function of television, which was once confined to living rooms, online—the conversation about television has expanded to a global level on social networking sites.

The sharp rise in multiscreen consumption is perhaps one of the most significant changes in modern media consumption, and has been a source of both excitement and concern among television network and technology executives alike. This form of media multitasking, in which a viewer engages with two or more screened devices at once, now accounts for 41 percent of time spent in front of television screens (Moses 2012). More than 60 percent of tablet users (Johnson 2012) and nearly 90 percent of smartphone users (Nielsen 2012) report watching television while using their devices.

Currently, television viewers are more likely to engage with content about television programming (such as Tweets or Facebook status updates) on complementary devices than they are to consume supplementary programming (such as simulcast sports footage) on a second screen. What is clear is that even if we are watching television in isolation, we are not watching alone.

Even when we’re alone, we often watch television with friends. Some 60 percent of viewers watch TV while also using a social network. Of this group, 40 percent discuss what they are currently watching on television via social networks (Ericsson 2012). More than half of 16 to 24-year-olds regularly use complementary devices to communicate with others via messaging, e-mail, Facebook, or Twitter about programs being watched on television (Ericsson 2012).

With all of this online communication, of course, comes data. With exacting precision, Twitter can monitor what causes viewers to post about a given program. During the 2011 MTV Video Music Awards, a performance by Jay-Z and Kanye West generated approximately 70,000 tweets per minute (Twitter 2013). Later in the program, the beginning of a performance by Beyoncé generated more than 90,000 tweets per minute. Before she exited the stage, the superstar revealed her pregnancy by unbuttoning her costume. Tweets spiked at 8,868 per second, shattering records set on the social network shortly after such significant events as the resignation of Steve Jobs and the death of Osama Bin Laden (Hernandez 2011).

It is clear that television programming drives social media interaction. But do tweets drive consumers to tune in to a particular program? A report by Nielsen (2013) suggests that there is a two-way causal relationship between tuning in for a broadcast program and the Twitter conversation about that particular program. In nearly half of 221 primetime episodes analyzed in the study, higher levels of tweeting corresponded with additional viewers tuning in to the programming. The report also showed that the volume of tweets sent about a particular program caused significant changes in ratings among nearly 30 percent of the episodes.

The second-screen conversation about television programming is not limited to Twitter. Trendrr (2013), a social networking data analysis platform, recorded five times as much second-screen Facebook activity during one week in May 2013 as on all other social networks combined. Facebook recently released tools that will allow partner networks, including CNN and NBC, to better understand second-screen conversation taking place on the social network as it happens (Gross 2013). Using these tools, it is now possible to break down the number of Facebook posts that mention a certain term during a given time frame.

This real-time data—about who is watching television, where they are watching it from, and what they are saying about it—is of interest not just to television executives and advertisers, but the audience, too. There are several drivers for social television watching behavior, including not wanting to watch alone and the desire to connect with others (Ericsson 2012). Beyond connecting with the audience at large, dual-screen television viewers report using social networks to seek additional information about the program they are watching and to validate their opinions against a public sample.

I’ve witnessed times in my own life when watching TV alone became unacceptable. To make my viewing experience tolerable, I needed to lean on the sensibility of the rest of the viewing audience. Moments like these changed my relationship to the medium of television forever.

In January 2009, I watched the inauguration of President Barack Obama on television along with 37.8 million other Americans. As Chief Justice John Roberts administered the oath of office, he strayed from the wording specified in the United States Constitution. I recognized that something had gone wrong—the president and the chief justice flubbed the oath? How could that be? What happened? I immediately turned to Twitter—and watched as everyone else was having the same instantaneous reaction. The audience provided context. I knew what was going on.

Twitter was equally useful to me during Super Bowl XLV when the Black Eyed Peas performed at the halftime show. The pop stars descended from the rafters of Cowboys Stadium and launched into a rendition of their hit song “I Gotta Feeling.” It sounded awful. I turned to my girlfriend in dismay: “There is something wrong with the television. My speakers must have blown! There is no way that a performance during the most-watched television event of all time sounds this horrible.” After tinkering with my sound system to no avail, I thought, “Maybe it’s not me. Could it be? Do they really sound this bad?” A quick check of Twitter allayed my fears of technical difficulties—yes, the Black Eyed Peas sounded terrible. My sound system was fine.

As the level of comfort with and reliance upon multiscreen media consumption grows among audiences, content producers are developing rich second-screen experiences for audiences that enhance the viewing experience.

For example, the Lifetime channel launched a substantial second-screen engagement for the 12th season of reality fashion competition Project Runway (Kondolojy 2013). By visiting playrunway.com during live broadcasts of the show, fans could vote in opinion polls and see results displayed instantly on their television screens. In addition to interactive voting, fans could access short-form video, blogs, and photo galleries via mobile, tablet, and desktop devices.

There are indications that second-screen consumption will move beyond the living room and into venues like movie theaters and sports stadiums. In connection with the theatrical rerelease of the 1989 classic The Little Mermaid, Disney has created an iPad app called “Second Screen Live” that will allow moviegoers to play games, compete with fellow audience members, and sing along with the film’s score from their theater seats (Stedman 2013). In 2014, Major League Baseball will launch an application for wearable computing device Google Glass that will display real-time statistics to fans at baseball stadiums (Thornburgh 2013).

Music: Reworked, Redistributed, and Re-Experienced Courtesy of the Internet

The Internet has also completely transformed the way music is distributed and experienced. In less than a decade, physical media (the LP and the CD) gave way to the MP3. Less than a decade after that, cloud-based music services and social sharing have become the norm. These shifts took place despite a music industry that did all it could to resist the digital revolution—until after it had already happened! The shareable, downloadable MP3 surfaced on the early web of the mid-1990s, and the music industry largely failed to recognize its potential. By the early 2000s, the Recording Industry Association of America had filed high-profile lawsuits against peer-to-peer file-sharing services like Napster and LimeWire (as well as private individuals caught downloading music via their networks). Total revenue from music sales in the United States plummeted from $14.6 billion in 1999 to $6.3 billion in just ten years (Goldman 2010).

The truth was inescapable: its unwillingness to adopt new distribution platforms had badly hurt the music industry’s bottom line. Television (having watched the music debacle) adjusted far better to the realities of the content business in the digital age. But the recording industry was forced to catch up to its audience, which was already getting much of its music online (legally or otherwise). Only in recent years did major labels agree to distribution deals with cloud-streaming services including Spotify, Rdio, iHeartRadio, and MOG. The music industry has experienced a slight increase in revenues in the past year, which can be attributed to both digital music sales and streaming royalties (Faughnder 2013).

Ironically, what the music industry fought so hard to prevent (free music and sharing) in the early days of the web is exactly what they ended up with today. There is more music available online now than ever before, and much of it is available for free.

Applications like Spotify and Pandora give users access to vast catalogs of recorded music, and sites like SoundCloud and YouTube have enabled a new generation of artists to distribute their music with ease. There is also a social layer to many music services. Their sites and applications are designed to allow users to share their favorite songs, albums, and artists with one another. Spotify, SoundCloud, and YouTube (among others) enable playlist sharing.

The rapid evolution of online music platforms has led to fundamental changes in the way we interact with music. Discovering and digesting music has become almost frictionless. Being able to tell Pandora what you like and have it generate a personalized radio station tailored to your tastes is not only more convenient than what came before it, it’s a qualitatively different medium. Gone are the days when learning about a new artist required flipping through the pages of a magazine (not to mention through stacks of albums at the record store).

As a kid I didn’t have much of a popular music collection, which was somewhat traumatic whenever it came to throwing a party or having friends over. The cool kids had collections; the rest of us didn’t. Telling friends to bring all their LPs over for the night didn’t make a lot of sense growing up in New York City, where they’d have to drag them along in a taxi or public bus. Fast forward to 2011. I was hosting a cocktail party at my home in San Francisco, which became an experiment in observing the effect of different kinds of Internet music services. In the kitchen, I played music via an iPod that contained songs and albums I had purchased over the years. (And my collection still was not as good as my cool friends’.) In the living room, I streamed music via the Pandora app on my iPhone. Guests would pick stations, skip songs, or add variety as the night went on. Upstairs, I ran Spotify from my laptop. I had followed, as the service allows you to do, two friends whose taste I really admired—a DJ from New York, and a young woman from the Bay Area who frequently posted pictures of herself at music festivals to Facebook. By playing a few of their playlists, I had created the ultimate party soundtrack. I came across as a supremely hip host, without having to curate the music myself. Ultimately, everyone gravitated upstairs to dance to my Spotify soundtrack.

The iPod, Pandora, and Spotify all allowed me to digitally deliver music to my guests. However, each delivery device is fundamentally different. Adding music to an iPod is far from a frictionless process. I had purchased the songs on my iPod over the course of several years, and to discover this music I depended on word of mouth from friends or the once-rudimentary recommendations of the iTunes store. Before the introduction of iCloud in 2011, users had to upload songs from their iTunes library to an iPod or iPhone, a process that took time (and, depending on the size of a user’s library, required consideration of storage constraints).

With Pandora came access to a huge volume of music. The Internet radio station boasts a catalog of more than 800,000 tracks from 80,000 artists. And it is a learning system that becomes educated about users’ tastes over time. The Music Genome Project is at the core of Pandora technology. What was once a graduate student research project became an effort to “capture the essence of music at the fundamental level.” Using almost 400 attributes to describe and code songs, and a complex mathematical algorithm to organize them, Pandora sought to generate stations that could respond to a listener’s taste and other indicators (such as the “thumbs down,” which would prevent a song from being played on a particular station again).
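Pandora’s actual attributes and matching algorithm are proprietary, but the general approach the paragraph describes can be sketched in a few lines (the attribute names, songs, and scores below are invented purely for illustration): code each song as a vector of attributes, then steer the station toward songs that sit close to the seed while skipping anything the listener has thumbed down.

```python
import math

# Each song coded as attribute scores on a 0-1 scale (hypothetical values).
CATALOG = {
    "Song A": {"tempo": 0.9, "distortion": 0.8, "vocals": 0.6, "acoustic": 0.1},
    "Song B": {"tempo": 0.3, "distortion": 0.1, "vocals": 0.9, "acoustic": 0.8},
    "Song C": {"tempo": 0.8, "distortion": 0.7, "vocals": 0.5, "acoustic": 0.2},
}

def similarity(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norms = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norms if norms else 0.0

def next_track(seed_song, thumbed_down):
    """Pick the catalog song most similar to the seed, skipping thumbed-down songs."""
    seed = CATALOG[seed_song]
    candidates = [s for s in CATALOG if s != seed_song and s not in thumbed_down]
    return max(candidates, key=lambda s: similarity(seed, CATALOG[s]))

# A station seeded with "Song A" would normally play the similar "Song C" next,
# but a thumbs-down steers it elsewhere.
print(next_track("Song A", thumbed_down=set()))        # -> Song C
print(next_track("Song A", thumbed_down={"Song C"}))   # -> Song B
```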

Spotify has a catalog of nearly 20 million songs. While the size of the service’s catalog is one of its major strengths, so too are its social features. The service, which launched in the United States in 2011 after lengthy negotiations with the major record labels, allowed users to publish their listening activity to Facebook and Twitter. The desktop player enabled users to follow one another, and make public playlists to which others could subscribe. In addition, users could message each other playlists. The sharing of Spotify playlists between connected users mimicked the swapping of mixtape cassettes in the late eighties and early nineties.

All of these are examples of how what the audience creates is a growing part of the creative process.

In the heyday of the album, the exact flow of one song to the next and the overall effect was the supreme expression of artistic design and control. It wasn’t only the songs—the album represented 144 square inches of cover art and often many interior pages of liner notes in which to build a strong experience and relationship and story for your fans. It was a major advance over the 45, which provided a much smaller opportunity for a relationship with the band. With the arrival of MP3s, all of this was undone. Because we bought only the songs we were interested in, not only was the artist making less money, but he had also lost control of what we were listening to and in what order. It didn’t much matter, because we were busy putting together playlists and mixtapes where we (the audience) were in charge of the listening experience.

The Internet has given us many tools that allow us to personalize the listening experience. More than that, listening to music has increasingly become a personal activity, one that is done in isolation. The simplicity with which music can be consumed online has changed music from an immersive medium into a more ambient one, easily taken for granted.

Interestingly, the rise in personal consumption of music (via MP3 and the cloud) has coincided with a sharp rise in festival culture. Now more than ever, audiences seek to be together—whether in Indio, California for Coachella; Black Rock City, Nevada for Burning Man; Chicago, Illinois for Lollapalooza; or Miami, Florida for the Ultra Music Festival—to experience music as a collective group.

At a time when we collectively listen to billions of hours of streamed music each month, nothing compels us more strongly than the opportunity to come together, outdoors, often outside of cell phone range, to bask in performances by our favorite artists. Festival lineups are stacked with independent artists and superstars alike. Interestingly, a lineup is not unlike a long playlist on iTunes. There is no way to catch every performance at South by Southwest or Electric Daisy Carnival—but there is comfort in knowing that many of your favorite artists are there in one place.

This has also proven out economically. At a time when selling recorded music has become ever more challenging, the business of live music is experiencing a renaissance. In 2013, both weekend-long installments of the Coachella festival sold out in less than 20 minutes and raked in $47.3 million in revenue (Shoup 2013). The rise of festivals (there is now at least one in every U.S. state) is a response to the Internet having made the act of consuming recorded music more ambient and banal than ever before, while creating the need for greater social and immersive experiences.

At the core of going to a music festival or listening to The White Album with a group of friends is the need to experience music collectively. It is a realization that perhaps the most inspiring and rousing element of music is not the song itself, but our collective human experience of it.

Today, as the audience is restlessly making its own media, it is also learning fast that with new media come new rules and new exceptions. Media confer power on the formerly passive audience, and with that comes new responsibilities.

This was made startlingly evident in the wake of the April 15, 2013 Boston Marathon bombings. At five o’clock in the evening on April 18, the FBI released a photo of one of the suspects and asked the public for help in identifying him. Hours later, the Facebook page of Sunil Tripathi, a student who bore a resemblance to the suspect and had been reported missing, was posted to the social news site Reddit. Word spread that this was the bomber. Within hours the story was amplified by the Internet news site BuzzFeed and tweeted to its 100,000 followers. Only, Tripathi had nothing to do with the crime. His worried family had created the Facebook page to help find their missing son. Over the next few hours Tripathi’s family received hundreds of death threats and anti-Islamic messages, until the Facebook page was shut down.

The audience was making media, and spontaneously turning rumors into what appeared to be facts but weren’t, and with such velocity that facts were knocked out of the news cycle for hours that day (Kang 2013).

Four days later, an editor of Reddit posted to the site’s blog a frank self-examination about crowd-sourced investigations and a reflection on the power of new media:

This crisis has reminded all of us of the fragility of people’s lives and the importance of our communities, online as well as offline. These communities and lives are now interconnected in an unprecedented way. Especially when the stakes are high we must strive to show good judgement and solidarity. One of the greatest strengths of decentralized, self-organizing groups is the ability to quickly incorporate feedback and adapt. reddit was born in the Boston area (Medford, MA to be precise). After this week, which showed the best and worst of reddit’s potential, we hope that Boston will also be where reddit learns to be sensitive of its own power.

(erik [hueypriest] 2013)

We are now able to surround ourselves with news that conforms to our views. We collect friends whose tastes and opinions are our own tastes and opinions. The diversity of the Internet can ironically make us less diverse. Our new media are immersive, seductive, and addictive. We need only turn to today’s headlines to see how this plays out.

On October 8, 2013, a gunman entered a crowded San Francisco commuter train and drew a .45-caliber pistol. He raised his weapon, put it down to wipe his nose, and then took aim at the passengers.

None of the passengers noticed because they were attending to something far more interesting than present reality. They were subsumed by their smartphones and by the network beyond. These were among the most connected commuters in all of history. On the other side of their little screens, passengers had access to much of the world’s media and many of the planet’s people. They were not especially connected to the moment or to one another. They were somewhere else.

Only when the gunman opened fire did anyone look up. By then, 20-year-old Justin Valdez was mortally wounded. The only witness to this event, which took place on a public train, in front of dozens of people, was a security camera, which captured the scene of connected bliss interrupted. The San Francisco Chronicle reported the district attorney’s stunned reaction:

“These weren’t concealed movements—the gun is very clear,” said District Attorney George Gascón. “These people are in very close proximity with him, and nobody sees this. They’re just so engrossed, texting and reading and whatnot. They’re completely oblivious of their surroundings.”

(Ho 2013)

Gascón said that what happened on the light-rail car speaks to a larger dilemma of the digital age. As glowing screens dominate the public sphere, people seem more and more inclined to become engrossed, whether they are in a car or a train or are strolling through an intersection.

In 1968, Marshall McLuhan observed how completely new media work us over. In War and Peace in the Global Village he wrote, “Every new technological innovation is a literal amputation of ourselves in order that it may be amplified and manipulated for social power and action” (1968, 73).

We’ve arrived in full at an always-on, hyper-connected world. A network that connects us together yet can disconnect us from our present reality. An Internet that grants us the ability to create and remix and express ourselves as never before. One that has conferred on us responsibilities and implications we are only beginning to understand. The most powerful tools in media history are not the province of gods, or moguls, but available to practically all mankind. The media has become a two-way contact sport that all of us play. And because the media is us, we share a vital interest and responsibility in the world we create with this, our extraordinary Internet.

References

Anderson, Chris.
“The Long Tail.” Wired, 12.10 (October 2004). http://www.wired.com/wired/archive/12.10/tail.html

Bauder, David.
“Study Shows Growth in Second Screen Users.” Associated Press, December 3, 2012. http://bigstory.ap.org/article/study-shows-growth-second-screen-users

Deloitte.
“TV: Why? Perspectives on TV: Dual-Screen, Catch-Up, Connected TV, Advertising, and Why People Watch TV.” MediaGuardian Edinburgh International Television Festival, 2012. http://www.deloitte.com/assets/Dcom-UnitedKingdom/Local%20Assets/Documents/Industries/TMT/uk-tmt-tv-why-perspectives-on-uk-tv.pdf

Edison, Thomas.
“The Phonograph and its Future.” North American Review, no. 126 (May–June 1878).

Ericsson.
“TV and Video: An Analysis of Evolving Consumer Habits.” An Ericsson Consumer Insight Summary Report, August 2012. http://www.ericsson.com/res/docs/2012/consumerlab/tv_video_consumerlab_report.pdf

Erik [hueypriest].
“Reflections on the Recent Boston Crisis.” Reddit, April 22, 2013. http://blog.reddit.com/2013/04/reflections-on-recent-boston-crisis.html

Faughnder, Ryan.
“Global Digital Music Revenue To Reach $11.6 Billion in 2016.” Los Angeles Times, July 24, 2013. http://articles.latimes.com/2013/jul/24/entertainment/la-et-ct-digital-global-music-industry-20130724

Gates, Bill, with Nathan Myhrvold and Peter Rinearson.
The Road Ahead. New York: Viking Penguin, 1995.

Goldman, David.
“Music’s Lost Decade: Sales Cut in Half.” CNN Money, February 3, 2010. http://money.cnn.com/2010/02/02/news/companies/napster_music_industry/

Gross, Doug.
“CNN Among First with New Facebook Data-Sharing Tools.” CNN.com, September 9, 2013. http://www.cnn.com/2013/09/09/tech/social-media/facebook-media-data/index.html

Hernandez, Brian Anthony.
“Beyonce’s Baby Inspired More Tweets Per Second Than Steve Jobs’ Passing.” Mashable, December 6, 2011. http://mashable.com/2011/12/06/tweets-per-second-2011/

Ho, Vivian.
“Absorbed Device Users Oblivious to Danger.” SFGate, October 7, 2013. http://www.sfgate.com/crime/article/Absorbed-device-users-oblivious-to-danger-4876709.php?cmpid=twitter

IDC (International Data Corporation).
“Always Connected: How Smartphones and Social Keep Us Engaged.” An IDC Research Report, Sponsored by Facebook. 2013. https://fb-public.app.box.com/s/3iq5x6uwnqtq7ki4q8wk

Johnson, Lauren.
“63pc of Tablet Owners Use Device while Watching TV: Study.” Mobile Marketer, September 17, 2012. http://www.mobilemarketer.com/cms/news/research/13781.html

Kang, Jay Caspian.
“Should Reddit Be Blamed for the Spreading of a Smear?” New York Times, July 25, 2013. http://www.nytimes.com/2013/07/28/magazine/should-reddit-be-blamed-for-the-spreading-of-a-smear.html?pagewanted=1&_r=1

Khurana, Ajeet.
“The End of the Television.” Technorati, February 11, 2012. http://technorati.com/entertainment/tv/article/the-end-of-the-television/

Kondolojy, Amanda.
“Project Runway To Reveal Biggest Second-Screen Interactivity in Its History for Launch of Season 12.” TV by the Numbers, July 17, 2013. http://tvbythenumbers.zap2it.com/2013/07/17/project-runway-to-reveal-biggest-second-screen-interactivity-in-its-history-for-launch-of-season-12/192387/

McLuhan, Marshall.
“The Playboy Interview.” Playboy Magazine, March 1969.

———. War and Peace in the Global Village. New York: McGraw-Hill, 1968.

Moses, Lucia.
“Data Points: Two-Screen Viewing.” AdWeek. November 7, 2012. http://www.adweek.com/news/technology/data-points-two-screen-viewing-145014

Nielsen Company.
“Double Vision: Global Trends in Tablet and Smartphone Use while Watching TV.” Nielsen Newswire, April 5, 2012. http://www.nielsen.com/us/en/newswire/2012/double-vision-global-trends-in-tablet-and-smartphone-use-while-watching-tv.html

———. “The Follow-Back: Understanding the Two-Way Causal Influence Between Twitter Activity and TV Viewership.” Nielsen Newswire, August 6, 2013. http://www.nielsen.com/us/en/newswire/2013/the-follow-back–understanding-the-two-way-causal-influence-betw.html

Stelter, Brian.
“Youths Are Watching, but Less Often on TV.” New York Times, February 8, 2012. http://www.nytimes.com/2012/02/09/business/media/young-people-are-watching-but-less-often-on-tv.html?_r=2&ref=technology&

Shirky, Clay.
“Power Laws, Weblogs, and Inequality.” First published on the “Networks, Economics, and Culture” mailing list, February 8, 2003. http://shirky.com/writings/powerlaw_weblog.html

Shoup, Brad.
“Deconstructing: Coachella and the Music Festival Industry.” Stereogum, January 28, 2013. http://www.stereogum.com/1245041/deconstructing-coachella-and-the-music-festival-industry/top-stories/lead-story/

Stedman, Alex.
“Disney Invites Kids to Bring iPads to Theaters for ‘The Little Mermaid’ Re-Release.” Variety, September 11, 2013. http://variety.com/2013/digital/news/disney-invites-kids-to-bring-ipads-to-theaters-for-the-little-mermaid-re-release-1200608309/

Thornburgh, Tristan.
“Blue from Google Glass Allows You to Get Real-Time Info at Baseball Games.” Bleacher Report, September 12, 2013. http://bleacherreport.com/articles/1771741-blue-from-google-glass-allows-you-to-get-real-time-info-at-baseball-games

Thornton, Kirby.
“Nielsen Engages Twitter for TV Insights.” Media Is Power, March 21, 2013. http://www.mediaispower.com/nielsen-engages-twitter-for-tv-insights/

Trendrr.
“New Facebook Data Strengthens Tools for Measuring Second-Screen Activity.” Trendrr (blog), July 23, 2013. https://blog.trendrr.com/2013/07/23/new-facebook-data-strengthens-tools-for-measuring-second-screen-activity/

Turkle, Sherry.
Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.

Twitter.
“Twitter on TV: A Producer’s Guide.” Twitter. https://dev.twitter.com/media/twitter-tv (accessed October 9, 2013).

YouTube.
“Press: Statistics.” YouTube. http://www.youtube.com/yt/press/statistics.html (accessed October 9, 2013).
