Saturday, March 19, 2011
The application to news follows directly. Say you have 500 people in your newsroom. These 500 people are able to procure, analyze, package, and distribute some set of information with some set of competitive advantage. Before, in the broadcast era, this was fine, as the cost of distribution meant only a small group of organizations could rally the resources to do it. There’d be slight variation in the quality and type of products offered (accounting for differences in viewership), but players couldn’t specialize, as the smaller audiences this would cater to couldn’t support the huge cost of the necessary delivery infrastructure. Today, by contrast, an independent distribution layer has emerged, the Internet, and anyone can access it. With production and marketing costs falling in parallel, major news publishers now compete not with a few other firms, but with an entirely new mode of production: with everybody... with network production.
On the web, things switch from a 'filter-then-publish' mode to the opposite, 'publish-then-filter' one. Instead of a couple companies trying to assemble full packages of finished content, everybody just throws everything out there. Most of it’s of course crap. But the best can be quite great, even better than supposedly premium content. And more importantly, individual pieces often tend to be better than that premium content, at least for some particular person, at some particular time. When you want it, tech news from the tech guys (or directly from the relevant company). When you want it, fashion news from the fashion guys (or from the model herself). Live video of Sri Lanka during the tsunami from someone who's actually there. Policy analysis from the retired DoD wonk who happens to have worked on the legislation for forty years. And the collection of all of these tailored to your preferences by algorithmic behavioral and social aggregators.
I believe there’s a solution though. The tagline is NYTimes meets World of Warcraft. It’s in fact a solution surprisingly similar to what I think needs to take place on the entertainment side. The idea is that if publishers want to compete with networks of production, they can’t do it alone. They need somehow to produce tons more of each of the services they provide (information discovery, analysis, presentation, and distribution) so as to have options available that will line up with each individual’s preferences, but somehow without adding to cost. How do they do this? They get others to do it for them! They leverage their brands, their loudspeakers, and those unique, highest-quality services only they can provide to create a community of collaborative production, augmenting the value they create with value created by others, including both users and fellow suppliers. They become a network of production themselves. The closest approximation today, I’d think, would be something like what HuffPo drives toward. Or something -- differently monetized, and in the news rather than entertainment space -- like this example of Jimmy Fallon crowdsourcing development of show material via Twitter: http://www.youtube.com/watch?v=nouZPPtDibE
By shifting from a reporting outlet to a reporting community, a publisher is able to tap the vast resources embedded in its user base. With something like CNN’s iReport UGC platform, the company creates a coverage mesh far more extensive than it could achieve on its own – when a disaster strikes anywhere in the world, it now effectively has a team on the ground already. Community contributions aren’t limited though to bottom-end UGC. HuffPo taps its broad network of domain experts to produce professional, in-depth analysis for free. TechCrunch uses its community to help with investigation, relying on its passionate user base of industry insiders to contribute leads and even help flesh out stories. In its form of so-called process journalism, the blog releases stories early, is transparent about uncertainty, and then lets readers help source and draw out details. In so doing, it’s not able to produce truly differentiated scoops, but it finds a way to compete with the rapid ‘shoot first, ask questions later’ (publish-then-filter) cycle of independent publishers and network production generally.
Other benefits come with the community approach as well. In the case of TechCrunch and other niche news communities (Style, Politico…), audiences are more targeted and therefore can be marketed to more efficiently – billboards in Silicon Valley, conference sponsorship, etc. And with any community production effort, publishers capture the benefit of viral social marketing. Perhaps more importantly though, the community approach opens up new avenues for revenue generation. Advertisers will pay premiums to reach audiences they believe are deeply engaged – through simple display media, and even more through the unique forms of deep brand engagement user participation makes possible (‘best news photo submission contest, sponsored by Nikon’). Creating participation meanwhile lets publishers move beyond advertising alone, allowing them to segment and more fully monetize their audience with offerings to more passionate community members like paid content, virtual goods, merchandise, and conferences.
Graphic adapted from this excellent post by Kim-Mai Cutler.
Such distributed production should also of course involve collaboration beyond a community’s walls. Inbound, this means pulling in the best content the web has to offer, a strategy HuffPo and other aggregators have successfully demonstrated. Outbound, it means opening your differentiated assets and tools to external value creation (thesis part II). Embeddable players are a perfect example here – share and monetize the video, and let other people build and monetize value around it. But news organizations can go further, releasing content in more remixable formats, perhaps via API. For differentiated material (including archive content), publishers can license access or pair content with robust ad network services. They could also build and release in-house tools to support external creation – again, with access either licensed or supported vertically via ad sales. And completing the cycle, all this external creation can then be filtered back into the community a publisher manages.
Critically, a shift to community production provides not only these immediate production, marketing, and monetization benefits; it also addresses media's long-term structural problem. In the Internet age, media must enter a new, third phase of business model. Content is a funny creature in that it’s largely non-rivalrous and, through its history, has been non-excludable to varying degrees. Its monetization then has always been about finding natural points of control, sites where people have to come to a publisher if they want the services that company provides. In the broadcast era, distribution technology meant there was no way to ask this of consumers; to put out their product, a signal, publishers had to send it to everyone, so no one would need to come ask permission to use it. Who, though, would come ask for a publisher’s services? Advertisers – to whom publishers could sell their aggregations of audience. In the cable era, media companies found a way to establish a retail model. By running a pipe to people’s homes (or having their partners, the MSOs, do it), they found a way to control distribution, such that if people wanted access to services, they’d have to come ask and pay for the privilege, just as they do with every other retail product they buy in stores.
With the Internet though, the retail model breaks down, as distribution can no longer be controlled. What's needed then is to create some new service people will have to come ask, and pay, to use. A big part of the answer, I believe, is community, a service publishers can provide rather than simply an asset. This is critical in news, given the short shelf life of content, its easy replicability, and the wide range of individuals who'll provide it for free, all of which make it a bad candidate for withheld-access monetization strategies. Here, the MMOG analogy is particularly helpful – anyone can pirate a copy of World of Warcraft’s base software and have a local copy of the environment it creates. But what good is that? You’d have gained the ability only to walk around the game’s landscapes by yourself. To enjoy the product, you need to participate in the community Blizzard orchestrates, and to do that, you have to ask the company's permission to enter its servers.
A shakeout in the news industry was of course inevitable. As Shirky describes, now that everybody can speak to everybody, many of the services a publisher provides, which used to be protected by exclusive distribution capabilities, are now exposed to competition, or worse, rendered unnecessary entirely. No longer can publishers control revenue from businesses they don’t specialize or innovate in, like classifieds. No longer are they needed in every geography simply as distribution hubs (news originators can simply distribute directly). And no longer can they claim an exclusive channel to audiences for advertisers. On top of all of this, with delivery now unified across formats, professional outlets previously separated by medium (newspapers, magazines, broadcasters, etc.) all now compete against one another directly.
For sure, there’s still a place for those services only integrated news organizations can provide. There remains a market, if a smaller one, for the sort of lean-back consumption only traditional, ‘filter-then-publish’ credentialing can provide – the twin quality-of-service guarantees of coverage breadth and reliability. There will always be a place for differentiated components, in investigative reporting, analysis, production, aggregation, or anywhere else. When you can assemble a collectively strong enough bundle of these, you can even sell it to consumers directly… note that the Economist and the WSJ are both thriving today even behind paywalls, with readers and advertisers alike showing perfect willingness to pay for a product they believe provides differentiated value. And of course, I believe there will certainly be a place for the sort of news ‘orchestration’ described in this post – a difficult, intricate task that almost certainly requires the precision and coordination of an integrated entity.
But of equal certainty, the type, mix, and perhaps quantity of news organizations that exist tomorrow will look profoundly different from those today. Everyone in the news world, I think, basically gets that journalism is changing and that it’s headed in the direction of access and participation. But I don’t think even the people who are doing these things well – the HuffPos of the world, the Flipboards – truly understand the mechanics driving their success… the why. I think if we can do this, if we can recognize the core mechanics of production in a connected age, it could form the basis of a truly sustainable vision, a real strategy to help reinvent news.
Sunday, March 13, 2011
As some of you know, I’ve been struggling a bit of late with my elevator pitch for the thesis. While clearly it’s not quite down to thirty seconds, I’ve decided to take a real stab at a single formulation of the idea to date. I wrestled a lot with the simplest way to present the ideas, but have settled on basically the order I came to understand them myself. I’m hoping this approach will help make clear the path that’s led me to my current thinking and hopefully help folks to join me on the journey.
INTEGRATING DISTRIBUTED RESOURCES
In 1937, economist Ronald Coase wrote a seminal paper called The Nature of the Firm, discussing why firms act the way they do, and in particular, why they choose to supply their particular set of services. Coase pointed out that in the process of producing goods, firms choose to supply some services directly, investing in capital assets, employees, etc., and coordinating them appropriately. Other services they procure indirectly, simply licensing them from external firms. When firms produce services internally, they have to pay the overhead costs of management. The more services a firm supplies, the greater these costs. Why then, Coase asked, don’t firms simply get all their services via the market from external providers? He explained that the reason they don’t is that externally provided services come with costs too – transaction costs: the cost of finding appropriate partners, for instance, the cost of negotiating contracts with them, and the cost of integrating production systems together.
The key insight of the thesis (which, admittedly, some like Shirky had already discovered, although for the record, I came to independently…) is that new communication technologies have radically altered this equation. In the days of Ford, finding costs would have involved poor Henry trudging around to different parts of the country physically inspecting factories. Negotiations would have been uncertain and time-consuming, and physical production integration would have been difficult if not impossible. Today by contrast, as new communication technologies collapse the cost of collaboration (and IP, more easily shared than physical assets, comes to represent ever-greater portions of the value of products), these costs plummet. The Internet facilitates partner finding; clearinghouses and development platforms pre-set contracts; globalization and cloud infrastructures support turnkey integrations. As a result, the balance of the production mix for any given firm should reasonably be expected to shift toward greater reliance on external partners.
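Coase's make-or-buy tradeoff can be sketched as a toy model. All function names and numbers below are illustrative assumptions of mine, not anything from Coase or the thesis; the point is only that the decision is an inequality between two cost totals.

```python
def should_outsource(management_overhead, finding, negotiating, integrating):
    """Toy Coasean make-or-buy rule: procure a service externally only
    when total transaction costs fall below the overhead of supplying
    the service in-house."""
    transaction_cost = finding + negotiating + integrating
    return transaction_cost < management_overhead

# Ford-era: locating, vetting, and wiring up partners is expensive -> make in-house
print(should_outsource(10, finding=8, negotiating=5, integrating=12))  # False

# Internet-era: platforms and clearinghouses collapse each transaction cost -> buy
print(should_outsource(10, finding=1, negotiating=1, integrating=2))   # True
```

As the three transaction-cost terms shrink, the inequality flips, which is exactly the predicted shift in the production mix toward external partners.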
But I realized this pattern isn’t confined merely to organizational production. Having spent time in school studying so-called New Media Literacy (thank you ProfDaer!!), a movement that explores what the notion of ‘literacy’ should look like in the 21st century, I realized that exactly the same dynamic applies to individual cognition processes. The idea here is that, fifty years ago, if you wanted to know some fact, say, George Washington’s birthday, it made sense to memorize it… to make that upfront investment, to internalize the data inside your mind. Were you not to do so, accessing that fact later on would be a major pain. You’d have to hike to the local library, thumb through the card catalogue, find the relevant book, find the right page, skim through it, and only then would you finally have your desired data. Today by contrast, the benefits of memorizing are mitigated, as anytime you have access to the Internet, you can easily and instantaneously access information via Google. Or in other words, the transaction cost of producing information via external resources has collapsed, such that the method by which we generally perform cognition processes should reasonably be expected to shift toward on-demand access to external supplies.
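The memorize-versus-look-up decision follows the same arithmetic. Here's a minimal sketch with made-up costs (think of them as minutes of effort; the numbers are mine, purely for illustration):

```python
def cheaper_to_memorize(upfront_cost, lookup_cost, expected_accesses):
    """Toy model of internalizing a fact vs. fetching it on demand:
    memorize only when the one-time upfront investment beats the
    summed cost of all the external lookups you expect to make."""
    return upfront_cost < lookup_cost * expected_accesses

# Library era: each retrieval means a trip to the stacks -> memorize
print(cheaper_to_memorize(upfront_cost=5, lookup_cost=60, expected_accesses=3))   # True

# Google era: retrieval is near-instant -> access on demand instead
print(cheaper_to_memorize(upfront_cost=5, lookup_cost=0.1, expected_accesses=3))  # False
```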
Generalizing, the thesis suggests that fixed investments – in capital production assets, in permanent employees rather than contracted services, in internalized data, in making plans with friends in advance rather than coordinating via text once out – are only a way to solve the problem of difficult on-demand access. As transaction costs fall, these fixed structures become unnecessary. Moreover, they become inefficient… you’re stuck with some particular set of resources, optimal for only some particular set of things, rather than accessing the best, most-tailored solutions the ecosystem has to offer at exactly the times you need them. Worse, in some cases you may be forgoing not only the best alternative the network has to offer, but in fact the sum of alternatives, where collaborative production platforms would allow for the channeling of contributions from many sources. As lowered transaction costs grow the network (increasing the potential suppliers who could provide or contribute to a service), it becomes increasingly unlikely your internal capabilities can rival it. At the very least, the hurdle and subsequent exposure inherent in upfront investments discourage exploratory innovation and can steer companies toward higher-end solutions than the market needs.
RELEASING RESOURCES TO OPEN INNOVATION
So this is the first piece of the thesis: organizations and individuals shouldn’t try to do most things on their own – they should harness the power of collaboration by utilizing external resources. But there then emerges an equally important flip side to the argument, which is the idea that for the things organizations and individuals do do, they should harness the power of collaboration by releasing these resources to others and letting the network help build value on top of them.
The idea here is that, just as reduced transaction friction increases the opportunity cost of supplying services internally, so too does it increase the opportunity cost of channeling services only to your own outputs. Again, firm behavior provides perhaps the clearest example. Every enterprise faces a choice about how to monetize the services it renders. When a firm produces components of a good, each of these may be monetized vertically, by packaging it with other components and selling the bundle to consumers, or horizontally, by selling it to other suppliers who incorporate it into their own vertical stacks and then sell those to consumers. When transaction costs are high, a firm can really only do the former – the costs of finding interested partners and coordinating licensing with them would be too high. But today the situation is different. While there remains a valid assessment to make of the opportunity forgone in not exploiting a component exclusively – using its differentiation to drive consumption of a broader bundle of supplied services – the question again emerges: can your internal capabilities in producing that bundle, your ability to add value on top of the component, rival not only the best actor in the network, but the sum of all actors?
You can see the dynamic in every organization that offers products as ‘platforms’ rather than merely as finished goods or services. For instance, Google releasing its Android operating system to any device manufacturer or carrier who wants to build products with it (in contrast to Apple, which channels its software almost exclusively to its own products). Or OpenTable creating a platform for restaurants to take advantage of and build into their overall products (dining experiences). But perhaps the best example of this today would be Amazon. Amazon offers a complete, vertically integrated retail product to consumers. On Amazon.com, a digital storefront the company hosts and manages itself, you can purchase goods sitting in Amazon’s warehouses, through Amazon’s payment processing system, shipped to you directly from Amazon’s built-from-scratch fulfillment system. But critically, each of these services the company offers in its own production stack, it also releases openly for anyone to come along and build value on top of.
Imagine a small e-tailer that specializes in mountain climbing – let’s call it Gear N’ Stuff. Gear N’ Stuff could create a website with interactive maps of mountains in the Northeast, listing what shoes you’d want for each. The company could then link through to Amazon’s programming interfaces and let customers buy and receive shoes through Amazon’s systems, collecting a revenue share on each sale through the company's affiliate fee program. A mom-and-pop operation, Gear N’ Stuff couldn’t have managed its own warehouses and wouldn’t know anything about accepting digital forms of payment. What they do know, though, is mountain climbing, something Amazon wouldn’t have the time or expertise to do anything with. Because Amazon has released its capabilities to external innovation, this co-supplier (without having to ask specific permission or create any sort of formal agreement) can add value and drive incremental sales that Amazon couldn’t. Other retailers might use the Amazon.com storefront, handling payment and fulfillment themselves; companies outside retail altogether might license serving capacity from Amazon’s web services business and build value on top of it in areas Amazon would never enter (gaming, for instance). Amazon has already made the investments in all these platforms for its own product anyway; why not amortize the costs of those investments as widely as possible by letting others help return revenue using them?
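The amortization argument at the end is simple division: a fixed platform investment spread over more transactions (internal plus external) drives the per-transaction cost down. A sketch with purely hypothetical figures (these are not Amazon's actual numbers):

```python
def unit_cost(fixed_investment, internal_volume, external_volume=0):
    """Per-transaction cost of a platform when its fixed investment is
    spread over internal use alone vs. internal plus external use."""
    return fixed_investment / (internal_volume + external_volume)

FIXED = 1_000_000_000  # hypothetical fulfillment-infrastructure investment

closed = unit_cost(FIXED, internal_volume=100_000_000)
open_platform = unit_cost(FIXED, internal_volume=100_000_000,
                          external_volume=50_000_000)

print(closed)         # cost per transaction with the platform kept closed
print(open_platform)  # lower cost once co-suppliers like Gear N' Stuff add volume
```

Every external transaction a co-supplier drives lowers the effective cost of investments the platform owner had to make anyway.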
And of course, the dynamics extend beyond merely firms. When then-candidate Obama focused campaign resources in 2008 not on staffing call centers, but on building call-center tools unmanaged volunteers could utilize, he was tapping the power of network production (as in fact he did in opening his brand to innovation with remixable symbols like Hope and Change). Among non-profits, before Wikipedia you had Encarta, Britannica, World Book, etc… all producing and selling final products 95% the same and 5% differentiated. Wikipedia came along and said instead, we’ll build a platform for information assembly, and anyone can help build value on top of it. Everyone collaborates together. Critically, the dynamic applies equally to individuals. Much has been made of Mark Zuckerberg’s constant nudging of users toward more liberal privacy settings, but he’s right – sharing is the way of the future. When we release information and content – like, for instance, this blog post! – we make it available for others to add value on top of. And even if only a tiny percentage of the things we share ‘hit,’ finding that someone, somewhere in the world, at some point in time, to whom it’s valuable, that’s fine, as collapsed distribution costs mean the act of sharing has become essentially free. It’s the back and forth, the collaborative value production, that allows the things we do to truly take off. This is precisely why skills of production and sharing lie at the heart of new media literacy, and not merely the internalization of data, as was historically true.
A NEW WORLD OF COLLABORATION
And so in the end, you arrive at this radically new landscape. Horizontal supply layers emerge which, instead of trying to do everything on their own, release services openly for others to build value on top of. In so doing, they optimize resource investments, spreading them across as wide a base as possible, gaining the efficiency and expertise of scale and specialization. Prime systems integrators then come along and snap up all these layers, sparing themselves the burden and exposure of fixed investments by licensing services on demand, building some value independently but then combining it with all the resources the network has to offer via now-available horizontal platforms. We come to a world of meshed production, a complex ecosystem of mass collaboration.
There are tons of complexities I’m of course leaving out here. The mechanics of why distributed collaboration can now compete with internal production… a change in degree creating a change in kind, as the huge redundancy and aggregating power of digital networks comes to approximate the quality assurance of command-and-control structures. The counterpoint benefits of internal processes and the limits of network production. The specifics of this new resource allocation system, where investments are made then shared as platforms, where investments are made jointly, and where investments distributed within populations can be collected and harnessed. The massive displacement this all implies, but also the incredible gains to efficiency and sustainability. The implications for the media industry, for education, for government, politics, social relations, social action, sustainability, medicine, and other areas. That’s, I suppose, what happens with an ‘elevator pitch,’ but I’ll of course continue posting on these subjects in the hope they too may eventually become clear.
The other day, I ended up watching the movie Cast Away with my roommate. While I sat there, I couldn’t stop thinking that the thing that would have driven me crazy wouldn’t be the rain, or the heat, the constant physical duress, or even the daily dose of raw crab and coconut. It would be the isolation. And not simply in an emotional sense, but in an informational, a productive one. Any of my friends who’ve gone to dinner with me know how antsy I get in the three minutes between when someone asks a question we don’t know the answer to and I pull out my cell to Google it. Watching the movie, I couldn’t stop thinking about the agony I would feel being stuck on an island, wanting to know something, even something trivial, and realizing there was simply no way that information could ever be accessed. About the staggering inefficiency of having to reinvent absolutely everything for myself. About the knowledge that if I had some thought, some idea, produced some innovation – it would be lost to history, it could never be communicated to anyone, no one beside me would ever be able to do anything with it.
The problem with traditional production is that it’s like being on an island. Every process needs to be recreated and every output is thrown away, inaccessible to anyone beside the individual or organization that created it. I’m doing a research project at work right now around digital news and the different players in the space. It kills me to know that there must be a thousand other people who’ve done this exact exercise and, moreover, that when I finish, it’ll be locked away on our corporate servers somewhere, almost certainly never to be touched or built on again. (I remember feeling the same way in school, knowing that every paper I wrote must already have been written almost exactly by at least one of the millions of graduate students of the last fifteen years, if only I could access it.)
Why shouldn’t we open the project up and do it collaboratively, say via wiki? We could create a grid of all the players that would evolve constantly over time, such that no one would ever have to do it again from scratch and we wouldn’t just keep circling over the same ground. For any official use, each organization would of course have to do its own vetting, and the unique strategies launched off this information would certainly be kept proprietary. It would be a question merely of creating a shared resource layer for commodity information we have no differentiation in producing, and there would in fact be huge benefits for everyone, given the distributed stocks of knowledge involved, in producing it collaboratively. It would be akin to what certain companies in the pharmaceutical industry have done around diabetes research, or to the open genome project. Or even more basically, akin to shared infrastructures like highway and electricity systems, which we all support together and then all reap the non-rivalrous benefits of, shifting the locus of competition to layers higher up the stack where true differentiation can occur.
The media revolution taking place today is unquestionably as profound a change as any that came before, from the printing press, to mechanical recorded media, to the telephone, telegraph, and interactive communication, to broadcast mass media. More though, the Internet, the frictionless connection of everyone to everyone, is a revolution of societal organization. In the Neolithic revolution, man first emerged from Hanks’ Cast Away plight and realized that if we work together, if we form societies, we can achieve infinitely more than we can as individuals and isolated groups. In the Industrial Revolution, we realized that if we temporarily suspend the constraints of market interaction and form little communes (companies) where we invest upfront and all work together in an organized way, we can overcome the difficulties and uncertainty of markets and do some cool things we couldn't do previously.
Today though, we come to a third (perhaps final?) stage in this evolution, where the stop-gap measure that was industrial organization becomes, while never irrelevant, less and less necessary. We come to a world of ubiquitous connectivity where we collectively pursue humanity’s promise, constantly endeavoring onward and upward as we build on each other’s progress rather than simply recreating and then reburying it. Coordination becomes less important, but collaboration paramount, as everyone begins working together-apart to drive the staggering, unprecedented innovation and value creation of the coming century.
Wednesday, March 9, 2011
Thanks to helpful input from Matt Schlosser, just sent in the following suggestion to Facebook...
(PS go collaborative production / nascent group forming... posted the q a week or two ago, it sat there for a while, and then eventually it found its way to someone who was interested in it and could add value on top of it)
Allow users to share status updates with friends only in a certain geography
All the time I blast things like, 'anybody want to come to the movies tonight?' Clearly, this message is intended only for my friends who live in the same city as me, and it would be great to be able to spare the rest of my friends the (mostly) irrelevant spam.
More broadly, give users more control in targeting the content they publish (to geographies, interests, I'd even add a hash tag system). Key problem Facebook is trying to solve is discovery -- how do you match the incredible mass of content users publish with the often small groups of people to whom any given piece is interesting? Answer... the same way Facebook does everything: collaboratively, by letting users do it for them.
With regard to content broadcast, user 'privacy' settings aren't just about privacy... they're about targeting. Users are in the best position to know who will and will not be interested in the content they publish, and given they're sharing the content for some reason, they have every incentive to try to target it correctly. Facebook should tap that potential.
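The suggestion amounts to a user-driven audience filter: the author attaches a geography (and optionally tags) to a post, and delivery fans out only to friends whose profile matches. A minimal sketch of the idea (a hypothetical data model of my own, not Facebook's actual API):

```python
def audience(post, friends):
    """Fan a post out only to friends matching its author-chosen targets.
    A post with no targets goes to everyone (today's default broadcast)."""
    city = post.get("city")
    tags = set(post.get("tags", []))
    selected = []
    for friend in friends:
        if city and friend["city"] != city:
            continue  # geography filter: skip friends in other cities
        if tags and not tags & set(friend.get("interests", [])):
            continue  # interest filter: require at least one shared tag
        selected.append(friend["name"])
    return selected

friends = [
    {"name": "Ana", "city": "NYC", "interests": ["movies"]},
    {"name": "Ben", "city": "SF",  "interests": ["movies"]},
    {"name": "Cal", "city": "NYC", "interests": ["hiking"]},
]

print(audience({"text": "movies tonight?", "city": "NYC"}, friends))
# ['Ana', 'Cal'] -- the SF friend is spared the irrelevant spam
```

The publisher does the targeting work, which is exactly the collaborative, let-users-do-it-for-us pattern the post describes.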
Sunday, February 6, 2011
This was a post I wrote in early October about new forms of literacy, responding to Nicholas Carr's argument in his article "Is Google Making Us Stupid?" in the Atlantic. The post builds off some of the work I'd done for my undergrad thesis around new media literacy, although it came before the 'distributed production epiphany,' so I'll definitely look to come back and give this an updated treatment sometime soon.
In the meantime, the basic idea is that Carr suggests we are being inundated by data and it's causing us to lose the ability to think deeply about things. I respond by suggesting that what we're actually losing (or perhaps, choosing to abandon) is simply that traditional notion of 'deep thinking' that involves only acquiring and internalizing knowledge through passive textual decoding. Today, I suggest in the post and will elaborate on in future ones, literacy demands the ability not only to receive knowledge, but to actively pursue, evaluate, combine, and deploy it. So while I think Carr is right to note this change in the way we interact with information, I think he and all of us should be careful to think about the profound opportunities and necessities of new systems, and not simply disregard them out of hand because they're different.
In his article “Is Google Making Us Stupid?,” Nicholas Carr laments what he sees to be the harmful, homogenizing effect proliferating access to information is having on human mental faculties. Inundated by data, he suggests, we jump around from topic to topic and are gradually losing the ability for deep thought.
I strongly disagree with Carr.
Presumably, what Carr is concerned with is critical thinking – our ability to acquire and deploy knowledge to impactful ends, the foundation of both individual and societal progress. Critical thinking has two components, which I generally dub computation and experimentation. Computation is the simpler of the two... it involves executing a set of already-decided steps to determine some answer. It is by nature goal-directed. Experimentation is the process that determines which of these sets of steps should be executed. It is ‘random,’ or alternatively, creative in nature.
High-level thought processes involve recursive deployment of computation and experimentation. Imagine a scholar seeking to answer some fundamental, yet-to-be-understood question in her field. Likely, she would start by acquiring knowledge – reading up on topics around the question. She probably doesn't know exactly where she's going or exactly what knowledge she needs to obtain, but she's creating a base dataset with which to begin her inquiry. Gradually, she will likely develop some overarching hypothesis. To prove it, she will head down some particular path. She will start reading up on some sub-branch of the field, design some experiments to test the hypothesis, and execute them. If these don't bear out, she will find some other path and try it out, or perhaps she will disprove the hypothesis entirely, look for some new explanation, and then repeat the process until eventually she finds the answer.
At each point in this process where the scholar is selecting a new mode of inquiry – in choosing hypotheses, in choosing the paths by which to pursue them – the scholar is engaged in experimentation. She is somehow synthesizing the information she's internalized and using it to creatively generate an idea and a set of actions. At each point where she's executing these sets of actions, she's engaged in computation.
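To make the distinction concrete, here's a minimal sketch in Python. The names and the toy 'inquiry' are entirely my own illustration (not anything from Carr, and certainly not a claim about cognitive science): computation executes a predetermined test, while experimentation chooses, here at random, which test to run next.

```python
import random

def compute(hypothesis, data):
    # Computation: goal-directed execution of already-decided steps.
    return all(hypothesis(x) for x in data)

def experiment(candidate_hypotheses, data):
    # Experimentation: pick a hypothesis (here 'creatively', i.e. at random),
    # offload the mechanical testing to compute(), and repeat until one
    # survives or the candidates run out.
    untried = list(candidate_hypotheses)
    while untried:
        h = random.choice(untried)   # the 'random'/creative selection
        untried.remove(h)
        if compute(h, data):         # the computational step
            return h
    return None

# Toy inquiry: which rule explains the observations 2, 4, 8?
observations = [2, 4, 8]
is_even = lambda x: x % 2 == 0
is_large = lambda x: x > 5
found = experiment([is_even, is_large], observations)
```

The while loop is the scholar's repeat-until-answer cycle in miniature: whichever order the hypotheses are tried in, only the one that survives every computational test is kept.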
In all modes of critical thinking, we use media as aids to both experimentation and computation. When a person is brainstorming and taking notes on a whiteboard, she's using a physical encoding medium to augment the capabilities of her brain, freeing up processing power in her working memory by offloading the storage of certain data elements. The person may for instance be writing down each area she explores as she does so. This aids in her experimentation by making clear which areas she's already explored and which remain open for her to venture into.
With the invention of computers though, media can take on the even more important role of handling computational processes altogether. As described above, computation is the set of elements in critical thinking that involve execution of predetermined steps. These steps – not involving any sort of randomness or human creativity – can be offloaded to machines. When children learn math at early stages, they are first taught the process of computation. They memorize multiplication tables and do long division with pencil and paper. When it's time though for them to move on to more complex areas of mathematics such as algebra, calculus, and trigonometry, these more rudimentary computational processes get offloaded to a calculator.
This process of using external resources as part of mental processes is known as distributed cognition (examples thus far have included offloading only to media and computational devices, although distributed cognition can equally well involve offloading to other people). As with any complex information architecture, the key qualities of a distributed cognition setup are data storage, data access, and data processing. How much information can be stored? Where will it be stored and what does this mean for how easy it is to get to? What sort of manipulation on the data is done once retrieved?
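The three questions above – storage, access, and processing – can be sketched as a toy model. Everything here is my own illustrative naming (a hypothetical `DistributedMemory` class, not a real cognitive-science API): a small, fast 'working memory' backed by a large, slower external medium such as a book or whiteboard.

```python
class DistributedMemory:
    """Toy model of distributed cognition: limited internal storage
    backed by an effectively unlimited external medium."""

    def __init__(self, capacity):
        self.capacity = capacity   # how much fits in working memory
        self.working = {}          # fast, limited internal storage
        self.external = {}         # slow, effectively unlimited storage

    def store(self, key, value):
        # Offload everything to the external medium; keep internally
        # only what fits.
        self.external[key] = value
        if len(self.working) < self.capacity:
            self.working[key] = value

    def retrieve(self, key):
        # Access: cheap if internalized, otherwise fetched externally.
        if key in self.working:
            return self.working[key]            # fast path
        value = self.external[key]              # slow path: look it up
        if len(self.working) >= self.capacity:
            # Evict the oldest internalized item to make room.
            self.working.pop(next(iter(self.working)))
        self.working[key] = value
        return value

mem = DistributedMemory(capacity=2)
for i, fact in enumerate(["fact-a", "fact-b", "fact-c"]):
    mem.store(i, fact)
```

The design choice mirrors the essay's point: nothing is lost by not internalizing everything, because the external store holds it all; the cost is only the slower access on the first retrieval.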
The driving motivation for things like Carr’s ‘deep reading’ or a scholar spending years researching in an area to become expert in it is internalization of data for fast access and abstract processing. A literature scholar writing on a novel needn’t memorize its every word to efficiently analyze it. When she’s engaged in critical thinking on a particular paragraph, certainly she may read that paragraph deeply and try to bring much of it into her working memory cache where she can more easily manipulate its components toward creative and analytic ends. Generally though, the text is encoded in a permanent physical medium (it’s written down), and so when she turns to analyze the next paragraph, there’s no need for her to expend limited mental resources holding the exact data of that previous paragraph in memory.
Computers today are great at storage and computation – they can hold hundreds of gigabytes worth of data with perfect fidelity indefinitely and can execute trillions of serial processing tasks every second. The problem though is that they're not good at creative thinking, which means we need to do it, and creative thinking requires fast access to data and computation – you need to be able to dart around to different ideas, know quickly whether a particular path is even worth exploring. Historically, the time it would take to access data stored externally or launch an external computation process meant that the only way to achieve this sort of creative analysis was to internalize everything.
As access to information and computational processes proliferates, however, and becomes infinitely more efficient, this changes. More and more of the lower levels of critical thinking can be offloaded to external, distributed sources – I can check the relevant wiki page when I need some fact about Georgia in the 1850s, I can see what the first Google link is when I type in "current social pressures in Botswana" – leaving us free to roam in the more important, more abstract and creative elements of critical thinking. Not only does information proliferation make this sort of offloading possible, it makes it ever more critical. For as information proliferates, it becomes impossible to internalize it all, and the value of any particular piece of information declines proportionately.
Distributed critical thinking is of course complex, and teaching it is one of the main goals of so-called "New Media Literacy" education programs. Relevant to Carr's argument, new media literacy involves learning not only how to deploy distributed resources, but also when. Indeed, it will often still make sense to 'read something deeply' so as to internalize it. And individuals must practice and retain this skill. But the skill that is equally and ever more important is the ability to determine which content to process and how to do so. To learn how to jump around from link to link, finding relevant tidbits that help inform the main argument and add additional components besides it. How to skip paragraphs entirely when they describe information you already possess internally. How to execute any number of information-consumption variants besides simply sitting down and reading a thing from point A to point B.
Certainly, not every act of digression is an enlightened demonstration of distributed cognition. Some people may well have become a bit ADD at times – reading a lot and internalizing little, high or low level. Each citizen of this century though owes it to herself to learn the skill of information negotiation, and we as a society owe it to one another to aid in this process. While Carr is correct to call out what's clearly an evolution of literacy, and a pitfall that ranks among its biggest traps, we must be sure to view it as such -- a call for smart, controlled change, and not merely a blind or nostalgic pining for a literacy of the past.