Sunday, July 24, 2011

Google+

Hey guys. Not back in action or anything, although a bunch of people have been asking my thoughts on Google+, so figured I'd throw out a comment I just added to Tom Anderson's article discussing the program's sharing model. I'll do a real treatment as soon as I organize my ideas, but figured I'd post at least a couple thoughts for now...

Agreed. The biggest challenge for Facebook or any other social network is improving the signal-to-noise ratio -- finding the 12 people for whom the post about Billy's birthday is tremendously meaningful (Billy's Gramma for instance) and keeping it away from everyone else.

Google's right that the people most qualified to help in this effort are users. It is posters themselves who know the most about the content being shared and which people will be interested in it. Moreover, posters have every incentive to try to get their content to the right people... they're sharing it for a reason, to be seen, and spamming damages their reputation. Hayek in purest form.

Where I believe Google goes wrong, though, is in asking users to predefine categories. The point of connecting tiny latent groups (the set of people who may care about Billy's birthday, the set of people who may want to come out to the movies with me on some particular Thursday night, etc) is flexibility. Having to rigidly structure my communication channels ex ante defeats the purpose. How many posts will really fit neatly into the set of circles you establish? How often will you have to go back and modify your circles, and how cumbersome will this process be (even if you can add old circles to new ones, are you going to have to manually comb through friend sets to get them right)?

Twitter's boon, an external design gift from its early user base, is hashtags. Hashtags allow users to identify what a post is about, and then go even further by allowing recipients to participate in the process as well, identifying themselves as the target audience through keyword search. Certainly this process can be meaningfully augmented with preset acquisition or sharing channels (such as followers or circles), with algorithmic suggestion (critical for discovery and helping us find the things we don't yet *know* we'd be interested in), and with demographic or behavioral levers ('share this only with my friends living in NY', 'share this only with people who click on more than x% of video links').
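(As a rough illustration only: here's a toy sketch, with a made-up data model rather than any real platform's API, of what this kind of user-driven targeting could look like, where author-supplied tags do most of the work and demographic or behavioral filters narrow from there.)

```python
# Toy sketch (hypothetical data model, not any real platform's API): matching a post
# to its audience via author-supplied tags plus simple demographic/behavioral levers,
# rather than fixed, predefined circles.

posts = [
    {"id": 1, "text": "Happy birthday Billy! #billysbirthday", "tags": {"billysbirthday"}},
    {"id": 2, "text": "Movies this Thursday? #nyc #movies", "tags": {"nyc", "movies"}},
]

recipients = [
    {"name": "Gramma", "interests": {"billysbirthday"}, "city": "Boston", "video_click_rate": 0.1},
    {"name": "Alex",   "interests": {"movies"},         "city": "NY",     "video_click_rate": 0.6},
]

def audience(post, people, city=None, min_video_click_rate=0.0):
    """Recipients self-select through interest keywords; the poster can add demographic
    or behavioral filters ('only friends in NY', 'only heavy video clickers')."""
    matched = []
    for person in people:
        if not post["tags"] & person["interests"]:
            continue  # keyword matching does the bulk of the work
        if city and person["city"] != city:
            continue  # demographic lever
        if person["video_click_rate"] < min_video_click_rate:
            continue  # behavioral lever
        matched.append(person["name"])
    return matched

print(audience(posts[0], recipients))             # ['Gramma']
print(audience(posts[1], recipients, city="NY"))  # ['Alex']
```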

But an approach based on rigid preset structures is, I believe, clunky and inadequate. Moreover, for the company that has championed not only tag-cloud organization but the very essence of pairing content with those who might be interested in it at any given time on an ad hoc basis (ie search), it's surprisingly ungoogly.

Saturday, March 19, 2011

NYTimes meets World of Warcraft… the future of news?

As I mentioned in my previous post, I've been doing some work of late around the future of news, which fortunately has helped me figure out some of the thesis' implications for the space. For anyone not familiar, the thesis (in 30 seconds) is that as new communication tools make collaboration easier, efforts to produce services internally become increasingly inefficient. The reason people invest in fixed assets, be they capital production assets, full-time employees, information an individual chooses to memorize rather than Google at the time it's needed, or plans a person makes with friends in advance rather than coordinating ad hoc via txt once out, is that the transaction costs of accessing the relevant resources on demand are too high. As new communication and service infrastructures radically reduce the cost of this on-demand access, though, the benefits of fixed upfront investments dissipate. You're stuck with some particular set of resources, optimal for only some particular set of things, rather than accessing the best, most-tailored solutions the ecosystem has to offer, at exactly the times you need them.

The application to news follows directly. Say you have 500 people in your newsroom. These 500 people are able to procure, analyze, package, and distribute some set of information with some set of competitive advantage. Before, in the broadcast era, this was fine, as the cost of distribution meant only a small group of organizations could rally the resources to do it. There’d be slight variation in the quality and type of products offered (accounting for differences in viewership), but players couldn’t specialize, as the smaller audiences this would cater to couldn’t support the huge cost of the necessary delivery infrastructure. Today by contrast, an independent distribution layer has emerged, the Internet, and anyone can access it. Production and marketing costs falling in parallel, major news publishers now compete not with a few other firms, but with an entirely new mode of production: with everybody... with network production.

On the web, things switch from a 'filter-then-publish' mode to an opposite, 'publish-then-filter' one. Instead of a couple companies trying to assemble full packages of finished content, everybody just throws everything out there. Most of it's of course crap. But the best can be quite great, even better than supposedly premium content. And more importantly, individual pieces often tend to be better than that premium content, at least for some particular person, at some particular time. When you want it, tech news from the tech guys (or directly from the relevant company). When you want it, fashion news from the fashion guys (or from the model herself). Live video of Sri Lanka during the tsunami from someone who's actually there. Policy analysis from the retired DoD wonk who happens to have worked on the legislation for forty years. And the collection of all of these tailored to your preferences by algorithmic behavioral and social aggregators.

I believe there’s a solution though. The tagline is NYTimes meets World of Warcraft. It’s a solution in fact surprisingly similar to what I think needs to take place on the entertainment side. The idea is that if publishers want to compete with networks of production, they can’t do it alone. They need somehow to produce tons more of each of the services they provide (information discovery, analysis, presentation, and distribution) to have options available that will line up with each individual’s preferences, but somehow not add to cost. How do they do this? They get others to do it for them! They leverage their brands, their loudspeakers, and those unique, highest-quality services only they can provide to create a community of collaborative production, augmenting the value they create with value created by others, including both users and fellow suppliers. They become a network of production themselves. A closest approximation today I’d think would be something like what HuffPo drives toward. Or something -- differently monetized, and in the news rather than entertainment space -- but like this example of Jimmy Fallon crowdsourcing development of show material via Twitter : http://www.youtube.com/watch?v=nouZPPtDibE

By shifting from a reporting outlet to a reporting community, a publisher is able to tap the vast resources embedded in its user base. With something like CNN's iReporter UGC platform, the company creates a coverage mesh far more extensive than it could achieve on its own – when a disaster strikes anywhere in the world, it now effectively has a team on the ground already. Community contributions aren't limited though to bottom-end UGC. HuffPo taps its broad network of domain experts to produce professional, in-depth analysis for free. TechCrunch uses its community to help with investigation, relying on its passionate user base of industry insiders to contribute leads and even help flesh out stories. In its form of so-called process journalism, the blog releases stories early, is transparent about uncertainty, and then lets readers help source and draw out details. In so doing, it may not produce truly differentiated scoops, but it finds a way to compete with the rapid 'shoot first, ask questions later' (publish-then-filter) cycle of independent publishers and network production generally.

Other benefits come with the community approach as well. In the case of Techcrunch and other niche news communities (Style, Politico…), audiences are more targeted and therefore can be marketed to more efficiently – billboards in Silicon Valley, conference sponsorship, etc. And with any community production effort, publishers capture the benefit of viral social marketing. Perhaps more importantly though, the community approach opens up new avenues for revenue generation. Advertisers will pay premiums to reach audiences they believe are deeply engaged – through simple display media, and even more through the unique forms of deep brand engagement user participation makes possible (‘best news photo submission contest, sponsored by Nikon’). Creating participation meanwhile lets publishers move beyond advertising alone, allowing them to segment and more fully monetize their audience with offerings to more passionate community members like paid content, virtual goods, merchandise, and conferences.


Graphic adapted from this excellent post by Kim-Mai Cutler.

Such distributed production should also of course involve collaboration beyond a community's walls. Inbound, this means pulling in the best content the web has to offer, a strategy HuffPo and other aggregators have successfully demonstrated. Outbound, it means opening your differentiated assets and tools to external value creation (thesis part II). Embeddable players are a perfect example here – share and monetize the video, let other people build and monetize value around it. But news organizations can go further, releasing content in more remixable formats, perhaps via API. For differentiated material (including archive content), publishers can license access or pair content with robust ad network services. They could build new tools, and release in-house ones, to support external creation – again, with access either licensed or supported vertically via ad sales. And completing the cycle, all this external creation can then be filtered back into the community a publisher manages.

Critically, a shift to community production provides not only these immediate production, marketing, and monetization benefits, it also addresses media's long-term structural problem. In the Internet age, media must enter a new, third phase of business model. Content is a funny creature in that it's largely non-rivalrous and, through its history, non-excludable to varying degrees. Its monetization then has always been about finding natural points of control, sites where people have to come to a publisher if they want the services that company provides. In the broadcast era, distribution technology meant there was no way to ask this of consumers; to put out their product, a broadcast signal, publishers had to send it to everyone, so no one would come ask permission to use it. Who then would come ask for a publisher's services? Advertisers – to whom publishers could sell their aggregations of audience. In the cable era, media companies found a way to establish a retail model. By running a pipe to people's homes (or having their partners, the MSOs, do it), they found a way to control distribution, such that if people wanted access to services, they'd have to come ask and pay for the privilege, just as they do with every other retail product they buy in stores.

With the Internet though, the retail model breaks down, as distribution can no longer be controlled. What's needed then is to create some new service people will have to come ask for, and pay for, to use. A big part of the answer I believe is community, a service publishers can provide rather than simply an asset. This is critical in news, given the short shelf life of content, its easy replicability, and the wide range of individuals who'll provide it for free, all of which make it a bad candidate for withheld-access monetization strategies. Here, the MMOG analogy is particularly helpful – anyone can pirate a copy of World of Warcraft's base software and have a local copy of the environment it creates. But what good is that? You'd have gained the ability only to walk around the game's landscapes by yourself. To enjoy the product, you need to participate in the community Blizzard orchestrates, and to do that, you have to ask the company's permission to enter its servers.

A shakeout in the news industry was of course inevitable. As Shirky describes, now that everybody can speak to everybody, many of the services a publisher provides, which used to be protected by exclusive distribution capabilities, are now exposed to competition, or worse, rendered unnecessary entirely. No longer can publishers control revenue from businesses they don't specialize or innovate in, like classifieds. No longer are they needed in every geography simply as distribution hubs (news originators can simply distribute directly). And no longer can they claim an exclusive channel to audiences for advertisers. On top of all of this, with delivery now unified across formats, professional outlets previously separated by medium (newspapers, magazines, broadcasters, etc) all now compete against one another directly.

For sure, there's still a place for those services only integrated news organizations can provide. There remains a market, if a smaller one, for the sort of lean-back consumption only traditional, 'filter then publish' credentialing can provide – the twin quality-of-service guarantees of coverage breadth and reliability. There will always be a place for differentiated components, in investigative reporting, analysis, production, aggregation, or anywhere else. When you can assemble a collectively strong enough bundle of these, you can even sell it to consumers directly… the Economist and WSJ, note, are both thriving today even behind paywalls, with readers and advertisers alike showing perfect willingness to pay for a product they believe provides differentiated value. And of course, I believe there will certainly be a place for the sort of news 'orchestration' described in this post – a difficult, intricate task that almost certainly requires the precision and coordination of an integrated entity.

But of equal certainty, the type, mix, and perhaps quantity of news organizations that exist tomorrow will look profoundly different from those of today. Everyone in the news world I think basically gets that journalism is changing and that it's headed in the direction of access and participation. But I don't think even the people who are doing these things well – the HuffPos of the world, the Flipboards – truly understand the mechanics driving their success… the why. I think if we can do this, if we can recognize the core mechanics of production in a connected age, it could make for the basis of a truly sustainable vision, a real strategy to help reinvent news.

Sunday, March 13, 2011

The Future of Everything

As some of you know, I’ve been struggling a bit of late with my elevator pitch for the thesis. While clearly it’s not quite down to thirty seconds, I’ve decided to take a real stab at a single formulation of the idea to date. I wrestled a lot with the simplest way to present the ideas, but have settled on basically the order I came to understand them myself. I’m hoping this approach will help make clear the path that’s led me to my current thinking and hopefully help folks to join me on the journey.


INTEGRATING DISTRIBUTED RESOURCES

In 1937, economist Ronald Coase wrote a seminal paper called The Nature of the Firm, discussing why firms act the way they do, and in particular, why they choose to supply their particular set of services. Coase pointed out that in the process of producing goods, firms choose to supply some services directly, investing in capital assets, employees, etc and coordinating them appropriately. Other services they procure indirectly, simply licensing them from external firms. When firms produce services internally, they have to pay the overhead costs of management. The more services a firm supplies, the greater these costs. Why then, Coase asked, don't firms simply get all their services via the market from external providers? He explained that the reason they don't is that externally provided services come with costs too – transaction costs: the cost of finding appropriate partners, for instance, the cost of negotiating contracts with them, and the cost of integrating production systems together.
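(For concreteness, here's a toy sketch of the comparison Coase describes, with entirely made-up numbers: supply a service internally when the overhead of managing it is lower than the transaction costs of procuring it from the market.)

```python
# Toy illustration of Coase's make-or-buy comparison (stylized, made-up numbers).

def cheaper_to_make(management_overhead, finding_cost, negotiation_cost, integration_cost):
    """Return True if supplying the service internally beats procuring it externally."""
    transaction_cost = finding_cost + negotiation_cost + integration_cost
    return management_overhead < transaction_cost

# 1937: partners hard to find, contracts slow, integration physical -> make it yourself.
print(cheaper_to_make(10, finding_cost=8, negotiation_cost=6, integration_cost=9))  # True

# Today: search, clearinghouses, and cloud infrastructure collapse those costs -> buy it.
print(cheaper_to_make(10, finding_cost=1, negotiation_cost=1, integration_cost=1))  # False
```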


The key insight of the thesis (which, admittedly, some like Shirky had already discovered, although for the record, I came to it independently…) is that new communication technologies have radically altered this equation. In the days of Ford, finding costs would have involved poor Henry trudging around to different parts of the country physically inspecting factories. Negotiations would be uncertain and time consuming, and physical production integration would have been difficult if not impossible. Today by contrast, as new communication technologies collapse the cost of collaboration (and IP, more easily shared than physical assets, comes to represent ever-greater portions of the value of products), these costs plummet. The Internet facilitates partner finding; clearinghouses and development platforms pre-set contracts; globalization and cloud infrastructures support turnkey integrations. As a result, the balance of the production mix for any given firm should reasonably be expected to shift toward greater reliance on external partners.


But I realized this pattern isn't confined merely to organizational production. Having spent time in school studying so-called New Media Literacy (thank you ProfDaer!!), a movement that explores what the notion of 'literacy' should look like in the 21st century, I realized exactly the same dynamic applies to individual cognitive processes. The idea here is, fifty years ago, if you wanted to know some fact, say, George Washington's birthday, it made sense to memorize it… to make that upfront investment, to internalize the data inside your mind. Were you not to do so, accessing that fact later on would be a major pain. You'd have to hike to the local library, thumb through the card catalogue, find the relevant book, find the right page, skim through it, and only then would you finally have your desired data. Today by contrast, the benefits of memorizing are mitigated, as anytime you have access to the Internet, you can easily and instantaneously access information via Google. Or in other words, the transaction cost of producing information via external resources has collapsed, such that the method by which we generally perform cognitive processes should reasonably be expected to shift toward on-demand access to external supplies.
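(The same tradeoff, sketched as code purely by way of analogy, with hypothetical names and data: 'memorizing' as a fixed upfront cache, built in advance and limited to what you anticipated, versus cheap on-demand lookup.)

```python
# Toy analogy (hypothetical names/data): upfront internalization vs on-demand access.

FACTS = {"george_washington_birthday": "February 22, 1732"}  # stands in for the web

def lookup(key):
    """On-demand access: cheap today, so no upfront internalization is required."""
    return FACTS[key]

class Memorizer:
    """The old strategy: invest up front in whatever you expect to need later."""
    def __init__(self, keys):
        self.memory = {k: FACTS[k] for k in keys}  # fixed investment made in advance

    def recall(self, key):
        return self.memory[key]  # fails for anything you didn't anticipate needing

print(lookup("george_washington_birthday"))
print(Memorizer(["george_washington_birthday"]).recall("george_washington_birthday"))
```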


Generalizing, the thesis suggests that fixed investments – in capital production assets, in permanent employees rather than contracted services, in internalized data, in making plans with friends in advance rather than coordinating via txt once out – are only a way to solve the problem of difficult on-demand access. As transaction costs fall, these fixed structures become unnecessary. Moreover, they become inefficient… you're stuck with some particular set of resources, optimal for only some particular set of things, rather than accessing the best, most-tailored solutions the ecosystem has to offer at exactly the times you need them. Worse in some cases, you may be forgoing not only the best alternative the network has to offer, but in fact the sum of alternatives, where collaborative production platforms would allow for the channeling of contributions from many sources. As lowered transaction costs grow the network (increasing the potential suppliers who could provide or contribute to a service), it becomes increasingly unlikely your internal capabilities can rival it. At the very least, the hurdle and subsequent exposure inherent in upfront investments discourage exploratory innovation and can steer companies toward higher-end solutions than the market needs.


RELEASING RESOURCES TO OPEN INNOVATION

So this is the first piece of the thesis: organizations and individuals shouldn’t try to do most things on their own – they should harness the power of collaboration by utilizing external resources. But there then emerges an equally important flip side to the argument, which is the idea that for the things organizations and individuals do do, they should harness the power of collaboration by releasing these resources to others and letting the network help build value on top of them.


The idea here is that, just as reduced transaction friction increases the opportunity cost of supplying services internally, so too does it increase the opportunity cost of channeling services only to your own outputs. Again, firm behavior provides perhaps the clearest example. Every enterprise faces a choice about how to monetize the services it renders. When a firm produces components of a good, each of these may be monetized vertically, by packaging it with other components and selling the bundle to consumers, or horizontally, by selling it to other suppliers who incorporate it into their own vertical stacks and then sell those to consumers. When transaction costs are high, a firm can only really do the former – the costs of finding interested partners and coordinating licensing with them would be too high. But today the situation is different. While there remains a valid assessment to be made of the opportunity forgone in not exploiting a component exclusively – using its differentiation to drive consumption of a broader bundle of supplied services – the question emerges again: can your internal capabilities in producing that bundle, your ability to add value on top of the component, rival not only the best actor in the network, but the sum of all actors?


You can see the dynamic in every organization that offers products as 'platforms' rather than merely as finished goods or services. For instance, Google releasing its Android operating system to any device manufacturer or carrier who wants to build products with it (in contrast to Apple, who channels its software almost exclusively to its own products). Or OpenTable creating a platform for restaurants to take advantage of and build into their overall products (dining experiences). But perhaps the best example of this today would be Amazon. Amazon offers a complete, vertically integrated retail product to consumers. On Amazon.com, a digital storefront the company hosts and manages itself, you can purchase goods sitting in Amazon's warehouses, through Amazon's payment processing system, shipped to you directly from Amazon's built-from-scratch fulfillment system. But critically, each of these services the company offers in its own production stack, it also releases openly for anyone to come along and build value on top of.


Imagine a small e-tailer who specializes in mountain climbing – let's call it Gear N' Stuff. Gear N' Stuff could create a website with interactive maps of mountains in the Northeast, listing what shoes you'd want for each. The company could then link through to Amazon's programming interfaces and let customers buy and receive shoes through Amazon's systems, collecting a revenue share on each sale through the company's affiliate fee program. A mom and pop operation, Gear N' Stuff couldn't have managed its own warehouses and wouldn't know anything about accepting digital forms of payment. What they do know though is mountain climbing, something Amazon wouldn't have the time or expertise to do anything with. Because Amazon has released its capabilities to external innovation, this co-supplier (without having to ask specific permission or create any sort of formal agreement) can add value and drive incremental sales that Amazon couldn't. Other retailers might use the Amazon.com storefront, handling payment and fulfillment themselves; companies outside retail altogether might license serving capacity from Amazon's web services business and build value on top of it in areas Amazon would never enter (gaming for instance). Amazon has already made the investments in all these platforms anyway for its own product; why not amortize the costs of these investments as widely as possible by letting others help return revenue using them?
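(Purely a hypothetical sketch, with made-up endpoints and identifiers rather than Amazon's actual interfaces: the shape of the integration is that the niche site contributes the domain expertise, the platform supplies search, payment, and fulfillment, and each sale flows back as an affiliate commission.)

```python
# Hypothetical sketch only: a stand-in "affiliate platform," not Amazon's real API.
import urllib.parse

AFFILIATE_TAG = "gear-n-stuff-20"                       # made-up affiliate identifier
STORE_BASE_URL = "https://example-store.invalid/item/"  # placeholder, not a real endpoint

MOUNTAIN_GUIDE = {
    "Mount Washington": "boot-12345",  # rigid, crampon-compatible boot
    "Mount Marcy": "boot-67890",       # waterproof mid-weight hiker
}

def recommended_item(mountain):
    """Gear N' Stuff's differentiated value: domain knowledge about mountains."""
    return MOUNTAIN_GUIDE[mountain]

def affiliate_link(item_id):
    """The platform's released value: a purchase flow the small e-tailer never has to build."""
    query = urllib.parse.urlencode({"tag": AFFILIATE_TAG})
    return f"{STORE_BASE_URL}{item_id}?{query}"

print(affiliate_link(recommended_item("Mount Washington")))
```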


And of course, the dynamics extend beyond merely firms. When then-candidate Obama focused campaign resources in 2008 not on staffing call centers, but on building call-center tools unmanaged volunteers could utilize, he was tapping the power of network production (as in fact he did in opening his brand to innovation with remixable symbols like Hope and Change). As for non-profits: before Wikipedia, you had Encarta, Britannica, World Book, etc… all producing and selling final products 95% the same and 5% differentiated. Wikipedia came along and said instead, we'll build a platform for information assembly, and anyone can help build value on top of it. Everyone collaborates together. Critically, the dynamic applies equally to individuals. Much has been made of Mark Zuckerberg's constant nudging of users toward more liberal privacy settings, but he's right – sharing is the way of the future. When we release information and content – like for instance this blog post! – it becomes available to others to add value on top of. And even if only a tiny percentage of the things we share 'hit,' finding that someone, somewhere in the world, at some point in time to whom it's valuable, that's fine, as collapsed distribution costs mean the act of sharing has become literally free. It's the back and forth, the collaborative value production, that allows the things we do to truly take off, which is precisely why skills of production and sharing, and not merely the internalization of data as was historically true, lie at the heart of new media literacy.


A NEW WORLD OF COLLABORATION

And so in the end, you arrive at this radically new landscape. Horizontal supply layers emerge which, instead of trying to do everything on their own, release services openly for others to build value on top of. In so doing, they optimize resource investments, spreading them across as wide a base as possible, gaining the efficiency and expertise of scale and specialization. Prime systems integrators then come along and scoop up all these layers, saving the burden and exposure of fixed investments by licensing services on demand, building some value independently but then combining it with all the resources the network has to offer via now-available horizontal platforms. We come to a world of meshed production, a complex ecosystem of mass collaboration.


There are tons of complexities I’m of course leaving off here. The mechanics of why distributed collaboration can now compete with internal production… a change in degree creating a change in kind, as the huge redundancy and aggregating power of digital networks comes to approximate the quality assurance of command-and-control structures. The counterpoint benefits of internal processes and the limits of network production. Specifics of this new resource allocation system where investments are made then shared as platforms, where investments are made jointly, and where investments distributed within populations can be collected and harnessed. The massive displacement this all implies, but also the incredible gains to efficiency and sustainability. The implications for the media industry, for education, for government, politics, social relations, social action, sustainability, medicine, and other areas. It’s I suppose what happens in an ‘elevator pitch,’ but I’ll of course continue posting on these subjects in the hope they too may eventually become clear.


The other day, I ended up watching the movie Castaway with my roommate. While I sat there, I couldn't stop thinking that the thing that would have driven me crazy wouldn't be the rain, or the heat, the constant physical duress, or even the daily dose of raw crab and coconut. It would be the isolation. And not simply in an emotional sense, but in an informational, a productive one. Any of my friends who've gone to dinner with me know how antsy I get in the three minutes between when someone asks a question we don't know the answer to and I pull out my cell to Google it. Watching the movie, I couldn't stop thinking about the agony I would feel being stuck on an island, wanting to know something, even something trivial, and realizing there was simply no way that information could ever be accessed. And about the staggering inefficiency of having to reinvent absolutely everything for myself. The knowledge that if I had some thought, some idea, produced some innovation – it would be lost to history, it could never be communicated to anyone, no one beside me would ever be able to do anything with it.


The problem with traditional production is it's like being on an island. Every process needs to be recreated and every output is thrown away, inaccessible to anyone beside the individual or organization that created it. I'm doing a research project at work right now around digital news and the different players in the space. It kills me to know that there must be a thousand other people who've done this exact exercise and moreover, when I finish, it'll be locked away on our corporate servers somewhere, almost certainly never to be touched or built on again. (I remember feeling the same way in school, knowing that for every paper I wrote, at least one of the millions of graduate students from the last fifteen years must have written exactly the same thing, if only I could access it.)


Why shouldn't we open the project up, do it collaboratively say via wiki? We could create a grid of all the players that would evolve constantly over time, such that no one would ever have to do it again from scratch and we wouldn't just keep circling back over the same ground. For any official use, each organization would of course have to do its own vetting, and the unique strategies that would be launched off this information would certainly be kept proprietary. It would be a question merely of creating a shared resource layer for commodity information we have no differentiation in producing, and there would in fact be huge benefits for everyone, given the distributed stocks of knowledge involved, in producing it collaboratively. It would be akin to what certain companies in the pharmaceutical industry have done around diabetes research and the open genome project. Or even more basically, akin to shared infrastructures like highway and electricity systems we all support together and then all reap the non-rivalrous benefits of, shifting the locus of competition to layers higher up the stack where true differentiation can occur.


The media revolution taking place today is unquestionably as profound a change as any previously, from the printing press, to mechanical recorded media, to the telephone, telegraph, and interactive communication, to broadcast mass media. More though, the Internet, the frictionless connection of everyone to everyone, is a revolution of societal organization. In the Neolithic revolution, man first emerged from Hanks’ Castaway plight and realized that if we work together, if we form societies, we can achieve infinitely more than we can as individuals and individual groups. In the Industrial Revolution, we realized that if we temporarily suspend the constraints of market interaction and form little communes (companies) where we invest upfront and all work together in an organized way, we can overcome the difficulties and uncertainty of markets and do some cool things we couldn't do previously.


Today though, we come to a third (perhaps final?) stage in this evolution, where the stop-gap measure that was industrial organization becomes, while never irrelevant, less and less necessary. We come to a world of ubiquitous connectivity where we collectively pursue humanity’s promise, constantly endeavoring onward and upward as we build on each other’s progress rather than simply recreating and then reburying it. Coordination becomes less important, but collaboration paramount, as everyone begins working together-apart to drive the staggering, unprecedented innovation and value creation of the coming century.

Wednesday, March 9, 2011

Suggestion to Facebook...

Thanks to helpful input from Matt Schlosser, just sent in the following suggestion to Facebook...

(PS go collaborative production / nascent group forming... posted the q a week or two ago, it sat there for a while, and then eventually it found its way to someone who was interested in it and could add value on top of it)


"Suggestion:

Allow users to share status updates with friends only in a certain geography


Additional Details:

All the time I blast things like, 'anybody want to come to the movies tonight?' Clearly, this message is intended only for my friends who live in the same city as me, and it would be great to be able to spare the rest of my friends the (mostly) irrelevant spam.


More broadly, give users more control in targeting the content they publish (to geographies, interests, I'd even add a hash tag system). Key problem Facebook is trying to solve is discovery -- how do you match the incredible mass of content users publish with the often small groups of people to whom any given piece is interesting? Answer... the same way Facebook does everything: collaboratively, by letting users do it for them.


With regard to content broadcast, user 'privacy' settings aren't just about privacy... they're about targeting. Users are in the best position to know who will and will not be interested in the content they publish, and given they're sharing the content for some reason, they have every incentive to try targeting it correctly. Facebook should tap that potential."

Sunday, March 6, 2011

To the Limitations of Network Production and the Convergence of Passions

This week, my friend Mitch sent me a copy of the excellent paper, Working Wikily, by authors at the Social Innovation group at the Stanford Business School. I highly recommend it to anyone interested in issues of collaborative production or social action generally. As I said to him, these guys definitely get it. They do a decent job selling the potential of network production, but the real thing I'd say they do, which a lot of folks who get excited about this stuff forget to, is describe the limitations. Network production is incredibly powerful in tons and tons of situations -- the majority of the time I'd say exponentially more so than command-and-control structures. But almost always, network production needs to be directed by (or at least come in conjunction with) more traditional forms of organization.

The paper reminded me of a long, interesting discussion I had with another friend earlier in the week about a dinner party I hosted the week before. I'd spent a good bit of effort at the time trying to decide whether we should do it potluck style and have everybody bring something (ie network production), or whether I should just collect $15 from everybody and do it myself. It ended up being a bit of a hybrid, and basically worked out, but I realized that in the future I'd definitely opt for command and control. Here's why. Network production has huge advantages. Often it can rally resources far in excess of what any centralized effort could produce, tapping the latent pool of distributed investments in both physical and human capital that members of the network have already made and that are simply sitting around idle. In so doing, it often allocates production far more efficiently, optimizing for the different talents and interests of the various members of the network. And in a case like this, it would also optimize for the diminishing utility of time (20 mins of labor would have trivial impact for everyone, whereas the 3 hours it would take for me to do it myself would clearly have an impact). Network production meanwhile generally produces a much wider diversity of outputs, allowing people to find things more customized to their tastes, even if the average output were of lower quality than the more regulated, more limited output of a command and control structure.

But the problem is that for these parties, I need to be able to offer quality of service (QoS) -- otherwise I can't justify the risk of inviting certain people, friends I'm perhaps hanging out with for the first time and therefore need to put on a good show for. Networks aren't great at this, and can generally overcome the problem only either with strong cultural pressures (eg the original Northwest Native American potlatches) or with tremendous redundancy (eg the web, where 95% of things are crap, but there's such a huge abundance that the remaining 5% is more than enough to compete with centrally organized products). Neither of these conditions would have been met in the case of the party, as you'd have needed near 100% participation and it would have been within an ad hoc group amongst which there was limited social pressure (I'd sent out invites only the night prior, and it was a mix of people from different places, such that most people didn't know each other). Meanwhile, there are always efficiencies of centralization -- if it takes 15 people 20 mins each, that's 5 hrs total, vs the 3 hrs it took me alone given I could make just 1 trip to Eataly, 1 trip to the wine store, 1 trip to get plates/cups etc. And the benefits of distributed production need always be weighed against the transaction costs, which for this situation would still be high... I (or a platform) would have to somehow know who's good at what, who lives near what specialty stores, dole out assignments without redundancy, etc. Emerging platforms might facilitate this sort of coordination (why for instance I'm pushing friends to join services like Latitude), but nothing at present I think would bring those transaction costs low enough to make the equation work for the level of quality assurance I was looking for.

Earlier this month, I posted the following on Facebook:

"I've said the thesis explains a lot of my personal prefs. Turns out, actually quite central. The idea is that upfront, fixed investments (in production assets, in employees v contracted services, in internalized data, in who you hang out with on any given night) are only a way to solve the problem of difficult on demand access. As transaction costs lower, these fixed structures become both unnecessary and inefficient."

I love this idea of my two great passions in the world coming together, digital media and good times with friends (which any of you who know me know what that's partially code for ;). When one is lucky enough to have this occur, the two amplify one another. Leveraging the mechanics of distributed production that I'm increasingly coming to understand is, I feel, making my life infinitely better and easier... it's helping me understand and channel the way I work, helping me build networks of people I care about, helping me plan dinner parties. And then it runs in the other direction... I think of that scene in A Beautiful Mind where John Nash discovers non-cooperative equilibria because he and his buddies are deciding how best to chase some girls in a bar. It's these situations I think, the ones we really understand because we've lived them, that help make clear some of the fundamental issues, with the added benefit of helping others understand.

And so a toast... to the limitations of network production and the convergence of passions.

Saturday, February 12, 2011

Wisdom of the Rat - In Defense of Network Production

This post is pretty much straight from some NYU work. It discusses a critical notion of network production, the idea of publish-then-filter (based on market selection) rather than filter-then-publish (credential then trust as authority). It's part of the broader notion of the thesis that as transaction costs decline and the benefit of making upfront investments and decisions declines, it makes more and more sense to do these things ad hoc, tailored to each situation and for each particular question.

In Pixar's award winning animated feature Ratatouille, genius is found in the most unlikely of places. Remy, the story's rodent hero, comes from a long line of undiscriminating rats, who eat whatever they find, garbage or otherwise. But Remy is different. His profound love of food and flavor drives him to the culinary arts and a secret career in the strictly-gated circles of Parisian haute cuisine. What sets Remy off on his journey is the motto of the chef Gusteau, "anyone can cook." At first, Remy interprets this, optimistically enough, to mean that cooking is for everyone – that if they try, all people can understand its beauty and create great things. By the end of the film though, it becomes clear the slogan embodies a narrower, if still encouraging, meaning. It's not that cooking is easy and with a little effort, everyone can master it – some people will always be better than others (as the film's obliging foil Linguini comes ultimately to realize). But the idea is that genius might be found anywhere, and you don't know who that great cook is going to be. It might be someone unexpected or uncredentialed... perhaps someone not even human.

Remy's journey is of course emblematic of the production studio behind its creation, the upstart Pixar, who believed it could bring beauty and inspiration to that oft-disregarded genre of animation. But it also brings valuable insight to this week's discussion on new modes of production and how crowdsourcing is changing methods of labor on the Internet.

While I disagreed with much of it, I really enjoyed the video at the bottom of the Wolfshead post, featuring commentary from Andrew Keen and others criticizing crowdsourcing as diluting knowledge by superseding traditional notions of credentialing. Keen compares enthusiasm for web 2.0 technologies and their empowering of everyday individuals over established experts to the philosophy of Rousseau. This he frames as belief in the inner greatness of man and the innocence of youth (e.g. Emile), gradually corrupted by society. The analogy however is off. Keen paints web enthusiasts as believing what Remy at first takes Gusteau's motto to mean... that we all have it inside of us, that anyone can cook. This is not right. What the web enthusiast rejects is not the notion that an expert might be better informed, but merely the credentialing system that, due to historical constraints no longer in place, assumes this necessarily to be the case.

In Ratatouille, Remy's culinary talent is portrayed as in conflict with the established French cooking scene. In the end, he's basically right and they're basically wrong. This is the point Keen seems not to be able to sign onto. In the video, he and others seem constantly to lament the idea that on Wikipedia, 14-year-old kids can correct well-established professors.

To begin with, the proposed conflict is again misleading... often it's in fact traditional experts writing articles in their areas of focus, just doing so voluntarily. Moreover, something like Wikipedia covers areas far beyond the spheres in which 'experts' typically operate, for instance obscure MMOGs or very local subjects. In these cases, not only might a particular 14-year-old in fact be the most qualified to comment (if that 14-year-old was, say, the highest-ranked player in the game or happened to be on hand at the scene), but even if he weren't, it's him or no one, and I'd think something is better than nothing.

But these are merely side points. In the web context, the real counter to Keen is that it's not a question of justifying credentials, but rather of letting content stand for itself. If a professor writes a wiki article on a subject she knows a great deal about, that's great. If a 14-year-old then comes along and corrects some piece of it, the community has the opportunity to decide whether this correction is right or wrong. This doesn't mean ignoring credentials... if the professor supported her facts with a link to her own or someone else's published paper, whereas the 14-year-old had much less established references, the professor's should win out. It's a question of opening the production process to market/democratic selection to determine the outcome.

Keen of course dismisses this as the core problem inherent in the Cult of the Amateur... that it creates what comedian Stephen Colbert calls 'truthiness,' where fact is determined by the whims of the crowd rather than by reality and the individuals qualified to judge it. This is misleading though, for there's no such thing as objective reality. All 'facts' are subjective... they are simply proposals that society has generally accepted as true. Often, we designate certain individuals (like scientists or academics) to tell us what's true, and then simply trust them. But there's no reason to believe these individuals are always 100% correct about everything they discuss. New media collaboration platforms such as Wikipedia offer the opportunity to balance this trust with input and evaluation from a much wider group.

To the extent there are systematic biases in any collaborative production system, such as those Wolfshead laments, we should work to correct them. And new media literacies are absolutely required to help us understand the limitations of this new form of information production and the appropriate ways to negotiate its products. But collaborative, democratized production is an evolutionary process that is constantly expanding the record of human knowledge at a rate far beyond any effort previously. And empirical evidence suggests it's working pretty well – independent studies show time and again that the quality of the average Wikipedia article is on par with that of the average Encyclopaedia Britannica article. It would be a shame to disregard this incredible resource.

As Ratatouille’s Anton Ego points out, “the world is often unkind to new talents, new creations. The new needs friends.” Without a doubt, I believe Wikipedia and crowdsourced production in general belong in this category -- and, quite certainly, I intend to be one of them.

Sunday, February 6, 2011

The New Reading

This was a post I wrote in early October about new forms of literacy, responding to Nicholas Carr's argument in his article "Is Google Making Us Stupid?" in the Atlantic. The post builds off some of the work I'd done for my undergrad thesis around new media literacy, although it came before the 'distributed production epiphany,' so I'll definitely look to come back and give this an updated treatment sometime soon.

In the meantime, the basic idea is that Carr suggests we are being inundated by data and it's causing us to lose the ability to think deeply about things. I respond by suggesting what we're actually losing (or perhaps, choosing to abandon) is simply that traditional notion of 'deep thinking' that involves only acquiring and internalizing knowledge through passive textual decoding. Today, I suggest in the post and will elaborate on in future ones, literacy demands the ability not only to receive knowledge, but to actively pursue, evaluate, combine, and deploy it. So while I think Carr is right to note this change in the way we interact with information, I think he and all of us should be careful to think about the profound opportunities and necessities of new systems, and not simply disregard them out of hand because they're different.

In his article “Is Google Making Us Stupid?,” Nicholas Carr laments what he sees to be the harmful, homogenizing effect proliferating access to information is having on human mental faculties. Inundated by data, he suggests, we jump around from topic to topic and are gradually losing the ability for deep thought.

I strongly disagree with Carr.

Presumably, what Carr is concerned with is critical thinking – our ability to acquire and deploy knowledge to impactful ends, the foundation of both individual and societal progress. Critical thinking has two components, which I generally dub computation and experimentation. Computation is the simpler of the two... it involves executing a set of already-decided steps to determine some answer. It is by nature goal directed. Experimentation is the process that determines which of these sets of steps should be executed. It is 'random,' or alternatively, creative in nature.

High level thought processes involve recursive deployment of computation and experimentation. Imagine a scholar seeking to answer some fundamental and yet to be understood question in her field. Likely, she would start by acquiring knowledge – reading up on topics around the question. She probably doesn’t know exactly where she’s going or exactly what knowledge she needs to obtain, but she’s creating a base dataset with which to begin her inquiry. Gradually, she will likely develop some overarching hypothesis. To prove it, she will head down some particular path. She will start reading up on some sub branch of the field, design some experiments to test out the hypothesis and execute them. If these don’t bear out, she will find some other path and try it out, or perhaps she will disprove the hypothesis entirely, look for some new explanation and then repeat the process until eventually she finds the answer.

At each point in this process where the scholar is selecting a new mode of inquiry – in choosing hypotheses, in choosing the paths by which to pursue them – the scholar is engaged in experimentation. She is somehow synthesizing the information she's internalized and using it to creatively generate an idea and a set of actions. At each point where she's executing these sets of actions, she's engaged in computation.

In all modes of critical thinking, we use media as aids to both experimentation and computation. When a person is brainstorming and taking notes on a whiteboard, she's using a physical encoding medium to augment the capabilities of her brain, freeing up processing power in her working memory by offloading the storage of certain data elements. The person may for instance be writing down each area she explores as she does so. This aids in her experimentation by making clear which areas she's already explored and which remain free for her to venture into.

With the invention of computers though, media can take on the even more important role of handling computational processes altogether. As described above, computation is the set of elements in critical thinking that involve execution of predetermined steps. These steps – not involving any sort of randomness or human creativity – can be offloaded to machines. When children learn math at early stages, they first are taught the process of computation. They memorize multiplication tables and do long division with pencil and paper. When it’s time though for them to move onto more complex areas of mathematics such as algebra, calculus, and trigonometry, these more rudimentary computational processes get offloaded to a calculator.

This process of using external resources as part of mental processes is known as distributed cognition (examples thus far have included offloading only to media and computational devices, although distributed cognition can equally well involve offloading to other people). As with any complex information architecture, the key qualities of a distributed cognition setup are data storage, data access, and data processing. How much information can be stored? Where will it be stored and what does this mean for how easy it is to get to? What sort of manipulation on the data is done once retrieved?

The driving motivation for things like Carr’s ‘deep reading’ or a scholar spending years researching in an area to become expert in it is internalization of data for fast access and abstract processing. A literature scholar writing on a novel needn’t memorize its every word to efficiently analyze it. When she’s engaged in critical thinking on a particular paragraph, certainly she may read that paragraph deeply and try to bring much of it into her working memory cache where she can more easily manipulate its components toward creative and analytic ends. Generally though, the text is encoded in a permanent physical medium (it’s written down), and so when she turns to analyze the next paragraph, there’s no need for her to expend limited mental resources holding the exact data of that previous paragraph in memory.

Computers today are great at storage and computation – they can hold hundreds of gigabytes worth of data with perfect fidelity indefinitely and can execute trillions of serial processing tasks every second. The problem though is that they're not good at creative thinking, which means we need to do it, and creative thinking requires fast access to data and computation – you need to be able to dart around to different ideas, know quickly whether a particular path is even worth exploring. Historically, the time it would take to access data stored externally or launch an external computation process meant that the only way to achieve this sort of creative analysis was to internalize everything.

As access to information and computational processes proliferates, however, and becomes infinitely more efficient, this changes. More and more of the lower levels of critical thinking can be offloaded to external, distributed sources – I can check the relevant wiki page when I need some fact about Georgia in the 1850s, I can see what the first Google link is when I type in "current social pressures in Botswana" – leaving us free to roam in the more important, more abstract and creative elements of critical thinking. Not only though does information proliferation make this sort of offloading possible, it makes it ever more critical. For as information proliferates, it becomes impossible to internalize it all and the value of any particular piece of information declines proportionately.

Distributed critical thinking is of course complex, and teaching it is one of the main goals of so-called "New Media Literacy" education programs. Relevant to Carr's argument, new media literacy involves learning not only how to deploy distributed resources, but also when. Indeed, it will often still make sense to 'read something deeply' so as to internalize it. And individuals must practice and retain this skill. But the skill that is equally and ever more important is the ability to determine which content to process and how to do so. To learn how to jump around from link to link, finding relevant tidbits that help inform the main argument and add additional components beside it. How to skip paragraphs entirely when they describe information you already possess internally. How to execute any number of information consumption variants besides simply sitting down and reading a thing from point A to point B.

Certainly, not every act of digression is an enlightened demonstration of distributed cognition. Some people may well have become a bit ADD sometimes – reading a lot and internalizing little, high or low level. Each citizen of this century though owes it to herself to learn the skill of information negotiation, and we as a society owe it to one another to aid in this process. While Carr's correct to call out what's clearly an evolution of literacy and a problem that amounts to one of its biggest traps, we must be sure to view it as such -- a call for smart, controlled change, and not merely a blind or nostalgic pining for a literacy of the past.