
First-time visitor: “Hi, my name is David, and for years I didn’t really understand the power of web apps.”

Support group participants, in concert: “Hello, David.”

There, now I’ve said it—I am finally starting to understand what people have been getting so excited about for the last decade. That’s oversimplifying, of course, but I’m coming to appreciate more and more how the web is changing the rules.

A few factors shaping my current thinking are Gmail, Bloglines, and Paul Graham’s Hackers and Painters.

Web apps are valuable because they’re public

Like many techies, I’ve been playing around with Gmail (yes, I periodically have invites left over), mostly to read new mailing list traffic. Not surprisingly, it’s quite good at some things (finding old mail, structuring conversations, user interface), and surprisingly bad at others (e.g. prioritizing email for efficient slogging through new mail by understanding the different types of messages). Mostly, though, it’s a very Googlish app because it redefines cool by throwing away decades of groupthink around the value of eye candy. Kudos to them for that. (I have lots of other thoughts about Gmail that I’ll defer to another post.)

I’ve also become a fan of the blog aggregator Bloglines. Gmail and Bloglines are both fascinating because their value is (so far) tied to their living in the ‘public web’. Not only does their scalability make them interesting from a business point of view, not only are they remarkably simple systems (just ‘good ideas’!), not only are they better than desktop apps because they’re accessible from anywhere, but they have fascinating additional potential if they get to exploit Metcalfe’s law. Bloglines already achieves some of this by suggesting blogs I may want to read, based on my subscriptions and their correlations with other subscribers’ interests. Gmail, thus far, is bound by the constraint that email is email is email, even though communication between Gmail accounts could be much richer than the ‘standard’. Given Google’s track record, I fully expect them to do much more with Gmail, Google Groups, and Blogger than they have thus far, and to capture the power of the masses before the power of the fiefdoms that are corporations.
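To make the Metcalfe point a bit more concrete, here is a toy sketch, in Python, of how a feed suggester along those lines might work. It is purely my own guess at the general idea: the feed names and subscriber data are invented, and I know nothing about how Bloglines actually computes its suggestions.

    # Toy "subscribers who read what you read also read..." suggester.
    # All of the data below is made up for illustration.
    my_feeds = {"python-dev", "joel-on-software", "paulgraham"}

    other_subscribers = [
        {"python-dev", "paulgraham", "daily-python-url"},
        {"joel-on-software", "paulgraham", "boingboing"},
        {"python-dev", "daily-python-url", "effbot"},
    ]

    scores = {}
    for feeds in other_subscribers:
        overlap = len(my_feeds & feeds)    # how much do our tastes overlap?
        for feed in feeds - my_feeds:      # feeds they read that I don't
            scores[feed] = scores.get(feed, 0) + overlap

    # The highest-scoring feeds are the ones favored by the subscribers most like me.
    print(sorted(scores, key=scores.get, reverse=True))
    # -> ['daily-python-url', 'boingboing', 'effbot']

The appeal is exactly the Metcalfe effect: the more subscribers the service has, the better those correlations get, and a desktop aggregator never sees that data at all.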

Tagging along with the acknowledgement that, for these apps at least, ‘the power is in the web’ is the realization that it’s hard for me to see how systems like Gmail and Bloglines could mesh with firewall-bound modes of operation. I can’t have work email on Gmail. I can’t read internal blogs with Bloglines. As long as that’s true, web apps will always have to live alongside similar apps that live inside the firewall (either on desktops or on internal websites). Frustrating to say the least, given that I don’t see how firewalls (or, more generally, the notion of private data) are going to go away anytime soon. I think I’m noticing this now because I never really cared to integrate “internal” data sources with Amazon, eBay, online radio, newspapers, or any of the other “old” web sites that are part of my life. When it comes to email, blogs, and bookmarking services like del.icio.us, I’m finding a fundamental tension between the need for some privacy and the value of being public, with no clear answer in sight.

Web apps as a fundamentally different software publication model

One of the best parts of Paul Graham’s new book was the chapter in which Paul explains how much more sane he finds publishing software that lives on a server you (the publisher) run compared to shipping bits to customers. The biggest appeal from an end-user point of view (which in the end is all that should matter) is the fact that the application can change progressively when the code is ready, because there are no major releases. Gmail changes periodically, and sometimes I notice, sometimes I don’t. As long as the changes are subtle and for the better, my user experience is nothing but good. Another consequence of web application architectures is that, due to the simpler modes of interaction with the software (the browser), automated testing & QA is more easily achieved, thereby putting fewer steps between the conception of a feature and its delivery to customers.
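As an aside on the testing point: because the whole interface is HTTP and HTML, even a crude script can exercise a deployed app end to end. Here is a minimal sketch in Python (my own illustration, not anything from Paul’s book; the URL and the marker strings are made up) of the kind of smoke test that can run automatically against every change as it goes live:

    import urllib.error
    import urllib.request

    # Hypothetical address of the app we just pushed.
    BASE_URL = "https://webapp.example.com"

    def page_contains(path, expected_text):
        """Fetch a page over HTTP and check that a known marker string appears."""
        try:
            with urllib.request.urlopen(BASE_URL + path) as response:
                body = response.read().decode("utf-8", errors="replace")
                return expected_text in body
        except urllib.error.URLError:
            return False  # unreachable or HTTP error: treat as a failure

    # A tiny smoke suite: no installers, no customer machines to worry about,
    # just requests against the one copy of the software everyone uses.
    checks = [
        ("/", "Inbox"),          # the landing page renders
        ("/login", "Password"),  # the login form is present
    ]

    for path, marker in checks:
        print(path, "ok" if page_contains(path, marker) else "FAILED")

Nothing about that is clever; the point is that the feedback loop from “the code is ready” to “customers are running it” can be measured in minutes rather than in release cycles.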

The alternative model (employed by most publishers of “downloadable bits”) consists of releases that go through typically slow QA cycles and have associated marketing rollout plans, more or less rigid schedules, known bugs, patch releases, etc. Central to that model is that releases are expensive (even if they’re cheap for the publisher, customers still need to upgrade, and they never all do). Some silver linings have been found in that state of affairs: “major” releases are often seen as valuable because they provide stable definitions of what’s in the product, which in turn makes it easier to define the value of the product, document it, discuss it, etc. It’s also true that marketers like occasions to make a splash, salespeople like occasions to talk to customers, and developers like to know that they’re “done!” (at least for now).

Note that most open source software projects follow this model, so it’s not just “marketing folks” who think this way. Open source communities bicker for months about what should go in a release, and users would be quite befuddled if there were no way of knowing “which version of Python you’re running”.

Which is better? Paul clearly believes the web-based, continual release model is better. Me, I’m not sure.

Speaking as a developer, I’m torn—continuous release is a goal most engineering teams strive for constantly—yet sometimes evolving an app through continual working stages is just too hard (and managing parallel development branches is often even harder).

As a manager, my mind reels at the agility that continual release to customers requires of every part of the operation, from the doc writer (the doc plan is “what changed yesterday?”) to the sales people to the bug triage process (last week’s bugs may not be reproducible, but is that because they’re really gone?). At the same time, the appeal of reducing organizational and individual release stress is significant.

Speaking as someone who has to understand the impact of effort on revenue, I’m also torn. The two different models clearly imply different ways of monitoring the state of the business—in one case, one gets to judge releases by relatively coarse metrics such as upgrade rates, size of revenue “spikes” tied to launches, etc.; in the other, one relies on day-to-day measurement of the efficiency of the online “machine”. I can imagine scenarios where it becomes very hard to know whether changes in the apparent health of the business are due to code changes, graphics changes, or the fact that it’s Friday.

As a user interface wonk, I worry that with the ‘constant evolution’ model, if an app works “differently” every day, it can become harder for the user to gain comfort with it. I suspect that the model Paul argues for works great for technically savvy, “high-end” users, but would be harder to get past “mom & pop”. Google’s search engine has changed veeerry sloooowly. Then again, it’s much too easy to underestimate the ability of people to adapt to change. I wonder whether eBay evolved as rapidly as Paul’s online store generator did, even after matching for the ‘maturity’ of the apps.

Update: I’m not the only one to think that; see Drew McLellan’s post on the topic.

Regardless, I’m grateful to Paul for making the point so clearly and helping frame my thinking. These bits of information, combined with the cumulative effects of hearing Tim O’Reilly speak about the changing face of the software ecosystem for years, are finally starting to affect my world view. It feels good.
