2005-12-21

Central hosting of all javascript libraries

Ian Holsman voices an idea I have long thought ought to be realized, and which would do the web very much good: setting up a central repository of all versions of all major (and some minor ones too, of course) javascript libraries and frameworks, in one single place.

Pros

Why is this such a good idea? There are several reasons. Here are a few, off the top of my head:

  • Shorter load times. While bandwidth availability has generally increased in recent years, downloading the same bits of code over and over for every site that uses the very same library is wasteful. The more sites that import a library from one canonical URL, the quicker those sites will load, since visitors will, to a far larger extent, already have the code cached locally. Shared resources make better use of the browser cache (see the example just after this list).

  • Quicker deployment. There are reasons why projects that have already taken this approach (Google Maps, for instance) spread like wildfire.

    Multiply the number of libraries in active development by their number of releases over time and by the average time it takes a developer to download and install one such package. Multiply that figure by the number of developers who choose to use (or upgrade to) each such release, minus one, and we have the amount of developer time that could instead go into active development (or hugging somebody, or some other meaningful activity that adds value in and of itself).

    Moving on, this point has additional good implications, building on top of one another:

    • Lowering the adoption threshold for all web developers. This is especially pertinent to the still very large body of people stranded in environments where they have limited storage space of their own, or indeed cannot even host text/javascript files in the first place.

    • Increased portability is a common effect of standing on the shoulders of giants such as Bob Ippolito, David Flanagan, Aaron Boodman, and all the other very knowledgeable, skilled artisans who know more about good code, best cross-browser practices, good design, and more, than the average Joe cooking up his own set of bugs and browser incompatibilities from the limited perspective of what works in his own favourite browser. Using code wrought by these skilled people, reviewed and improved by great masses of other skilled people, adds up to some serious quality code over time. While there are good reasons for not adopting typical frameworks, the same does not hold for the general case of libraries.

  • Better visibility of available libraries through gathering them all in one place, in itself likely to have many favourable consequences, such as spreading good ideas across projects more quickly.
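
To make the caching point in the first item above concrete, here is roughly what the include would look like on any participating site; the host name, path and file name are of course placeholders for whatever the repository ends up using:

    <!-- Every site importing the same MochiKit release from the same canonical
         URL means many visitors will already have the code cached. -->
    <script type="text/javascript"
            src="http://lib.example/mochikit/1.1/MochiKit.js"></script>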

Cons

There will of course never be 100% adoption of any endeavour such as this, but benefits will grow exponentially as more applications and web pages join up under this common roof. There are also reasons for not joining up, of course, many ranging along axes such as Distrust, Fear, Uncertainty and Doubt.

Trust is important. A party taking on this project needs to have or build an excellent reputation for reliability, availability, good throughput, response times and security (this is not the site you would want to see hacked). It must also, naturally, never abuse the position of power that comes with being able to serve any code it chooses to the millions of web sites relying on it for their core functionality. On the other hand, failing to live up to that trust would be an instant kiss of death for the host's role in this, so that particular danger would not keep me awake at night.

Why?

Doing this kind of massive project, and doing it well, would be a huge goodwill boost for any company that attempts it and succeeds (and happily, it would not hurt the web community if multiple vendors were to take it on). It's the kind of thing I would intuitively expect of companies such as Dreamhost (but weigh my opinion on that as you will -- yes, it is an affiliate link), who have an outstanding record of being attentive, offering front-line services and aiming high at growing better and larger at the same time. I'm not well read in the field, so my guess is as good as anyone's -- though after having read Jesse Ruderman on their merits a year ago, I think my own migration path was set. But I digress.

How?

The first step is devising a host name and directory structure for the site. Shorter is better; lots of people will type this frequently. For inspiration, di.fm is a very good host name (for another good service) -- but if you really want to seize the opportunity of marketing yourself by name, picking some free spot in the root of your main site, http://google.com/lib/ for example, will of course work too. (But if you do, don't answer on that address with a redirect; that would defeat the goal of minimum response time.)

Let's say we end up at http://lib.js/. For reasons apparent to some of the readership we probably won't, but never mind that. Devise short and to the point paths, and place each project tree verbatim under its name and version number, however these may look. http://lib.js/mochikit/1.1/ could be one example. In case you would like to reserve room in your root for local content, be creative: http://lib.js//mochikit/1.1/ would work too; your URL namespace semantics are your own, though few challenge classic unix conventions today. Pick one and stay with it for all eternity.
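
As a tiny sketch of that convention (the host name and exact file layout here are just the assumptions carried over from above), a page or loader could assemble canonical URLs mechanically:

    // Hypothetical helper: project, version and path follow the
    // project/version/tree-verbatim scheme sketched above.
    function libUrl(project, version, path) {
        return 'http://lib.js/' + project + '/' + version + '/' + path;
    }
    // libUrl('mochikit', '1.1', 'MochiKit.js')
    //   => "http://lib.js/mochikit/1.1/MochiKit.js"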

Line up all other libraries side by side with the first, starting with those more popular today and moving on to lesser-known niche software. Once you have achieved good coverage of the current most-wanted releases, add as many past versions of all projects as you can get your hands on. This makes it possible to migrate forward in due time, as each party chooses, and to quickly import past projects and sites without doing any prior forward-compatibility testing.

You might also opt to set up and maintain a rewrite rule list for linking the most recent version of every library under an alias common to all projects, such as http://lib.js//dojo/latest/. Not because it is a very good idea to use for published applications, but because it is a nice option to have. Maintaining a second set of rewrite rules that does proper HTTP redirects is another nice option, where http://lib.js/latest/dojo/ would bounce away to http://lib.js//dojo/0.2.1/. Today, anyway, and assuming the URL semantics sketched above.
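
For concreteness, such a setup could be as small as the following Apache mod_rewrite sketch; the map file location, project names and versions are made-up examples, and I use the plainer single-slash layout here for readability:

    # latest.map pairs each project with its current release, for example:
    #   dojo      0.2.1
    #   mochikit  1.1
    RewriteEngine On
    RewriteMap latest txt:/etc/libjs/latest.map

    # Serve /project/latest/... straight from the current release (no redirect):
    RewriteRule ^/([^/]+)/latest/(.*)$  /$1/${latest:$1}/$2  [L]

    # Bounce /latest/project/... away to the canonical versioned URL:
    RewriteRule ^/latest/([^/]+)/(.*)$  /$1/${latest:$1}/$2  [R=302,L]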

That's your baseline. Next, you may choose to line up project docs in their own tree branched off at the root, gathering each project's documentation under something like http://lib.js/doc/prototype/1.0/ and otherwise keeping the project's native documentation file structure below that point. Similarly, you may of course over time also provide discussion forums, issue tracking and other services useful to library projects, but don't rush those bits; they are not your core value, and most projects already have their own and might well not look favourably on having them diluted by your (well-meant) services.

Invocation

Ajaxian further suggests integrating such a repository with mechanisms for programmatic import by way of each library's native methods (or at least Dojo's; not all libraries reach the same level of sophistication, unfortunately), rather than the time-tested, minimal-footprint "write a <script> tag here" approach. Also an idea in very good taste.

Adding selective library import tools by way of native javascript function calls from the live system, rather than pasting static bits of HTML into the page head section, is where wise minds must meet and ponder the options, coming up with a really good, and probably rather small, set of primitives defining the core loader API. It will very likely not end up shared among all projects, but over time projects will most likely move towards adopting it, if it turns out as good as it should.
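
Whatever shape those primitives take, the calling surface could stay about this small; lib.load and its argument order are pure conjecture on my part, not any existing project's API:

    // Hypothetical primitive: fetch a library from the central host and run
    // the callback once its code has been evaluated.
    lib.load('mochikit', '1.1', function () {
        // MochiKit is available here; carry on with the actual work of the page.
    });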

The coarse outline of it solves the basic problem of code transport and program flow control: fetching the code, and kicking off the code that sent for it in the first place. One rather good, clean and minimalist way of doing it is the JSONP approach, wherein the URL gets passed a ?callback=methodname parameter, which simply adds a methodname() call to the end of the fetched file. (Depending on what ends up coming out of the final API discussions around the bearded gentlemen's tables, there may of course be some parameters passed back to the callback too, or some other scheme entirely may be chosen.)
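
On the wire, that variant is no more involved than this; the URL, file name and callback name are placeholders:

    <script type="text/javascript"
            src="http://lib.js/mochikit/1.1/MochiKit.js?callback=onMochiKit"></script>

The repository would then serve the ordinary library file with a single line appended at the end:

    /* ...the library code itself... */
    onMochiKit();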

Methodname here is used strictly as a placeholder; any bit of javascript code provided by the calling party would be valid, and most likely the URL API will not even be exposed to the caller herself; the loader will probably maintain its own id/callback mapping internally. As present javascript-in-browser design goes, though, on some level the API will have to look like this, since there is no cross-browser way of getting onload callbacks, least of all for <script> tags.
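
A loose sketch of what such an internal mapping could look like; every name, the callback parameter and the main file convention are invented here for illustration, not taken from any existing loader:

    var lib = {
        _nextId: 0,

        // Fetch a library from the central host; run callback once the
        // repository-appended JSONP call announces that the code has arrived.
        load: function (project, version, callback) {
            var name = 'lib_cb' + this._nextId++;
            window[name] = function () {
                window[name] = null;    // one-shot; tidy up after ourselves
                callback();
            };
            // Where the main file lives under /project/version/ is a repository
            // convention; "project.js" is just a guess for illustration.
            var src = 'http://lib.js/' + project + '/' + version + '/' +
                      project + '.js?callback=' + name;
            document.write('<script type="text/javascript" src="' + src +
                           '"><\/script>');
        }
    };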

On the whole, this project is way too good to be left unattempted. Any takers? You would get lots of outbound traffic in a snap -- good for peering. ;-)