2015-08-17

During the recording of the web platform’s “Are Web Components ready?” podcast one of the comments stuck with me:

With web components we’re trying to bring ES6-era technology into an ES5 world. That makes no sense.

There is a lot of interesting logic in that one. Right now, we’re in a bad place with the web. There is a big discussion about what we should consider the “modern web” and how to innovate it. And the two sides of it are at loggerheads:

Purists of the web frown upon JavaScript dependency and expect no new feature to break the web. Instead, it should build upon what we already have. These are developers wearing the battle scars of the browser wars. I have counted myself amongst them for a long time now – I just like things that work and have been disappointed by browsers once too often.
The more “pragmatic engineering” crowd sees the web as a software platform that needs evolving like every other one does – and one that is falling woefully behind. Native platforms on mobile, for example, do not worry about breaking existing experiences. It is OK to require the user to have a certain version of an OS. The same – in their view – should be OK on the web.

Both are correct, and both are wrong. And I am sick of it. Instead of trying to fix the situation, we bicker over ideas and methodologies. We christen new development approaches with grandiose names. Then we argue for days on end about what these mean. We talk about “real world use” while only looking at numbers skewed in favor of certain solutions. And while all that happens, we’re losing the web.

I’m not buying the tales of woe that all users prefer native to the web now. That’s a short-sighted view backed up by sales numbers in affluent countries and our own circles. There is a massive audience out there with no smartphones.

I also don’t buy the argument that native is a fad and the web will prevail in the long run. We need to get better with the web and its technologies. And we won’t do that by pretending what we did twenty years ago is still great.

There is an argument for leaving old browsers without new functionality. The funny thing is that this is also what I, as a person who doesn’t want to break the web, believe in.

Should we stop pushing the web forward?

A few weeks ago Peter-Paul Koch kicked the hornets’ nest when he proposed a one-year innovation hiatus for browsers in his “Stop pushing the web forward” post.

He pointed out that there is a problem: we have many standards and proposed solutions to the shortcomings of the web, but all of them are still far away from implementation across browsers.

He also pointed out a problem with adoption speed: none of the proposed standards managed to get any traction within a year.

Web Components are the biggest culprit there. This was also one of the findings of the Web Components/Modules panel at EdgeConf this year. It seems that without libraries, Web Components are more or less unusable right now. This is partly because there is a lot of consensus yet to be found as to what they should achieve.

It is hard to write a standard. It is hard to get consensus and buy-in from various browser vendors and their partners. And it is hard to make sure we don’t put a standard in our browsers that turns out to be less than optimal once we have it in there. We had enough of those already.

This is where JavaScript comes in. It has always been the means of adding upcoming functionality to the browsers of now and the ones of the past.

JavaScript is powerful

The great thing about JavaScript is that it spans all the layers of web development. You can create both HTML and CSS with it (using the DOM and CSSOM, or by writing out styles inline). You can do so after you have tested for the capabilities of the browser and – to a degree – of the end user. You can even create images, sounds, 3D environments – well, you name it.
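
As a minimal sketch of the first point (the class name and styles here are made up for illustration), generating HTML via the DOM and CSS via the CSSOM takes only a few lines:

// generating HTML with the DOM
var banner = document.createElement('div');
banner.className = 'banner';
banner.textContent = 'Hello from JavaScript';
document.body.appendChild(banner);

// generating CSS with the CSSOM
var sheet = document.head.appendChild(document.createElement('style')).sheet;
sheet.insertRule('.banner { background: #336699; color: #fff; }', 0);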

JavaScript also successfully moved away from the client side and powers servers, fat-client applications and APIs. In these environments you control the JavaScript engine. Originally this was only V8, but now Chakra is also available as an alternative. This sort of control is great for developers who know what they are doing. It also gives us the wrong impression that we could have the same on the web.

The bad thing about JavaScript is that all this power is also available to people too busy to use it in a thorough fashion:

User agent sniffing is rampant and woefully wrong.
A lot of solutions test for touch support and then assume a smartphone, leaving users of touch-screen laptops and desktops with the wrong interface (see the sketch after this list).
Many users of libraries trust them to fix all issues without ever verifying that they do.
A lot of user agent sniffing checks for the name of a browser instead of the buggy version, making fixing those bugs a futile exercise for that browser – it will forever get served the patch.
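
A sketch of the touch trap (showMobileInterface, element and activate are made up for illustration) – testing for touch only tells you touch exists, not that mouse and keyboard are gone:

// wrong: touch support does not mean "smartphone"
if ('ontouchstart' in window) {
  showMobileInterface(); // a touch-screen laptop now gets a phone UI
}

// better: click fires for mouse, keyboard and touch taps alike
element.addEventListener('click', activate);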

There is no doubt that the use case for JavaScript has changed in the last few years and that – for good or worse – our solutions rely on it. This is OK, as long as we ensure that only those environments that can deal with this functionality get it. We cannot let developer convenience result in empty pages.

Empty pages are empty pages

XHTML was killed off because it was too unforgiving. Any encoding problem in our files would have meant our users got an XML error message instead of our products. That’s why we replaced it with HTML5, which uses a much more forgiving parser.

The same problem applies to JavaScript. It is not fault tolerant. Any error that happens is fatal to the execution of the program. To make matters worse, even the delivery of JavaScript to our end users is riddled with obstacles, bear traps and moats to cross. If you cannot rely on the delivery of your helper library, you can easily end up with an empty page instead of the 60fps goodness we work so hard to deliver.

It is time to fix JavaScript

And we need to change JavaScript. It is possible to do virtually everything in JavaScript, and you learn about new things and quirks of the language almost every week. Whilst this is fun, it also holds us back. Our success as developers used to be defined by knowing how browsers mess up. Now that we have mostly fixed that, our job should not be to know the quirks and oddities of a language – it should be to build maintainable, scalable and performant software products.

Our current attempts to improve JavaScript as a language have a few issues. Older browsers that get an ECMAScript 6 script taking advantage of all the good syntax changes see it as a syntax error and break.
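
A single line of new syntax is enough to illustrate the problem – the whole file fails at parse time, before any of it runs:

var numbers = [1, 2, 3];
// an ES5-only parser throws a SyntaxError on the arrow function below –
// not just this line fails, the entire script never executes
var doubled = numbers.map(n => n * 2);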

That brings us to an impasse: we want to innovate and extend the language, but we have to deal with the issue of legacy environments. Our current solution is the one we always took in case of confusion: we abstract.

Abstraction languages and transpiling

At first glance, this seems like a great idea: we let computers do what they do best – crunching numbers and converting one thing into another. Using a language like TypeScript or a transpiler like Traceur or Babel, we gain a lot of benefits:

End users don’t get broken experiences – the transpiler converts ES6 to understandable ES5. They may get a lot more code than they would in an ES6-capable environment, but avoiding that would mean they’d need to change their environment – something people only do for very good reasons. Our products are not reason enough – sorry.
Developers can use the terser, cleaner syntax of ES6 right now without worrying about breakage in older browsers.
We can embrace the more structured approach of classes in JavaScript instead of having to get into the “JavaScript mindset”. This means more developers can build for the web.
We control the conversion step – turning ES6 into code that runs across browsers happens in the transpiler, a resource we control. That way we could convert only what is necessary into legacy code and use natively supported features where they are available.

Alas, the last part doesn’t happen right now. Transpiling as it stands is too slow to do in the browser, which is why we do it on the server side. There we cannot do any capability testing, which means we convert all the ES6 features down to the lowest common denominator. That way the native support for ES6 in browsers never gets any traction.

In essence, we added ES6 to browsers for internal use only. Developers write ES6, but always run ES5 in the browser.
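
To illustrate, here is roughly what that means in practice – we write the ES6 version, and every browser gets the ES5 version the transpiler generated (output simplified):

// what we write (ES6)
const add = (a, b) => a + b;

// what every browser gets, even those with native ES6 support (roughly)
var add = function (a, b) {
  return a + b;
};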

This also means that we as developers no longer write the code that runs in the browser, but code one level of abstraction above it. That makes debugging harder, and we need to use sourcemaps to connect errors with the lines in our source code that caused them. We might also run into the issue where the code generated by the transpiler is faulty and we can’t do anything about it.
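
Sourcemaps at least wire the two back together: a pragma at the end of the generated file points the developer tools at the original source (the file name here is made up for illustration):

// last line of the generated ES5 file – developer tools use this
// to map runtime errors back to the ES6 lines we actually wrote
//# sourceMappingURL=app.js.map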

The beauty of the web was its immediate connection between creation and consumption. You wrote the code that ran in the browser. Developer tools have become amazingly sophisticated in recent years, giving us insights into how our code behaves. With abstractions, we forfeit these beautiful options.

We already missed the boat once when the DOM became a thing

Let’s turn back the clock a bit. Those of us who’ve been around when the DOM wasn’t standardised and DHTML was a thing remember clearly how terrible that was.

We rejoiced when we got DOM support and had one API to write against. We even coined the term “DOM scripting” to make a clear distinction between the DHTML of old and the “modern” code we were writing now. All of this was based on the simple principle of progressive enhancement using capability testing.

All you did was wrap your code in a statement that checked whether the “modern” DOM was supported:

if (document.getElementById) {
  // … your code
}

And then you used all the crazy new stuff that made our life so much easier: createElement, appendChild, getElementsByTagName. These were great (until we found innerHTML).
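
Put together, a typical piece of DOM scripting looked like this (the id and text are made up for illustration):

if (document.getElementById) {
  var list = document.getElementById('nav');
  var item = document.createElement('li');
  item.appendChild(document.createTextNode('Added with DOM scripting'));
  list.appendChild(item);
}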

Browsers that didn’t make the cut didn’t get any JavaScript. This had a few benefits:

You move forward without breaking things – browsers that cannot run your code don’t get it. There is a beautiful simplicity in that.
You have a clear cut-off point – you know what you support and what you don’t. This cuts down immensely on the testing time of your solutions. As you know that IE6 never gets any JavaScript, there is no need to test in it – if you enhanced progressively.
You have a reason to write sensible HTML and plain CSS – you know this is your safety blanket that gets delivered to all browsers. And in most cases, having HTML that works without scripting is a great baseline for the more sophisticated solution that kicks in once JavaScript does its magic.

It was a great idea, and it got some traction. But then it got replaced by something else: abstraction libraries like jQuery, Prototype, MooTools, YUI and hundreds of others, most of which are now forgotten – but sadly enough not removed from old implementations.

It’s a kind of magic: here come the libraries

Abstraction libraries promised (and in some cases still promise) us a lot of things:

They sanitised behaviour across browsers – under the hood, they do a lot of bug workarounds and forking for different browsers to make things work. This is a lot of effort, and it resulted in browser bugs never getting fixed.
They allow us to write less and achieve more – which sounds like a very pragmatic way of working. It also means we ship more code than is needed. It doesn’t look like much to use a few plugins and write ten lines of abstraction code to create a product, but under the hood we made ourselves dependent on a lot of magic. We also have a responsibility to keep our helper libraries up-to-date and to test in the browsers we now promise to support. We doubled our responsibilities for the sake of not having to work around browser issues ourselves.
They protected us from having to learn about the DOM – we didn’t need to type in those long method names or use the convoluted way of adding a new element with insertBefore() (see the sketch after this list).
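
A small sketch of that convenience (field is a made-up element reference): adding an element before another means going via the parent node in the DOM, while jQuery is one expressive line:

// plain DOM: you need the parent node and a reference node
var warning = document.createElement('p');
warning.textContent = 'Please check your input';
field.parentNode.insertBefore(warning, field);

// jQuery: the same result in one line
$(field).before('<p>Please check your input</p>');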

There is no doubt that the DOM in hindsight is a terrible API and its implementations are buggy. But there is also no doubt that we allowed it to stay that way by abstracting our issues away. Libraries bred a whole generation of developers who see the browser as something to convert code into, not something to write code for. It also slowed down the demands of fixing problems in browsers.

Nowadays, the abstraction libraries of the DOM scripting days are the landfill of the web. Many don’t get updated, and quite a few new features cannot be implemented in a straightforward fashion in browsers because they would break web products relying on library abstractions with the same names.

Cutting the mustard

The idea of DOM scripting was to test for capabilities and use them instead of simulating them with convoluted replacements that work in older browsers. It removed a lot of the hackiness of the web and unspeakable things like writing out content with document.write() inside inline script elements.

The problem with capability testing is that it can backfire:

Support for one feature doesn’t mean others are supported – a lot of the time browsers support features in waves, depending on demand and complexity.
Browsers lie to you – often there was rudimentary support for an object, but browsers lacked the methods it should have come with (see the sketch after this list).
You never know what you’ll want to support – and testing for each and every feature is tedious.
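
Safari’s private browsing mode used to be a prime example of the second problem: 'localStorage' in window passes, but the first setItem() call throws. The only reliable test is to use the feature:

var storageWorks = false;
try {
  // the object exists and passes an 'in' check, but writing can still throw
  localStorage.setItem('test', '1');
  localStorage.removeItem('test');
  storageWorks = true;
} catch (e) {
  // rudimentary support only – fall back to keeping state in memory
}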

This is why we started defining other cut-off points. The developers at the BBC called this “cutting the mustard” and – after looking at browser support charts and testing the real support – defined the following test as a good one to weed out old browsers:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // bootstrap the javascript application
}

Jake Archibald defined an even more drastic one for mobile products, filtering out both old versions of Internet Explorer and older WebKit on mobiles:

if (!('visibilityState' in document)) return;

You can then layer on more functionality in tests:

if ('visibilityState' in document) {
  // Modern browser. Let's load JavaScript
  if ('serviceWorker' in navigator) {
    // Let's add offline support
    navigator.serviceWorker.register('sw.js', {
      scope: './'
    });
  }
}

This is great. So here is my proposal: features of ES6 can be detected – even those that introduce completely new syntax. Kyle Simpson’s featuretests.io is a pretty comprehensive library of tests that do exactly that.
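
The trick for syntax features is to hand the new syntax to the Function constructor as a string and catch the parse error – a minimal sketch for arrow functions:

function supportsArrowFunctions() {
  try {
    // throws a SyntaxError while parsing in an ES5-only browser
    new Function('(a) => a');
    return true;
  } catch (e) {
    return false;
  }
}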

How about we make support for a few ES6 features our new “cutting the mustard”?

This results in some good opportunities:

We get real use of ES6 features in browsers – this allows us to improve their performance and find browser issues to fix.
We get promises – which not only make async work much easier, but are also the baseline of almost every new API the web really needs (see Service Workers, for example).
We are one step closer to real modules in JavaScript.
We get fetch – a much saner way to load dynamic content than Ajax.
We get built-in templating thanks to template literals.
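
A short sketch of these wins working together – fetch, promises, arrow functions and template literals rendering some content (the URL and markup are made up for illustration):

fetch('/api/articles')
  .then(response => response.json())
  .then(articles => {
    document.querySelector('main').innerHTML = articles
      .map(article => `<h2>${article.title}</h2>`)
      .join('');
  });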

The biggest win: iOS and Safari

Safari has lately become the problem child of the browser space. Many an innovation agreed on by the other players will fail to get traction if iOS is not on board. Safari is the stock browser of iOS, and no other browser engine is allowed. No matter how much service and interface layer you see in Chrome or Opera for iOS, under the hood ticks the same engine.

And iOS is the golden child of the mobile world: it has the most beautiful devices, the most affluent users not shy about spending money and it doesn’t suffer from the fragmentation issues Android has. Android has larger numbers, but much less revenue per person.

That means whatever doesn’t run in Safari on iOS is not ready to reach the audience that the people who pay us deem the most important. Safari is the one and only browser engine on iOS, its roadmap is much foggier than that of other browsers, and its spokespeople are fashionably absent from every public discussion.

This sounds familiar and brings back terrible memories of an Internet Explorer monoculture, as explained by Nolan Lawson in “Safari is the new IE”.

This is not going away any time soon. And many of the standards proposals implemented in Chrome and Firefox are red boxes in the Mobile Safari column on caniuse.com.

However, the ES6 support of Mobile Safari is much better.

Can ES6 features make a sensible cut-off point?

This is a bold idea. But I think a great one.

We have a chance with ES6 to innovate the web much more quickly than we can with other standards proposals that need browser maker agreement.
Legacy browsers will never get the new APIs, and patching for them with polyfills and libraries results in a mess – better to let them have HTML and CSS.
This goes hand in hand with the Extensible Web Manifesto.
Using ES6 features in production is the only way to make them perform well in browsers. You can’t optimise what isn’t used.

What do you think? Let’s take a look at support, and define a new “cutting the mustard”, extending this idea from API support to also include syntax changes.
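
As a hypothetical starting point – a sketch, not a spec – such a test could combine the API checks we already use with one syntax check, and only bootstrap the application when both pass:

// adjust the feature list to what your product actually uses
function cutsTheES6Mustard() {
  try {
    new Function('(a) => a'); // syntax: arrow functions
  } catch (e) {
    return false;
  }
  return 'Promise' in window && // APIs the application relies on
         'fetch' in window;
}

if (cutsTheES6Mustard()) {
  // bootstrap the application – everyone else gets HTML and CSS
}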

Photo Credit: frankieleon
