My understanding is that accessibility is coming — they’re working on it, but it isn’t ready yet…
60 frames per second is not “would be nice”. It’s “must have”. And the DOM doesn’t have it.
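(To make the trade concrete: the trick being celebrated is, roughly, to stop asking the DOM to lay anything out and instead repaint every frame onto a <canvas> yourself. Below is a minimal sketch of that pattern, and only a sketch, with an invented story list standing in for real content; it is not Flipboard's react-canvas code.)

```typescript
// A sketch, not Flipboard's code: paint a scrolling list straight onto a
// <canvas>, sidestepping DOM layout entirely. `items`, `rowHeight` and the
// wheel handling here are invented for illustration.
const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;
const items = Array.from({ length: 500 }, (_, i) => `Story ${i}`);
const rowHeight = 40;
let scrollTop = 0;

canvas.addEventListener('wheel', (e) => {
  scrollTop = Math.max(0, scrollTop + e.deltaY);
});

function frame() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = '16px sans-serif';
  // Only draw the rows that fall inside the visible viewport.
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.min(items.length, first + Math.ceil(canvas.height / rowHeight) + 1);
  for (let i = first; i < last; i++) {
    ctx.fillText(items[i], 10, i * rowHeight - scrollTop + 24);
  }
  requestAnimationFrame(frame); // chase a repaint roughly every 16ms
}
requestAnimationFrame(frame);
```

Everything the browser would normally derive from the DOM, what a screen reader announces, what find-in-page can search, what a link even is, has to be rebuilt by hand or quietly dropped.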
Once again, the plain old web has been weighed in the balances and found wanting, as it was with Flash a decade ago[1], and ActiveX before that, and Java oh god I’ll stop there. This time, it’s the smooth shiny immediacy of native apps on pocket supercomputers that shows up the DOM when it tries to follow, like a lead-footed celebrity galumphing through the early rounds of Strictly Come Dancing.
We should be familiar with these push-pull moments by now. Jeremy Keith hinted at it in his recent post on Angular.js, which in turn taps into a broader unease about JavaScript frameworks as Procrustean site-making machines, especially those that outsource the rendering workload to the browser. There’s a revived tension between the domain of professional front-enders and the thrill of tapping words wrapped with tags into a text editor, refreshing your browser, and seeing them appear for the world. Some of that’s just nostalgia mixed with the old fear that coders and browser-makers would love to seal the edges of the web and pen amateurs and dabblers and tinkerers into a nice cosy <textarea>. But the sense of opacity and closure is real: while ‘View Source’ hasn’t gone away (yet), it’s no longer the same enticement, an invitation to delve.
The places where popular websites are made are not the places where they are seen. Judged on hardware alone, the gap has certainly narrowed: we’re long past the time when large monitors and ISDN lines lulled developers into building bloated sites for people on dial-up and poky 640x480s. You can test on a best-selling smartphone or tablet or Chromebook and feel confident that your experience mirrors that of millions: a Coke is a Coke. However, this levelling of hardware can mask a different gap in the broader assumptions surrounding it: the capabilities of users; the full range of technology they have on hand; the amount they can afford to pay for data; the secrets they wish to keep. Ubiquity is more than a numbers game, and it is still unevenly distributed.
‘On the web, but not of the web.’ Designed in California for Californians. The allure of functionality and portability and ease of deployment, just an <embed> or an <object> or a <canvas> away.
All of this brings to mind Russell Davies’ recent piece on ‘principle drift’, which looks back at the pre-iPlayer days and (I think, correctly) argues that ‘[t]he BBC was most interestingly digital… when putting telly on the internet was incredibly hard.’[2] Technological constraints, like financial and bureaucratic restrictions, often create space for innovation: the inter-bubble years produced PIPs/PIDs, ad hoc social networks to guide playlists,[3] research into children’s online safety, the collection of social history, a gradual understanding of the intimate affinity between email and radio, so many things. You could argue that some of these experiments were distractions, indulgences, a colonisation of online space that was others’ by right, but it’s hard to look back and think of other British institutions with the clout and capacity to attempt them. (Tony Ageh’s vision of a ‘Digital Public Space’ built upon access to the wireless spectrum, unmediated, unmetered, unmonitored and unmonetised, taps into this.)
Once familiar routes are dredged out by Moore’s law and 5 Mbps downstream, they’ll be taken.[4] Once taken, they’re easy to maintain and justify and perpetuate.
What Flipboard’s engineering team did is impressive, but when you’re paid to build native mobile apps and very good at doing so, you’ll be drawn to make a web browser behave like a native app before considering things like accessibility. ‘This area needs further exploration’ and ‘we’ve seen mixed results’ read far too easily as ‘we had more exciting things to make.’ Flipboard isn’t a chartered public broadcaster or a government operating under a set of institutional obligations, nor should it be expected to behave like one; however, building for the web is a form of participation, and comes with a set of tacit principles tied to its history and origins.
For long stretches of that short history, the aspirations of the web towards universality and inclusiveness have been little more than that, grimly carried through browser wars and CSS quirks and the dominance of proprietary plugins. Whenever the smoke clears, there’s room to build, and each lull produces something more to defend. Mark Pilgrim’s Dive Into Accessibility begins with the question ‘why bother?’, and answers it by describing in detail the people who benefit from accessible websites. It came online in 2002, before Firefox, Safari and Chrome. The concept of progressive enhancement dates from the same period, and slowly merged with the design-centric pursuit of ‘liquid layouts’ over the 2000s to become the loose, baggy field of responsive design (and now contextual design), its fundamental rule being to serve something that reflects and respects the position of the user, instead of chiding users for what they lack.
The model in 2015 is clear enough: begin with something that embraces universality, and augment, augment, augment. That’s why I’m more comfortable with Richard J. Pope’s recent challenge to developers to exploit the ‘unrealised but present potential’ in the untapped augmentations of the mobile browser and establish the design standards of the not-yet-present. It has taken over a decade for accessibility to take its proper place at the heart of web design, hard-fought all the way. In that context, choosing 60FPS at its expense feels flimsy and indulgent.
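By way of contrast, here is an equally small sketch of that model, under stated assumptions: the markup already contains an ordinary list of story links inside a hypothetical <nav class="stories">, and everything below is optional icing rather than a requirement.

```typescript
// A sketch of "start universal, then augment". It assumes markup that already
// works on its own: a hypothetical <nav class="stories"> of plain <a href>
// links to full pages. The script only adds in-place loading where the
// browser can support it, and falls back to ordinary navigation otherwise.
const nav = document.querySelector('nav.stories');

if (nav && 'pushState' in history && 'fetch' in window) {
  nav.addEventListener('click', (e) => {
    if (!(e.target instanceof Element)) return;
    const link = e.target.closest('a[href]') as HTMLAnchorElement | null;
    if (!link) return;
    e.preventDefault();
    fetch(link.href)
      .then((r) => r.text())
      .then((html) => {
        // Swap in the fetched article's <main> without a full page reload.
        const next = new DOMParser().parseFromString(html, 'text/html');
        document.querySelector('main')!.innerHTML = next.querySelector('main')!.innerHTML;
        history.pushState(null, '', link.href);
      })
      .catch(() => {
        // If anything goes wrong, behave the way a link always has.
        window.location.href = link.href;
      });
  });
}
```

A fuller version would also listen for popstate so the back button keeps working, but the shape is the point: if the script never runs, nothing is lost, because the baseline page was built to stand on its own.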
This is for everyone.