All posts by nick

the ease of indulgence

My understanding is that accessibility is coming — they’re working on it, but it isn’t ready yet…

60 frames per second is not “would be nice”. It’s “must have”. And the DOM doesn’t have it.

Once again, the plain old web has been weighed in the balances and found wanting, as it was with Flash a decade ago,[1] and ActiveX before that, and Java oh god I’ll stop there. This time, it’s the smooth shiny immediacy of native apps on pocket supercomputers that shows up the DOM when it tries to follow, like a lead-footed celebrity galumphing through the early rounds of Strictly Come Dancing.

We should be familiar with these push-pull moments by now. Jeremy Keith hinted at it in his recent post on Angular.js, which in turn taps into a broader unease about JavaScript frameworks as Procrustean site-making machines, especially those that outsource the rendering workload to the browser. There’s a revived tension between the domain of professional front-enders and the thrill of tapping words wrapped with tags into a text editor, refreshing your browser, and seeing them appear for the world. Some of that’s just nostalgia mixed with the old fear that coders and browser-makers would love to seal the edges of the web and pen amateurs and dabblers and tinkerers into a nice cosy <textarea>. But the sense of opacity and closure is real: while ‘View Source’ hasn’t gone away (yet), it’s no longer the same enticement, an invitation to delve.

The places where popular websites are made are not the places where they are seen. From a strict comparison of hardware, the gap has certainly narrowed: we’re long past the time when large monitors and ISDN lines lulled developers into building bloated sites for people on dial-up and poky 640x480s. You can test on a best-selling smartphone or tablet or Chromebook and feel confident that your experience mirrors that of millions: a Coke is a Coke. However, this levelling of hardware can mask a different gap in the broader assumptions surrounding it: the capabilities of users; the full range of technology they have on hand; the amount they can afford to pay for data; the secrets they wish to keep. Ubiquity is more than a numbers game, and it is still unevenly distributed.

‘On the web, but not of the web.’ Designed in California for Californians. The allure of functionality and portability and ease of deployment, just an <embed> or an <object> or a <canvas> away.

All of this brings to mind Russell Davies’ recent piece on ‘principle drift’, which looks back at the pre-iPlayer days and (I think, correctly) argues that ‘[t]he BBC was most interestingly digital… when putting telly on the internet was incredibly hard.’[2] Technological constraints, like financial and bureaucratic restrictions, often create space for innovation: the inter-bubble years produced PIPs/PIDs, ad hoc social networks to guide playlists,[3] research into children’s online safety, the collection of social history, a gradual understanding of the intimate affinity between email and radio, so many things. You could argue that some of these experiments were distractions, indulgence, a colonisation of online space that was others’ by right, but it’s hard to look back and think of other British institutions with the clout and capacity to attempt them. (Tony Ageh’s vision of a ‘Digital Public Space’ built upon access to the wireless spectrum, unmediated, unmetered, unmonitored and unmonetised, taps into this.)

Once familiar routes are dredged out by Moore’s law and 5 Mbps downstream, they’ll be taken.[4] Once taken, they’re easy to maintain and justify and perpetuate.

What Flipboard’s engineering team did is impressive, but when you’re paid to build native mobile apps and very good at doing so, you’ll be drawn to make a web browser behave like a native app before considering things like accessibility. ‘This area needs further exploration’ and ‘we’ve seen mixed results’ read far too easily as ‘we had more exciting things to make.’ Flipboard isn’t a chartered public broadcaster or a government operating under a set of institutional obligations, nor should it be expected to behave like one; however, building for the web is a form of participation, and comes with a set of tacit principles tied to its history and origins.

For long stretches of that short history, the aspirations of the web towards universality and inclusiveness have been little more than that, grimly carried through browser wars and CSS quirks and the dominance of proprietary plugins. Whenever the smoke clears, there’s room to build, and each lull produces something more to defend. Mark Pilgrim’s Dive Into Accessibility begins with the question ‘why bother?’, and answers it by describing in detail the people who benefit from accessible websites. It came online in 2002, before Firefox, Safari and Chrome. The concept of progressive enhancement dates from the same period, and slowly merged with the design-centric pursuit of ‘liquid layouts’ over the 2000s to become the loose, baggy field of responsive design (and now contextual design), its fundamental rule being to serve something that reflects and respects the position of the user, instead of chiding users for what they lack.
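Progressive enhancement, in practice, usually comes down to feature detection: serve working HTML first, then layer behaviour on only where the browser can support it. A minimal sketch of that rule (the function name and the particular capability checks are generic illustrations, not drawn from any of the posts mentioned above):

```javascript
// Progressive enhancement in miniature: the page already works as
// plain HTML; script only augments it where the browser cooperates.
// `doc` and `win` stand in for `document` and `window`, so the
// function can be exercised outside a browser.
function enhance(doc, win) {
  // Baseline: if the basics are missing, serve the HTML untouched.
  if (!doc.querySelector || !win.addEventListener) return 'baseline';

  // Layer one: smooth animation, only where requestAnimationFrame exists.
  if (win.requestAnimationFrame) {
    // ...attach rAF-driven behaviour here...
  }

  // Layer two: offline support, only where service workers are available.
  if (win.navigator && 'serviceWorker' in win.navigator) {
    // ...register a worker here...
  }
  return 'enhanced';
}
```

Each layer fails soft: a browser that lacks a capability simply gets the layer beneath it, rather than a broken page or a scolding.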

The model in 2015 is clear enough: begin with something that embraces universality, and augment, augment, augment. That’s why I’m more comfortable with Richard J. Pope’s recent challenge to developers to exploit the ‘unrealised but present potential’ in the untapped augmentations of the mobile browser and establish the design standards of the not-yet-present. It has taken over a decade for accessibility to take its proper place at the heart of web design, hard-fought all the way. In that context, choosing 60FPS at its expense feels flimsy and indulgent.

This is for everyone.

  1. That sound in the background? Thousands of former Flash developers whistling through their teeth in Gruber’s direction.

  2. The same applies to Channel 4’s education programming a little later. The parallels are not coincidental.

  3. Spotify before its time. No, really.

  4. What becomes easy to transpose onto the digital space with guaranteed 5 Mbps upstream? I’m not yet sure.

hunting highs and lows

Take a widely-owned sensor with a reverse-engineered USB driver; hook it up to an Android phone with an OTG cable; install a custom app that pulls data from the sensor and pushes it to a free cloud-hosted Mongo database; fork a GitHub repo of a Node.js webapp, add your DB credentials, and deploy it onto Azure to parse and display all that data on the web. Perhaps even send it to a Pebble if you feel inclined.
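The final joins in that chain — the Node.js webapp parsing the stored readings for display — can be imagined in miniature as below. This is a hypothetical sketch, not NightScout’s actual code, and the field names (`sgv`, `date`, `direction`) are illustrative rather than the project’s real schema:

```javascript
// Hypothetical miniature of the webapp's job: take raw CGM readings
// (as they might sit in the cloud-hosted database) and shape the
// newest one for display. Field names are invented for illustration.
function latestReading(readings, now = Date.now()) {
  if (readings.length === 0) return null;
  // Newest entry first, by millisecond timestamp.
  const sorted = [...readings].sort((a, b) => b.date - a.date);
  const r = sorted[0];
  return {
    mgdl: r.sgv,                         // sensor glucose value, mg/dL
    direction: r.direction || 'unknown', // trend arrow, if the sensor gave one
    ageMinutes: Math.round((now - r.date) / 60000),
  };
}
```

A real deployment wraps something like this in a web server and a Mongo query; the point here is only how many moving parts sit between the sensor and the screen.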

Sounds fairly interesting, right? Small parts, loosely joined: a little bit of hardware-hacking, some Internet of Things, a splash of SaaS, a dollop of dataviz. The kind of thing that my smart techie friends get up to for love and/or money. Rough at the edges, perhaps a few too many joins for comfort, very much Not For Production, but very 2014.

Except what I’m describing is NightScout, an ongoing community project created and maintained by type 1 diabetics, their families and friends, with the aim of taking the data from Dexcom continuous glucose monitors and doing more with it than Dexcom’s crappy Windows-only software allows.

The CGM In The Cloud project’s been running for about a year: its hashtag motto is #WeAreNotWaiting. Not waiting for FDA approval, not waiting for companies to update their awful software, not waiting for fancier and better-funded projects like Tidepool to go beta. Not waiting for basic security for their data, either; not waiting for rewrites and refactoring to make it easier to deploy.

It’s simultaneously breathtaking and heartbreaking: for every T1D family that has managed to assemble the kit and liberate their CGM data, there’s one being gently talked through deploying from GitHub to Azure, presumably many that see the requirements and cost and decide that yes, they can wait a little longer, and more still that aren’t even aware of the project. It shouldn’t be this hard.

so amazingly primitive

There’s a little flea market down the road from us, just a few tables and a coffee stall every Saturday in the summer, where one of the regulars is an oldish bloke who sells men’s trinkets: penknives, hip flasks, metal badges, that kind of thing. One corner of his table is piled with broken wristwatches selling at fifty cents each, three for a dollar, and plenty of room to bargain for more: perfect, the sign above them says, for art projects, kids who like taking things apart, lovers of the obsolete.

To dig through them is a kind of archaeology, exposing the fashions and technology of the twentieth century in chipped plating, scratched acrylic, faded dials and wonky, detached hands. Watch designs are intensely stratified, their rough dates of origin easy to guess, whether it’s the small, simple designs of the 50s or the chunky reinforced plastic of the G-Shock and its many cheap imitators.[1]

Among the pile, it’s easiest to pick out watches from the 1970s, because everything goes crackers.

Armitron LED Watch, 1970s. All photos by Joe Haupt // CC BY-SA 2.0

Here you’ll find rotating hour and minute wheels, thick squared-off bezels, TV dials, gradient faces, blingy hour markers — and of course, the first generation of LED and LCD digital watches. Most of these design elements first appeared in the late 1960s, along with kipper ties and flares, but over the course of the 70s these space-age signifiers became mainstream.

The technological story’s been told often: Swiss and Japanese makers sought to miniaturise quartz clocks during the 1960s, producing along the way a range of short-lived alternatives harnessing accuracy from batteries: electric mainsprings powering a conventional balance, the tick replaced by the high-pitched hum of a tuning-fork oscillator. In 1969, the Seiko 35SQ Astron was placed on the market in limited numbers, at a price few people could afford; as production expanded, and other manufacturers followed suit, quartz still carried a significant premium over mechanical watches.

These early quartz movements retained a hybrid character, sandwiching the front half of mechanical watches against an integrated circuit, the watch battery occupying the space traditionally reserved for the balance wheel. They were power-hungry, draining expensive batteries in months; it became clear that the only way to make the new technology stick was to improve power efficiency, and that one way to do this was to abandon the traditional face and its multiple moving parts. The makers of calculators and industrial displays provided the means, at least for a short while, in the form of the digital LED: even then, you had to press a button to illuminate the display, in order to preserve more of the battery’s limited life.[2] These high-tech companies collaborated for some time with established watchmakers, then began putting out LED watches under their own names, especially in the USA. Some of those watches had tiny calculators of their own.

The Swiss mechanical watch industry at that time still had a structure Adam Smith would have understood: separate hand makers, movement makers, dial makers and case makers supplied the mass-market brands; a few companies produced everything in-house and sold to high-end customers. The arrival of quartz knocked the stuffing out of mass market production, provoked the high-end makers into weird experiments, forced massive consolidation of Swiss production, and threatened to demolish it outright.

As the decade ended, a Japanese calculator company incorporated LCDs into the watch face, and these efficient always-on displays swiftly killed off those LED makers who either chose not to switch over or couldn’t make the transition in time. We don’t really think of Casio as a calculator company today. A few years later, a new Swiss conglomerate introduced a quartz watch with a sealed plastic case and no serviceable parts that was explicitly designed to reflect changing fashions and be replaced on a regular basis. Swatch is the reason why Omega still exists.

In short, the 1970s was a decade of genuine transition for the watch industry, where radical styling accompanied rapid changes to the production process, electronics companies momentarily asserted their superiority to traditional manufacturers, and where customers accepted very clear compromises in order to strap the future to their wrists.

You can see where this is heading.

There are many Apple products that cite the design language of the past to tell stories about the present and future, and by ‘the past’, I mean ‘Dieter Rams’. But I can’t think of an Apple product in recent history that implicitly signals its own limitations in the precedents it evokes.

The Apple Watch is telling that story.

There’s a willingness to combine gold[3] with a digital display. The squared case recalls designs by Pulsar, Commodore, Texas Instruments and Hewlett-Packard. There’s no Oyster bracelet or Jubilee band or anything that would fit a modern Rolex or Omega. Instead, the bracelets and straps project an unabashedly retro aesthetic while offering technical improvements over their ancestors: the Milanese mesh is fastened magnetically instead of being threaded; the straight-linked stainless bracelet is reminiscent of the old-school expanding Speidel band but won’t snag the hair on your arms. The clasps and closures point to what’s possible when you’re not hobbled by battery life or sensor capability or any of the fundamental problems of smartwatch internals. They’re a statement of purpose, but can’t avoid being an acknowledgement of constraint.

We’ve often heard that Apple launches products ‘when they’re ready’ — debatable, but let’s run with that for now. What would a product from Apple look like if it wasn’t fully ready — nor likely to satisfy people’s imaginations for another few years because the underlying technology isn’t there for anybody yet — but needed to be released anyway because everybody else is having a go, and those internal improvements won’t happen without a few generations of mass production? And what if Apple decided to signify that this product wasn’t quite there yet by recalling a previous era of not-quite-there-yet technology that, even with its limitations, seemed like a pretty neat idea?

My guess is that it would look a lot like the Apple Watch.


[1] The men’s wristwatch was born on the battlefield just over a hundred years ago, and grew up in the trenches and the new theatre of the skies. It accompanied a new form of warfare carried out at scale, mechanically and mechanistically, where artillery volleys and infantry charges and aerial bombardments and mass slaughter needed to be synchronised and fought with both hands free. For decades after, wearing a watch was symbolic of having served in the war to end all wars. The woman’s watch arrived much earlier, delicate and bejewelled, and retains the memory of that distinct origin in its design.

[2] James Bond, prime engager-with-brands, wore a Pulsar P2 in Live and Let Die (1973) while his Rolex was in Q’s repair shop.

[3] If you think the terminology of the EDITION edition is clunky, you’re right; but if English is your first language, you may well be missing the point.