Category Archives: Worlds, real & imagined

false starts, true beginnings

The English language doesn’t carve out clear distinctions between varieties of knowledge: savoir and connaître; know-of and know-how; the things you learn and the knowing that comes from familiarity. Perhaps the hardest to pin down is the knowledge derived from continuous incremental experience, the area under the curve where the x-axis is time.

We count ‘big time’ in centuries, but our sense of them doesn’t map to round figures. In literature, milestones often come early in a century, but rarely at the beginning: 1922 delivered high Modernism with Ulysses, Jacob’s Room and The Waste Land, echoing the Augustan moment of 1726-28 that gave us Gulliver’s Travels, The Beggar’s Opera and The Dunciad.1

The year is 2015; we don’t quite know what year it is. The notion that 2020 will show up five years from now seems absurd, that ‘10 years ago’ refers to ‘2005’ hardly less so. We recognise that some things weren’t part of our lives a decade ago (iPhones, Twitter, a black US president, an impending sense of doom) but digging back to the point of their emergence sets off a temporal slippage, a missed gear. This is the crest of the beginning, and still the not-quite-begun.

Perhaps it’s not entirely that. We began the next century ahead of time, anticipated it, sent Marty McFly to explore it and are waiting for his second coming among us, where we will know him by his hoverboard. This first decade-point-five becomes a continuation, uncovering what we projected of our fin-de-siècle desires, until the second-order effects converge to wash them all away.

‘Los Angeles // November, 2019’

  1. The equivalent in the 19th century? Perhaps 1847, which began with Vanity Fair and ended with Jane Eyre and Wuthering Heights, a truly late beginning.

the ease of indulgence

My understanding is that accessibility is coming — they’re working on it, but it isn’t ready yet…

60 frames per second is not “would be nice”. It’s “must have”. And the DOM doesn’t have it.

Once again, the plain old web has been weighed in the balances and found wanting, as it was with Flash a decade ago1, and ActiveX before that, and Java oh god I’ll stop there. This time, it’s the smooth shiny immediacy of native apps on pocket supercomputers that shows up the DOM when it tries to follow, like a lead-footed celebrity galumphing through the early rounds of Strictly Come Dancing.

We should be familiar with these push-pull moments by now. Jeremy Keith hinted at it in his recent post on Angular.js, which in turn taps into a broader unease about JavaScript frameworks as Procrustean site-making machines, especially those that outsource the rendering workload to the browser. There’s a revived tension between the domain of professional front-enders and the thrill of tapping words wrapped with tags into a text editor, refreshing your browser, and seeing them appear for the world. Some of that’s just nostalgia mixed with the old fear that coders and browser-makers would love to seal the edges of the web and pen amateurs and dabblers and tinkerers into a nice cosy <textarea>. But the sense of opacity and closure is real: while ‘View Source’ hasn’t gone away (yet), it’s no longer the same enticement, an invitation to delve.

The places where popular websites are made are not the places where they are seen. From a strict comparison of hardware, the gap has certainly narrowed: we’re long past the time when large monitors and ISDN lines lulled developers into building bloated sites for people on dial-up and poky 640x480s. You can test on a best-selling smartphone or tablet or Chromebook and feel confident that your experience mirrors that of millions: a Coke is a Coke. However, this levelling of hardware can mask a different gap in the broader assumptions surrounding it: the capabilities of users; the full range of technology they have on hand; the amount they can afford to pay for data; the secrets they wish to keep. Ubiquity is more than a numbers game, and it is still unevenly distributed.

‘On the web, but not of the web.’ Designed in California for Californians. The allure of functionality and portability and ease of deployment, just an <embed> or an <object> or a <canvas> away.

All of this brings to mind Russell Davies’ recent piece on ‘principle drift’, which looks back at the pre-iPlayer days and (I think, correctly) argues that ‘[t]he BBC was most interestingly digital… when putting telly on the internet was incredibly hard.’ 2 Technological constraints, like financial and bureaucratic restrictions, often create space for innovation: the inter-bubble years produced PIPs/PIDs, ad hoc social networks to guide playlists,3 research into children’s online safety, the collection of social history, a gradual understanding of the intimate affinity between email and radio, so many things. You could argue that some of these experiments were distractions, indulgence, a colonisation of online space that was others’ by right, but it’s hard to look back and think of other British institutions with the clout and capacity to attempt them. (Tony Ageh’s vision of a ‘Digital Public Space’ built upon access to the wireless spectrum, unmediated, unmetered, unmonitored and unmonetised, taps into this.)

Once familiar routes are dredged out by Moore’s law and 5 Mbps downstream, they’ll be taken.4 Once taken, they’re easy to maintain and justify and perpetuate.

What Flipboard’s engineering team did is impressive, but when you’re paid to build native mobile apps and very good at doing so, you’ll be drawn to make a web browser behave like a native app before considering things like accessibility. ‘This area needs further exploration’ and ‘we’ve seen mixed results’ read far too easily as ‘we had more exciting things to make.’ Flipboard isn’t a chartered public broadcaster or a government operating under a set of institutional obligations, nor should it be expected to behave like one; however, building for the web is a form of participation, and comes with a set of tacit principles tied to its history and origins.

For long stretches of that short history, the aspirations of the web towards universality and inclusiveness have been little more than that, grimly carried through browser wars and CSS quirks and the dominance of proprietary plugins. Whenever the smoke clears, there’s room to build, and each lull produces something more to defend. Mark Pilgrim’s Dive Into Accessibility begins with the question ‘why bother?’, and answers it by describing in detail the people who benefit from accessible websites. It came online in 2002, before Firefox, Safari and Chrome. The concept of progressive enhancement dates from the same period, and slowly merged with the design-centric pursuit of ‘liquid layouts’ over the 2000s to become the loose, baggy field of responsive design (and now contextual design), its fundamental rule being to serve something that reflects and respects the position of the user, instead of chiding users for what they lack.

The model in 2015 is clear enough: begin with something that embraces universality, and augment, augment, augment. That’s why I’m more comfortable with Richard J. Pope’s recent challenge to developers to exploit the ‘unrealised but present potential’ in the untapped augmentations of the mobile browser and establish the design standards of the not-yet-present. It has taken over a decade for accessibility to take its proper place at the heart of web design, hard-fought all the way. In that context, choosing 60FPS at its expense feels flimsy and indulgent.

This is for everyone.

  1. That sound in the background? Thousands of former Flash developers whistling through their teeth in Gruber’s direction.

  2. The same applies to Channel 4’s education programming a little later. The parallels are not coincidental.

  3. Spotify before its time. No, really.

  4. What becomes easy to transpose onto the digital space with guaranteed 5 Mbps upstream? I’m not yet sure.

hunting highs and lows

Take a widely owned sensor with a reverse-engineered USB driver; hook it up to an Android phone with an OTG cable; install a custom app that pulls data from the sensor and pushes it to a free cloud-hosted Mongo database; fork a GitHub repo of a Node.js webapp, add your DB credentials, and deploy it onto Azure to parse and display all that data on the web. Perhaps even send it to a Pebble if you feel inclined.
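The middle of that chain — decode a raw reading, shape it into a document, insert it into the cloud database — might be sketched like this. To be clear, this is illustrative, not NightScout’s actual code: the real driver speaks Dexcom’s reverse-engineered USB protocol, and the two-byte record, field names and device tag here are invented for the sake of the shape.

```javascript
// Hypothetical sketch: decode a made-up two-byte little-endian glucose
// value and shape it as a Mongo-style document. Not the real protocol.
function decodeReading(buf, readAt) {
  const mgdl = buf.readUInt16LE(0); // sensor glucose value in mg/dL
  return {
    sgv: mgdl,                                    // mg/dL, as US meters report it
    mmol: Math.round((mgdl / 18.016) * 10) / 10,  // mmol/L, for UK/EU users
    date: readAt.getTime(),                       // epoch millis for the DB
    device: 'phone-otg-bridge',                   // hypothetical device tag
  };
}

// Pushing to the cloud DB would then be one insert per reading, e.g.:
//   await db.collection('entries').insertOne(decodeReading(buf, new Date()));

const entry = decodeReading(Buffer.from([0x78, 0x00]), new Date(0));
console.log(entry.sgv, entry.mmol); // 120 6.7
```

Every reading becomes a small self-describing document, which is what makes the rest of the chain — a forked Node.js webapp querying the collection and charting it — so loosely joined.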

Sounds fairly interesting, right? Small parts, loosely joined: a little bit of hardware-hacking, some Internet of Things, a splash of SaaS, a dollop of dataviz. The kind of thing that my smart techie friends get up to for love and/or money. Rough at the edges, perhaps a few too many joins for comfort, very much Not For Production, but very 2014.

Except what I’m describing is NightScout, an ongoing community project created and maintained by type 1 diabetics, their families and friends, with the aim of taking the data from Dexcom continuous glucose monitors and doing more with it than Dexcom’s crappy Windows-only software allows.

The CGM In The Cloud project’s been running for about a year: its hashtag motto is #WeAreNotWaiting. Not waiting for FDA approval, not waiting for companies to update their awful software, not waiting for fancier and better-funded projects like Tidepool to go beta. Not waiting for basic security for their data, either; not waiting for rewrites and refactoring to make it easier to deploy.

It’s simultaneously breathtaking and heartbreaking: for every T1D family that has managed to assemble the kit and liberate their CGM data, there’s one being gently talked through deploying from GitHub to Azure, presumably many that see the requirements and cost and decide that yes, they can wait a little longer, and more still that aren’t even aware of the project. It shouldn’t be this hard.