In April this year I attended the Render Conference, a rebrand and reorganisation of 2015’s jQuery UK Conference. The new name better reflects the broader content of the conference, which covers all sorts of front-end topics from CSS and JavaScript to form design and development philosophy.
In this post I’m going to go through each of the talks over the two days and summarise what the speakers covered. I’ll also add links to the slides and videos as they become available, for those who want to dig a little deeper. In a separate post, I’ll talk about the lessons we can learn at the University of Edinburgh, and what we can start doing today.
If any speakers happen to be reading this (hi!) and want to correct me on something, you can email me at Greg.Tyler@ed.ac.uk. Though bear in mind that I may have skipped some content intentionally for brevity.
Header image copyright Katura Jensen, reproduced with permission.
Contents
- Bruce Lawson – web.next
- Jack Franklin – How jQuery has influenced The Web: Now and Beyond
- Val Head – Designing meaningful animation
- Alicia Sedlock – The landscape of front-end testing
- Harry Roberts – CSS for software engineers for CSS developers
- Sara Soueidan – SVG in motion
- Mariko Kosaka – Drawing on <canvas> – a few things I learned about pixels
- Katie Fenn – Chrome DevTools: Inside Out
- Robin Christopherson – Technology – the power and the promise
- Frederik Vanhoutte – Rendering the obvious
- Jake Archibald – Show Them What You Got
- Ricardo Cabello – Programmed Animations
- Chad Gowler – How to ask about gender
- Jade Applegate – How Fastly designs for complexity
- Gordon Williams – Controlling the real world
- Ola Gaidlo – I’m offline, cool! What now?
- Martin Naumann – The next frontier: 3D and immersion
- Ashley Williams – If you wish to learn ES6/2015 from scratch, you must first invent the universe
- Lee Byron – Immutable User Interfaces
- Jeremy Keith – Resilience
Bruce Lawson – web.next
Bruce talked about how the web has always had rivals. In 2009, web applications were competing with Flash and Silverlight, tools which I think we can safely say have been defeated. But now the web has found a new competitor: native apps. Although the HTML standard has added some native-competing features like video, geolocation, canvas and WebRTC, adoption has still been slow and the time users spend in apps continues to outstrip their time on the web.
It might not seem revolutionary, but this addition to the web platform – progressive web apps, sites that users can install to their home screen like any native app – is a big opportunity for developers to create a single source for their product. Why create separate apps for iOS, Android, Windows and Blackberry when you can just create a progressive web app that users can access in the exact same way?
Progressive web apps come with some additional benefits. In a fight for precious storage space on a user’s phone, installing a shortcut to a website comes in much lighter than a fully-fledged native application. Users won’t need to ask the question of “should I delete that photo album so I can install Infinity Wizard?”, and can just get on with playing your rad rogue-like sci-fi RPG. There’s also no gatekeeper in the form of an app store, no updating procedure (since the web page automatically updates when you load it) and, unlike native apps, everything is searchable, linkable and indexable.
The obvious gap in progressive web apps is that loading a page requires at least one network request (and probably many more). They aren’t inherently offline-ready, and could be expensive on data. However, this impact can be minimised by smart use of service workers, which we’ll see in a later talk. Further, as web standards continue to develop, the tools available to make apps feel more native will also grow.
Support for progressive web apps is good and getting better. Chrome and Opera already support them, and the functionality landed in Firefox Nightly this week. But even if your user base almost entirely uses browsers which don’t support the feature, there’s no real harm in adding it anyway.
Bruce also talked briefly about the Houdini project, which aims to revolutionise CSS by exposing the browser’s layout and paint internals to the JavaScript engine, letting developers create custom layout and paint behaviour. Houdini is still in its early phases, but will be an exciting addition this year.
Jack Franklin – How jQuery has influenced The Web: Now and Beyond
In a world that suddenly wants to forget jQuery, Jack explained how we shouldn’t belittle the great things it has done. jQuery is now over 10 years old, having first been announced in 2006 (John Resig has released annotated source code of the first version). Since then, many parts of the web platform have been created based on parts of jQuery’s functionality: querySelector and promises being two particularly large features.
jQuery has also been fundamental in lowering the barrier to entry to JavaScript, particularly during times when browsers were so inconsistent and full of bugs. It’s been a great learning tool for people getting into JavaScript and its source code continues to provide interesting insight into web development, as demonstrated by Paul Irish’s 10 Things I Learned from the jQuery Source.
Today, jQuery provides an abstraction layer which is much nicer to work with than native browser implementations, and continues to fix cross-browser bugs. However, as features such as querySelector and the Fetch API gain browser support, jQuery is becoming peripheral to the very tools it helped create. So in a world where we’re getting tired of a game-changing library, it was nice to see Jack give it the love and respect it deserves.
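To make that concrete, here’s a rough sketch (my own, not from the talk) of how patterns jQuery popularised map onto their native successors:

// jQuery: select elements and modify them
$('.profile-image').addClass('hidden');

// Native equivalent in modern browsers, using querySelectorAll and classList
document.querySelectorAll('.profile-image').forEach(function (el) {
  el.classList.add('hidden');
});

// jQuery: fetch JSON over AJAX
$.getJSON('/api/user', function (data) { console.log(data); });

// Native equivalent, using the promise-based Fetch API
fetch('/api/user')
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data); });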
Val Head – Designing meaningful animation
CSS animations are semi-permanently in the “cool, but difficult” section of my fictional development tools Rolodex. Despite decent browser support, natural progressive enhancement, and a fairly simple API, I’ve always thought that the actual act of designing animations requires a different set of skills from my own.
Val Head challenged this by encouraging us to take a more pragmatic view of animations: to look for purpose, to start small, and to apply suitable design principles. In particular, she recommended Disney Animation: The Illusion of Life, the incredibly detailed guide to how Disney designs and animates its movies. One of the most famed parts of the book is its “Twelve Basic Principles of Animation”, which Val suggests can be applied to CSS animations too.
Val took us through some of those principles and they of course are much easier to appreciate when seen, so I highly recommend watching the video of the talk if you’re interested in getting a better feel for what can be done. I’ll provide an overview of some key points though.
In the sphere of “timing and spacing”, Val pointed us towards the duration and easing of CSS animations. These, she says, help demonstrate physics and can establish the mood and emotion of objects (to the extent that users can infer whether your modal is excited or frustrated). She also suggests using the cubic-bezier function to fine-tune the easing of your animations.
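As a minimal sketch of what that looks like in practice (the selector and curve values are my own, not Val’s):

/* Duration and easing together set the "physics" of the animation.
   This curve starts quickly and settles gently into place. */
.modal {
  transition: transform 300ms cubic-bezier(0.215, 0.61, 0.355, 1);
}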

In “follow through”, Val suggests that “not everything comes to a stop at once” and that trying to do so can make an animation feel unreal. And with respect to “secondary action”, she says that “supplemental action reinforces and adds dimension”.
Val pointed towards the great developer tools for inspecting, debugging and editing CSS animations in both Firefox and Chrome, which can give anyone who wants to experiment with animations a great starting point.
Val’s slides are also beautiful, which is enough of a reason to look at them alone.
Alicia Sedlock – The landscape of front-end testing
Testing is an oft-discussed topic here in the SSP (we’ve written several blog posts on it). We’ve been using Selenium for a couple of years, in combination with Selenium Grid, Robot, and every other tool we can get our hands on. In general though, we use “testing” as a short-hand for “automated acceptance testing”. In her talk, Alicia explains some of the categories of “testing” and the sort of tools which are currently recommended for them. I’ll go through some summaries but, as with the rest of these talks, there’s much more covered in the presentation itself.
I think that things get particularly interesting after the first couple of bullet points, as we start to look at less familiar forms of testing.
- Unit testing: Ensure the smallest pieces of code work as expected. e.g. Jasmine, Mocha, Chai, QUnit, Unit.js, Sinon
- Acceptance testing: Check that the small bits work when put together. e.g. jasmine-integration, karma, nightwatch, webdriver.io
- Visual regression testing: Check for visual differences by comparing before-and-after screenshots. e.g. Percy, Wraith, PhantomCSS
- Accessibility testing: Check that your pages are accessible to all users. e.g. a11y, pa11y
- Performance testing: Ensure that your site meets pre-defined performance budgets. e.g. perf.js, grunt-perfbudget
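As a flavour of the first category, here’s a minimal Jasmine-style unit test (the add function is a hypothetical example of mine, not from the talk):

// The smallest piece of code under test
function add(a, b) {
  return a + b;
}

// A Jasmine spec verifying it works as expected
describe('add', function () {
  it('sums two numbers', function () {
    expect(add(2, 3)).toEqual(5);
  });
});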
Alicia suggests the following criteria to assess what tools you need:
- How much business logic is on the client-side?
- What is our history with regression testing?
- What are we always fixing/re-fixing?
- What piece of our stack is least stable?
And, on the question of how to get started in the testing world, particularly with a legacy code base (oh, hi EUCLID!), Alicia suggested that every time a bug is fixed, the developer writing the fix must also write a test. That way, you slowly grow coverage across your system as more things get fixed.
Harry Roberts – CSS for software engineers for CSS developers
I’ve lately been interested in how other fields of engineering can relate to software engineering (get hyped for upcoming blog posts about trains and web development), so this talk by Harry Roberts was a very welcome idea to me. In it, he looked at traditional software engineering paradigms and how they can be applied to CSS. As well as applying some good coding principles to CSS, it was a welcome reminder of some classic rules we should be thinking about in all development we do.
Harry started with Don’t Repeat Yourself (DRY), encouraging the use of pre-processors to store variables which can be re-used across multiple CSS classes. This means that, if you need to switch out a font or colour, or increase the spacing of elements in a page, you only need to do it once. This also ties in with the Single Source of Truth (SSOT), a “single, unambiguous, authoritative representation of a piece of knowledge”.
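A quick sketch of the idea with Sass variables (the names and values are my own):

// Define each piece of knowledge exactly once...
$brand-colour: #00325f;
$base-spacing: 16px;

// ...and reuse it everywhere. Changing the variable updates every rule.
.site-header {
  background: $brand-colour;
  padding: $base-spacing;
}

.btn {
  border: 1px solid $brand-colour;
  margin-bottom: $base-spacing;
}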
Harry talked about the Single Responsibility Principle in relation to Subway: by separating each component of a sandwich, they allow you to create one of 6,442,450,944 sandwich options. Similarly, CSS should be written so that you can compose classes together to make the object you want. In practice, this is demonstrated quite well through Bootstrap’s button classes, in which you can compose together btn btn-success btn-lg btn-block disabled to get a large, block-level button with success styling, but which is disabled.
Separation of Concerns was the fourth paradigm. In CSS, this means that you only use classes for CSS (leaving IDs and data-attributes for JavaScript), that you don’t use DOM-like selectors (e.g. div.profile-image) and that you give each item that needs styling its own class (e.g. .nav-item {} rather than .nav > li).
On the subject of immutability, Harry suggested that the state of CSS properties shouldn’t be overwritten in a way that is unpredictable or makes debugging difficult. An example of this is overwriting a button size in a certain context:
.profile .btn { font-size: 1.2rem; }
when you should just give the button a specific class:
.btn--profile { font-size: 1.2rem; }
Harry also suggested using !important on properties which should never be overwritten, to force immutability. For example, an element with a class of text-left should never align anything but left, so that property could be made immutable.
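In code, that suggestion looks something like this (a sketch, not Harry’s exact example):

/* This class has exactly one meaning, so it can never be overridden:
   the property is effectively immutable. */
.text-left {
  text-align: left !important;
}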
Lastly, Harry spoke about the Open/Closed Principle which states that objects should be “open for extension, but closed for modification”. In CSS, this means adding a new class when you need something to act differently rather than changing an existing one and risking a domino effect that you’ll have to fix later. The only exception to this is when you’re fixing a bug.
This idea of “building forwards” came up a few times throughout sessions at the conference and is something that can be a particular struggle on monolithic codebases like EUCLID, where a mixture of legacy and in-development components are sharing code and style.
Sara Soueidan – SVG in motion
Sara’s talk reminded me a lot of a talk last year by Soledad Penadés on the state of Web Components. In both cases, a big takeaway from the talk was “oh boy, there is a lot of variability here”. This year, Sara pointed out the various ways of including an SVG in an HTML document, and how each of those play with various animation techniques. She also explained some of the inconsistencies between how browsers treat SVG compared to other images and DOM elements.
Sadly, there’s no clear answer of how best to work with SVGs. The method of inclusion depends on your animation, JavaScript interactivity, browser support, fallback and caching needs. The method of animation then depends on what type of animation you’re doing, where browser support sits, and how performant you need that animation to be. It’s a confusing landscape.
Tools exist to make it easier though. For animation, libraries such as GreenSock, Snap.svg, Velocity.js and D3 provide much needed abstraction and interfacing. And, if you’re not actually doing SVG animation, they all come with sweet demos that will make you wish you were.
Also, use SVGO to optimise your SVGs.
Mariko Kosaka – Drawing on <canvas> – a few things I learned about pixels
In a talk that captured the imagination of the room and surely converted a few attendees to knitting fans, Mariko spoke about her journey turning digital photos into physical knitted items (via creating a language for making knitting patterns). And specifically about how canvas helped her there.
In a 2D canvas element, you can use a function called getImageData to return a massive array of the RGBA components of all the pixels on the canvas. You can then manipulate these individual pixels to perform typical image-manipulation effects. For example, to reduce the brightness of an image, you just reduce the individual RGB components by a certain amount, flooring them at 0.
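Here’s a minimal sketch of that brightness reduction (my own code, assuming a <canvas> element that already has an image drawn on it):

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');

// getImageData returns a flat array: [R, G, B, A, R, G, B, A, ...]
var image = ctx.getImageData(0, 0, canvas.width, canvas.height);
var pixels = image.data;

for (var i = 0; i < pixels.length; i += 4) {
  // Reduce each RGB component by 50, flooring at 0; alpha is untouched
  pixels[i] = Math.max(0, pixels[i] - 50);         // red
  pixels[i + 1] = Math.max(0, pixels[i + 1] - 50); // green
  pixels[i + 2] = Math.max(0, pixels[i + 2] - 50); // blue
}

ctx.putImageData(image, 0, 0);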
I hadn’t previously considered that you could visualise the brightness function on a graph, and it turns out you can do the same for contrast, posterization, solarization, threshold and inversion effects.
All of these effects can be seen (and played with!) in the slides from Mariko’s talk.
Mariko then demonstrated how blur and sharpen effects are calculated from a pixel’s neighbours. For a simple box blur, each pixel takes the RGB components of itself and its surrounding cells and averages each to get its new colour, as Mariko’s slides demonstrate.
For more complicated blurs, the values of the cells are weighted. In a Gaussian blur, the central pixel has a weight of 4, its horizontal and vertical neighbours a weight of 2, and its diagonal neighbours a weight of 1. These weights make certain pixels count for more in the final average. To sharpen a pixel, the weights are roughly reversed: the central pixel has a weight of 5, its horizontal and vertical neighbours -1, and its diagonal neighbours are ignored. This pushes a pixel away from its neighbouring values.
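Written out as 3×3 weighting grids (kernels), the effects Mariko described look something like this:

// Box blur: a plain average of the pixel and its eight neighbours
var boxBlur = [
  [1, 1, 1],
  [1, 1, 1],
  [1, 1, 1]
];

// Gaussian blur: the centre counts most, the corners least
var gaussianBlur = [
  [1, 2, 1],
  [2, 4, 2],
  [1, 2, 1]
];

// Sharpen: negative neighbour weights push the centre away from them
var sharpen = [
  [0, -1, 0],
  [-1, 5, -1],
  [0, -1, 0]
];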
Mariko then talked about Web Workers, another item on my “cool, but difficult” Rolodex (until now!). Web Workers allow you to process some of your JavaScript in a separate thread from the main JavaScript process. This means that bits of heavy computation can be done more quickly and without getting in the way of the user or freezing up the main thread (i.e. the page can continue to process other JavaScript functions whilst the worker runs in the background).
Mariko summed up Web Workers by an analogy with a space station: you can send basic messages to it, it processes them, and then sends you back a result. The Web Worker doesn’t have access to the DOM or to various methods and properties of window, just as a space station can’t just use resources on Earth and has to make do with what it’s got.
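A minimal sketch of that message-passing model (the file names and the processing step are my own, hypothetical examples):

// main.js – mission control
var worker = new Worker('worker.js');

worker.postMessage({ pixels: [255, 0, 0, 255] }); // send data up
worker.onmessage = function (event) {
  console.log('Result back from the worker:', event.data);
};

// worker.js – the space station: no DOM, no window
self.onmessage = function (event) {
  var result = event.data.pixels.map(function (value) {
    return Math.max(0, value - 50); // heavy processing would stand in here
  });
  self.postMessage(result);
};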
Overall, Mariko’s talk took some complex components of the web platform and made them easier to understand and much less intimidating. It was also an inspiring example of taking a hobby in the real world and applying technical skills to it. And she had some beautiful hand-drawn slides which are well worth looking at.
Katie Fenn – Chrome DevTools: Inside Out
Talks on development tools are quite hard to write about, since they involve large amounts of demonstration, so I’m not going to try. If you’re not using JavaScript breakpoints, network debugging with the film strip, or the timeline to check the FPS of your application, the talk is well worth a look.
Robin Christopherson – Technology – the power and the promise
Robin talked to us about the importance of accessibility, bringing a really positive message and some moving videos. He demonstrated how great accessibility has been for disabled users, and the sort of opportunities it opened up. Because it contained lots of videos, Robin’s is another talk that’s hard to do justice in text.
Something that came through very strongly from the talk was that accessibility and inclusivity stretch beyond our typical preconception of disabled users. Users who only have one hand free, or have large fingers, or are in a car, or in a hurry, can all benefit from the principles and potential of accessibility. In all of these situations, things like text-to-speech, voice recognition and headless UI – tools which I think of as primarily for disabled users – could bring massive benefits.
Robin also played us a typical “accessible” captcha: one where a sequence of numbers is read out that the user must type in. Throughout the whole hall, not a single attendee transcribed the same sequence as someone sat next to them. This was a damning indictment of what we consider to be accessible.
I think there were two major things that I learnt from Robin’s talk: Firstly, that we need to broaden our understanding of accessibility. We need to think not just of users who are blind, or have limited input devices, but also of users who – perhaps even temporarily – fit outside our usual expectations of capabilities. Secondly, that we need to improve the quality of the accessibility we offer. Not because it’s terrible and unusable across the whole internet, but because there is such quality out there that we should all strive to live up to it.
Frederik Vanhoutte – Rendering the obvious
Day one finished with a much more philosophical talk than the rest. In a common trend for day-closing talks, it was heavier on the fun and lighter on the technical side. Frederik talked first about creativity: it is something we value greatly in our industry, but not an attribute you would particularly look for in, say, a driver (that is, if someone described themselves as a “creative driver”, you wouldn’t be filled with hope). Yet, he highlighted, at interviews we rarely measure applicants by the creativity that we so crave.
Frederik then talked about the balance between science and beauty, and how they’re often at odds with each other. He spoke about how we learn science in boxes, meaning we struggle to put it together into a bigger picture, or to see something between the boxes that we understand. Creativity is the tool to fill that gap. He also spoke of our typical penchant for creating models and relying on their results, even though they aren’t fair representations of the initial situation.
As developers in a large and changing landscape, full of unknowns and gaps in knowledge, it is a great ability to be creative, and to be able to see both the bigger picture and between the boxes.
Frederik closed with a quote typically given by motivational speakers:
If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.
However, he pointed out, this is not a fair translation for the quote from Citadelle. Something more appropriate would be (by my GCSE French):
Building a ship is not about making sails, forging nails and reading the stars. Given a taste of the sea, it is nothing less than a community in love.
The point being that Antoine de Saint-Exupéry does not mean that you should coax and cajole people into working harder, but that everyone (including you) must love what they do and long for more.
Jake Archibald – Show Them What You Got
In the conference’s worst kept secret, Jake Archibald’s mystery talk turned out to be about Service Workers.
Jake’s title refers to ensuring that you show the user whatever content you have, rather than trying to wait for everything to be fully loaded. I was particularly sold on the idea of “Lie-Fi”: when your device insists it can connect to the Internet, but nothing’s coming down the line. At least when you’re offline your device honestly just says it’s “offline”.
Service Workers can help to solve this. A follow-on to the disappointing AppCache, Service Workers allow you to take much more granular control of what happens when the user is offline. Once installed (for a particular website), Service Workers sit between the browser and your website, and can intercept requests if the network isn’t working out. So it’s quite straightforward to – for example – set up a custom “you are offline” page:
this.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open('v1').then(function(cache) {
      return cache.addAll([
        '/offline/index.html'
      ]);
    })
  );
});

this.addEventListener('fetch', function(event) {
  event.respondWith(fetch(event.request).catch(function() {
    return caches.open('v1').then(function(cache) {
      return cache.match('/offline/index.html');
    });
  }));
});
However, this doesn’t provide much of a user experience, and still doesn’t deal with the problems of Lie-Fi. What we really want to do is load the basic components of the page (the shell) through the service worker and then keep a set of data on the client-side that can be synced with the server when the internet is available.
The shell bit is just an extension of the example above. More files need to be added to the cache (CSS, JavaScript), so the offline index page can load them in. The data can then be stored in IndexedDB, though Jake noted that the API for IndexedDB is quite unpleasant so he recommends using an abstraction like his promise-based one.
So now, going back to our Lie-Fi situation, the shell is loaded by the Service Worker and the data for the page is loaded from IndexedDB, so you wouldn’t know anything was wrong. All that’s left is syncing data to and from the server when a connection becomes available. This is done through background synchronisation, which is an extension of the Service Worker feature.
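The background sync API itself is small; a sketch (the tag name and helper function are my own) looks like this:

// In the page: ask the service worker to sync when a connection appears
navigator.serviceWorker.ready.then(function (registration) {
  return registration.sync.register('outbox');
});

// In the service worker: react to the sync event when connectivity returns
self.addEventListener('sync', function (event) {
  if (event.tag === 'outbox') {
    event.waitUntil(sendQueuedMessages()); // hypothetical helper
  }
});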
Of course, all this needs to be designed so that it’s clear to the user that they’re not looking at live content. There’s no point covering up for Lie-Fi just to create Lie-WebApp.
Jake also highlighted that all of this can quite easily be added through progressive enhancement, rather than having to be there from the start of a project. This is particularly helpful because of the variation in support for Service Workers across browsers, with background sync only available in the latest two versions of Chrome, and no implementation of Service Workers at all in Microsoft Edge or Safari.
Ricardo Cabello – Programmed Animations
Ricardo is the creator of three.js, the library that abstracts WebGL to make it much easier and quicker to write. Building on the success of three.js, Ricardo is now looking at how to make it easier for non-developers to program animations. To that end, he demonstrated two major tools he’s working on.
The first is the three.js editor, which is available to use today. This is an in-browser 3D editor which lets you compose scenes with cameras, lights and objects. If, like me, you’ve not seen much of WebGL before, it is absolutely mind-blowing. Ricardo also keeps a record of things he’s created in the editor, which shows just how powerful it can be.
The second project, still very much in the planning stages, is frame.js, a compositor for putting together multiple three.js scenes into a sequence. It’s kind of like After Effects, but with JavaScript.
Ricardo’s work is really impressive and demonstrates some of the power of modern JavaScript. Whilst University applications don’t currently have much call for 3D modelling, I’m sure colleagues in research could get some fantastic use out of being able to create in-browser interactive 3D representations of their work.
Chad Gowler – How to ask about gender
Chad’s talk again reminded me of one from last year. This time, I was brought to recall Alice Bartlett’s talk about <select> boxes, in which she highlighted that the GOV.UK site asks for users’ titles as a free-text field rather than a dropdown list; partly because it doesn’t matter to the government what you want them to call you, and partly because suggesting titles can be non-inclusive.
Chad’s talk went further into the second idea, centring around the question of how we ask for gender in forms. She explained how asking people only to select between “male” and “female” puts many users in a situation where they can’t answer, and that to put “other” as a third option can be dismissive. Instead, she suggests that the options should be “man”, “woman” and a free text field.
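As markup, Chad’s suggestion might look something like this (a sketch of mine; the field names are my own):

<fieldset>
  <legend>Gender</legend>
  <label><input type="radio" name="gender" value="man"> Man</label>
  <label><input type="radio" name="gender" value="woman"> Woman</label>
  <label>Or, in your own words:
    <input type="text" name="gender-own-words">
  </label>
</fieldset>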
Chad also talked about titles and names, echoing Alice’s suggestion from last year that titles shouldn’t matter to you (and to never infer gender from them), and that people may have more than one name (such as stage names, transitioning names or online handles). This reminds me of the blog post Falsehoods Programmers Believe About Names, which challenges a lot of our preconceived ideas about what a name must be (there are many more falsehoods programmers believe).
Chad suggested some questions we should ask ourselves as developers before we query users for personal information:
- Who needs it?
- What do they need it for? (individual reasons, not vague statements about statistics)
- Is it required? Or should users be able to skip it?
- What happens if it’s filled with false data? (both what users see, and what effect it will have on your system)
These are questions that we should not just answer, but also answer for our users. So when we ask a user their gender, ethnicity, or disability information, they should know why we need it and where it’s going.
Jade Applegate – How Fastly designs for complexity
Jade’s talk centred around Fastly’s recent overhaul of their customer-facing dashboard. It would be an understatement to say there are parallels with our work: the talk covered the challenges of overhauling a large, legacy codebase with lots of complex data and users of different skill levels. If there was one talk I had to choose as “most relevant to our work”, this is it.
Jade provided a list of issues the team at Fastly identified with their dashboard, many of which I’m sure we can all find familiarity in:
- Lack of rich interactions
- Lack of consistency
- Outdated UI
- No sense of completion (e.g. screens confirming success)
- User not kept in mind
- Lack of UX principles
- Hard to quickly (and safely) make changes to codebase
- No test coverage (and therefore no confidence)
- Low code quality (e.g. multiple ways to call a modal)
- Not on modern architecture
- Ownership issues in the codebase (no owning team of each part)
- Many dependencies
- Lack of brand consistency with public site
Writing these out does seem a little harsh on the Fastly team now. But we have at least half of these problems in EUCLID, and you could probably argue for the remaining ones.
So Fastly took these issues on board and started a complete design “refresh” of the dashboard, amongst other things moving from CoffeeScript and Backbone.js to ES2015 and Ember.js. They went through a process of UX interviews (talking to both experienced and inexperienced users) → design sessions → prototyping → development → refining look and feel → merging and release (including code reviews within the team).
Jade went on to demonstrate some of the improvements made, which you can see by watching the presentation. Some of the improvements that I think we could learn from include:
- Having consistent colours and knowing what they mean
- Having style guides for button and icon usage
- Adding help text next to form fields which expands on click, so users can always get help quickly if they need it
- Adding links at the top of forms to take users directly to guidance and documentation
- Providing sensible defaults in form fields where possible
- Encouraging best practices through your forms (for example, trying to lead users to write sensible learning outcomes)
- Giving users a sense of completion
Jade finished with some general advice about designing for users. Primarily, that users don’t know our sites as well as we do, so don’t necessarily know or remember where things are. For that reason, we should ensure we talk to users to identify their problems and check that our solutions fix them. We should also give users the autonomy to use our applications as they want, but provide guidance for those who are unsure (answering questions like “what do I do now?” and “where am I?”).
Gordon Williams – Controlling the real world
Gordon introduced Espruino, a microcontroller board that runs JavaScript. Programming hardware may not seem like something with an immediate use in University web applications, but the idea of being able to reach into the physical world is quite exciting and lends the possibility of engineering real-world solutions for our users. I’m determined to find a use-case for it in the progression and awards process.
The motivation for Espruino, Gordon explained, is that it’s not as much of a one-way process as you see on – for example – Arduinos. By using an interpreted language like JavaScript, you don’t need to worry about compilation and can much more easily debug your code. He also highlighted that whilst a Raspberry Pi can run JavaScript, it uses about 10,000 times as much power as a simple microcontroller.
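Espruino code really is just JavaScript; something like this blinking-LED sketch (adapted from the sort of example in the Espruino documentation, assuming a board with an on-board LED1):

var on = false;

// Toggle the on-board LED every half second – no compile step needed
setInterval(function () {
  on = !on;
  digitalWrite(LED1, on);
}, 500);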
Ultimately, interacting with physical objects may seem like something a mile off what we currently do but, with tools like Espruino lowering the entry barrier, it certainly seems worth thinking about. Many speakers at Render suggested that we shouldn’t think based on the tools we currently have, but on all the possibilities available, something that Espruino certainly encourages.
Ola Gaidlo – I’m offline, cool! What now?
Ola’s talk covered a whole range of topics related to efficient and effective web development, centring around the practice of offline web apps and how their benefits can apply to non-offline web properties as well. She also spoke about how we can often lose sight of the fact that we build for human beings, a common theme of the conference.
Ola suggested that we rethink how we use technology by looking at the issue, not at the tools. Instead of trying to adapt what we know about our current solutions, we should be looking at the best way to solve the actual problem. Along with this, she noted that users will use what we build in unexpected and unpredictable ways, and that by thinking further outside that box we can avoid working against them.
She went on to talk about network requests. Following on from Jake Archibald’s discussion of “Lie-Fi”, Ola explained that minimising network requests benefits users with slow or limited internet connections as well as making our applications leaner. She reiterated that we should be minifying assets and caching them locally wherever possible (or storing them in local data storage).
Ola talked about the difficulty of validating data on the server, particularly for applications which are front-end heavy, but explained that doing so was still essential. When you have client-side validation, it can be really discouraging to then have to write it a second time to be done server-side. This is something we’ve struggled with in EUCLID, so it’s always reassuring to hear that we’re not the only ones.
Martin Naumann – The next frontier: 3D and immersion
Virtual Reality is a hot topic at the moment, with many stating that it can bring benefits to most if not all sectors of industry. In universities, the idea of incorporating virtual reality into teaching – for example, for our medical students – is a very exciting prospect with quite clear benefits. However, I’m yet to find a particularly salient use case inside our student records system.
Either way, Virtual Reality is pretty cool and, surprisingly, becoming web-ready as Firefox and Chrome begin to implement the WebVR spec.
Martin works in architecture, where the use case for 3D modelling and VR is quite clear. He explains: If you try to describe your apartment with words, it is very difficult. It’s easier with images, but still doesn’t give people a proper feel. With Virtual Reality, someone can instantly interpret a huge amount of detail.
As in Ricardo’s talk, the promises of rendering 3D with the HTML canvas are stunning to me. What has previously been the reserve of complicated and expensive modelling software is now becoming much more accessible to everyday programmers. However, with WebVR still in early implementation and WebGL incredibly verbose, it might be a while before students can expect to walk through a virtual reality corridor to read their course results from a sign outside the virtual teaching office. Still struggling for that use case.
Ashley G. Williams – If you wish to learn ES6/2015 from scratch, you must first invent the universe
Ashley’s talk was about abstraction which, working for npm, she sure knows something about. She started by explaining that abstraction allows you to put each significant bit of functionality in just one place, making the structure of your code easier to understand, browse and test. Splitting code into modules and components helps with our separation of concerns, and with having a single source of truth, two concepts that Harry Roberts was keen to impress upon us on day 1.
Ashley talked about the challenges people level against abstraction, particularly the claim that it stops developers understanding how things work (heard especially often after the recent left-pad incident). But, she pointed out, we don’t all write assembly code; so at what point should we accept a level of abstraction? Equally, no-one wants to write a left-pad function for every project they work on, which abstraction helps with.
Ashley spoke too about education, particularly teaching programming, equating it to the joke about drawing owls.
Instead of teaching syntax, she suggests, we should be teaching concepts. With a firm understanding of the concepts and reasoning behind programming, beginners will have a place to start, and will then be able to find the abstractions that can help them. However, this comes with the caveat that we should never introduce syntax which obfuscates concepts, a trap which ES2015 classes may be in danger of falling into, as they are not traditional classes. JavaScript implements prototypal inheritance, and ES2015 classes are likely to mask that from developers rather than provide a helpful abstraction.
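A small sketch of what the class syntax hides (my own example):

// ES2015 class syntax...
class Student {
  constructor(name) { this.name = name; }
  greet() { return 'Hi, ' + this.name; }
}

// ...is essentially sugar over prototypal inheritance:
function StudentFn(name) { this.name = name; }
StudentFn.prototype.greet = function () { return 'Hi, ' + this.name; };

// Both produce objects that delegate greet() to a prototype,
// which is exactly what the class keyword risks hiding from beginners.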
Lee Byron – Immutable User Interfaces
The MVC pattern, Lee tells us, is going away. Instead, welcome the A(Q)SMCV pattern!
Less imposing than it sounds, the pattern Lee talked about is the one used by React and Flux, Facebook’s current flagship code offerings. In a world where two-way data binding could probably produce a conference of its own, Lee explained how React came to be the way it is: the core concepts of application flow and why that works. It seemed reasonable, exciting and a little bit confusing.

If you’ve used JavaScript frameworks, particularly React, some of this will be familiar to you. If not, it may appear quite alien. The general concept is based around immutability: you can’t freely edit the properties of objects. When something changes (an action), you instead go through the loop demonstrated in the diagram: updating the “state” of the object, then recalculating the model, passing the new model to the component and rendering a new view based on it.
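The immutability part can be sketched in plain JavaScript (my own example, not Lee’s code):

var state = { count: 0, user: 'Greg' };

// Never mutate the existing state; derive a brand new object instead
var nextState = Object.assign({}, state, { count: state.count + 1 });

console.log(state.count);     // 0 – the old state is untouched
console.log(nextState.count); // 1 – the new state reflects the action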
The reason this isn’t incredibly slow is that when views are rendered, they’re calculated as diffs from the previous state/model. So rather than re-rendering the whole page, the framework knows exactly which bits to update without having complex DOM-binding.
The most complex bit of the diagram comes from the server/queue sections. The flow is quite straightforward though: The Queue is an in-client representation of the changes that happened, predicting what the result of the actions made will be. Alongside the queue, the action is sent to the server too, which accurately tells you the outcome of the action. This means that the application can continue to work using the outer loop without needing to talk to the server and, when it catches up (e.g. the user gets connection again), the application can ensure that all the assumptions made in the queue were correct and push updates if not.
Lee also talked about GraphQL, which Facebook sees as a replacement for REST. The issue with REST, he posits, is that you have to make multiple requests to get a complex set of data. For example, to find friends’ favourite movies, you might have to request data from /api/me/friends, /api/users/10415/movies, /api/users/20951/movies (and perhaps even /api/movies/24…), which is a lot of network requests and a lot of time for the user to wait for a simple query.
GraphQL moves querying back into the client, rather than behind the RESTful server. The client issues a single request with a JSON-like structure to define what they want from the server. So the same request might look like:
{
  me {
    name
    friends {
      name
      movies {
        name
      }
    }
  }
}
And that would return:
{ "data": { "me": { "name": "Greg Tyler", "friends": [ { "name": "Terry Coldwell", "movies": [ { "name": "Star Trek III: The Search for Spock" }, { "name": "Snatch" } ] }, ... ] } } }
So what you put in is a similar format to what you get out. There’s also additional functionality for querying and handling types. It’s a very handy tool, and certainly a logical progression from REST. For those creating APIs, it’s definitely worth keeping in mind.
Lee’s talk was Facebook’s vision for the future of front end development. I really enjoyed hearing about React and GraphQL from a philosophical/architectural perspective, as a lot of people who use React seem to struggle to explain why it works the way it does. I understand much better now how it fits together. I’m fairly convinced by GraphQL, but not quite taken by React yet.
Jeremy Keith – Resilience
The web, Jeremy explains, has always been built for resilience. For example, HTML has always specified that any unknown information should be skipped over: unrecognised tags and attributes don’t throw errors; the browser does its best and moves on. That’s why we can build pages using new HTML features like <video> without worrying that people on older browsers will only see an error message, and it’s the foundation of how we can build forward.
That also means it’s easy to build fallbacks straight into HTML, something that the responsive images spec is reliant on. Older browsers won’t understand <picture>, <source> or srcset, but they’ll handle the <img /> just fine.
<picture>
  <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
  <source srcset="small.jpg 1x, small-hd.jpg 2x">
  <img src="fallback.jpg" alt="">
</picture>
Equally, CSS doesn’t complain when it comes across unknown selectors or properties. JavaScript, however, is imperative and will fall over at the slightest whiff of confusion. Hence the struggles with rolling out ES2015.
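A tiny illustration of that difference (my own example): the CSS parser shrugs and moves on, whereas equivalent JavaScript would throw.

.layout {
  display: block; /* understood by every browser */
  display: grid;  /* unknown to older browsers: silently skipped, no error */
}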
This comes back to the robustness principle, a rule that applies to HTML and CSS parsers and should dictate how forms are designed. It’s also a principle we apply in other everyday actions, such as driving, in which we prepare for the worst, but hope for the best.
Be conservative in what you send, be liberal in what you accept.
We should view progressive enhancement, Jeremy suggests, through this principle. He identifies three steps:
- Identify core functionality
- Make that functionality available using the simplest technology
- Enhance!
This means that not everything needs to be available to everyone, as long as they can perform the core task you want to offer. For a news website, that’s to read the news. For Twitter, it’s to read and post 140-character updates. For an application form to the University of Edinburgh, it’s being able to apply. And so, for that core functionality, you have to ensure it’s possible with the simplest technology: HTML and the backend. Users might not have CSS or JavaScript, or might have outdated versions of either, so we shouldn’t rely on them to perform the core business functionality.
Once you’ve got your core laid out, and accessible to all, you get to do the fun bit: Add design, web fonts, JavaScript, AJAX, WebSockets, Service Workers. And that may very well be where you spend most of your time, which is fine so long as you don’t stop users performing the core functionality with the most basic technology.
A common disappointment with having a separate back-end to your front-end excitement is that you have to build things twice: once for users with the flashy JavaScript bits enabled, and once for those running on the baseline. But you only have to do that for the core functionality.
Finally, Jeremy suggested that as developers we often put “developer convenience above user need”. I know I’ve been guilty of this, and I’m sure we can all find examples of spending more time helping ourselves than helping our users. So stop that, remember we’re dealing with humans, and make the most of the progressive design of web standards to keep core functionality possible with the absolute minimum spec and building user experience on top.
Conclusion
There were some great talks at Render, and I learnt a lot of important lessons. Some common themes emerged and there were definitely ideas that we can bring into development immediately.
In a future post I’ll summarise some of the lessons I learnt and talk about how we can begin bringing them into our work in the Student Systems Partnership.