Render Conference 2016 sessions

In April this year I attended the Render Conference, a rebrand and reorganisation of 2015’s jQuery UK Conference. The name change better reflects the broader content of the conference, covering all sorts of front-end topics from CSS and JavaScript to form content and development philosophy.

In this post I’m going to go through each of the talks over the two days and summarise what the speakers talked about. I’ll also add links to the slides and videos as they become available for those who want to look a little bit deeper. In a separate post, I’ll talk about the lessons we can learn at the University of Edinburgh, and what we can start doing today.

Continue reading “Render Conference 2016 sessions”

From 0 to OAuth2 with Spring Boot

Following on from the previous post about documenting MicroServices with Swagger, we also wanted to have a uniform authorisation/authentication model for access to our services.

Our basic requirements were as follows:

  • They must support client-side authorisation (e.g. via JavaScript calls in browsers)
  • They should have a single authorisation point
  • Session authorisation timeout must be controllable
  • They should be multi-domain ready (e.g. authentication from <user>@ed.ac.uk or <another-user>@another.ac.uk)

After reviewing our options, OAuth2 was the obvious contender. We already have a web single sign-on solution called EASE which uses Cosign, so we need to use that for any web-based user authentication.

The remainder of this article shows how we went about setting up an OAuth2 service using Spring Boot.

Continue reading “From 0 to OAuth2 with Spring Boot”

Software Development Community of Practice

Imagine having access to a safe environment where you can ask all sorts of subject-related questions. Or perhaps you prefer meeting people and talking about your experiences and theirs? What about situations where you really need some advice or a second opinion on a method, or on how to apply a standard in some way?

A community of practice does all of this and lots more, as well as delivering tangible results.

Communities of practice are a great way of getting people to connect and talk about a common area of interest. Working here at Edinburgh University, I have seen fantastic work going on not only in my own department and Division but also in the Schools and, of course, in other parts of Information Services.

From working with UCISA as Vice Chair for the Infrastructure Group, I have gained connections and contacts who have been great sources of information, ideas and new ways of thinking, and these have really translated into tangible results. This is exactly what a community of practice should be about.

Universities have a long history of inter-organisational collaboration, and for an organisation with the scale and diversity of our own we have a fantastic opportunity to make the most of our rich sets of skills, experiences and specialisms.

In my current role I can see a fantastic opportunity to create a Community of Practice in the University that focuses on Software Development and all that it involves. This interest area is a big thing for a lot of my direct colleagues and I firmly believe that a community of practice would be a great vehicle for encouraging collaboration on so many levels. So that’s what we are going to do!

So look out for activity from the new Software Development Community of Practice.

Why not sign up to the mailing list at
sw-dev-community@mlists.is.ed.ac.uk

Or drop me a note at iain.fiddes@ed.ac.uk

DiBi #2: Design everything. (Also, skeletons)

Last month, I got to attend the 2016 Design it Build it (DiBi) conference at the Hub in Edinburgh. This is the second in a series of three posts about my adventures.


My second DiBi post focuses on some specific UI design methods covered at the conference that could have a really positive impact on the overall user experience.

The UI Stack

Scott Hurff, Product Designer and Lead Designer at Tinder, delivered a very interesting talk titled Fight back with the UI stack. The UI Stack is what he uses to counter awkward UI, which he describes as:

“[Awkward UI] is a missing loading indicator. It’s forgetting to tell your customer where something went wrong. Bonus points for doing so with a scary error message. It’s a graph that looks weird with only a few data points. It’s a linear snap into place when introducing a new piece of data.”

So, meet the UI Stack:

The UI Stack

Scott goes into a lot more detail on this on his blog, with lots of good and bad examples of each state – it’s well worth a read if you’re interested, particularly for some great examples of the states people tend to focus on less. But I will briefly summarise the five states:

Ideal State
Where you want your users to be. The core screens. Perhaps what you’ve spent the most effort designing…

Blank State
Scott splits this one into three categories: how does your application look when the user first opens it, when they clear all the data they have added, or when no results are returned in a search?

Error State
How do you handle erroring out? Will data be lost? Are your error messages dramatic and technical, or friendly and instructional? Do they tell the user how to try to resolve the problem?

Partial State
How do your screens look when partially / sparsely populated? Do you need to guide users towards the ideal state?

Loading State
How does your application transition between screens? How does it handle loading?

It’s often easy to spot when something looks a bit wrong. But it can be a lot more difficult to then work out what needs to be changed or added to make it right.

Skeletons vs Spinners

A key part of the stack that often gets neglected is the loading state. So I wanted to mention a neat little idea that Scott and others at the conference touched upon: skeleton screens. These replace common loading animations like spinners with loading skeletal frames of the page. They can be used to improve the loading time of content by loading and rendering it in skeletal blocks chunk by chunk – but can also be useful just as a more gentle transitional loading animation.

Example of skeleton screen on fiddlr, source: http://tandemseven.com/

Nowadays this is used a lot more, and not just in mobile apps or when loading things in by chunks. Slack renders templates of messages whilst it’s loading the actual ones, and displays them all at once. Facebook renders a fake news feed with blocks in while it loads all your stories.

Look familiar? Facebook skeleton loading screen. Source: https://github.com/ksux/ks-design-guide/issues/38

Replacing your loading spinners with skeletal blocks of the sort of content users should expect to see can give the perception of faster loading and give users more immediate familiarity with the content when it does load. Plus, people hate spinners.
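To make the idea concrete, here is a minimal sketch of a skeleton screen in plain JavaScript. It is my own illustration rather than code from any of the talks, and the /api/stories endpoint, the #feed element and the field names are assumptions for the example: placeholder blocks are rendered immediately, then swapped for the real content once it arrives.

```javascript
// Render grey skeleton placeholders straight away, then replace them with the
// real content when it has loaded. Endpoint, element id and field names are
// illustrative assumptions, not taken from the talks.
function loadStories() {
  const feed = document.querySelector('#feed');

  // Placeholder blocks roughly the shape of the final content
  feed.innerHTML = '<div class="story skeleton"></div>'.repeat(5);

  fetch('/api/stories')
    .then(response => response.json())
    .then(stories => {
      // Swap the skeletons for the loaded stories
      feed.innerHTML = stories
        .map(story => `<article class="story"><h2>${story.title}</h2><p>${story.summary}</p></article>`)
        .join('');
    });
}

loadStories();
```

The .skeleton class would just give the placeholder a light grey background and the same dimensions as the content it stands in for, so the page layout doesn’t jump when the real content arrives.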

Design everything

UX matters. A lot. We’re judged on it, and in reality we’re basically already using it as a performance measure whenever we ask our users for feedback on our work.

So, in summary:

  • Be sure to design everything, not just your UI’s ideal state.
  • Loading can impact the user experience massively. Plan and design that too.
  • Skeletons are cool.

Some related reading:

  1. Scott’s post on The UI Stack
  2. Mobile Design Details: Avoid The Spinner, Luke Wroblewski
  3. A Beginner’s Guide to Perceived Performance, Kyle Peatt

DiBi #1: Designing for users, not devices

Last week, I got to attend the 2016 Design it Build it (DiBi) conference at the Hub in Edinburgh. This is one in a series of three posts about my adventures.


One of the early talks at DiBi was by Anna Debenham on Games Console Browsers. Usage of these might be a bit higher than you think: Anna quotes that 26% of 14–24 year olds in the UK use consoles to visit websites. Interestingly, in an effort to get pages to render normally, most console browsers tell lies in their user agent strings – so it’s actually quite difficult to measure usage on them. But it was a slide showing a tweet by Will Roissetter that really stood out:

A student completed their Student Loan Application on a Nintendo DS

The Hub filled with laughter from all the delegates at this seemingly strange behaviour. But then a memory stirred, and I began to feel a little… uncomfortable.

In August 2012, I completed my final student finance application using an Xbox 360.

Yeah, yeah… laugh it up.

My excuse at the time was that my laptop had died and had been shipped away for service. As to why I didn’t use my phone: no idea. But the point here is that I doubt this was a scenario Student Finance accounted for – but it worked.

Anna quickly pointed out that, while she would go into detail on some of the console-based browsers, her talk wasn’t really about games consoles. It was about how every screen can become (and in many cases has become) some sort of web browser. Traditional “mobile” devices aside, we now have web browsers on digital cameras, printers, cars, exercise equipment – even taps.

Now this is not a pitch to drop everything, go back and make everything you’ve ever built support a web browser in a digital smoothie maker*. The warning here was to not fall prey to designing things for the current three “device silos”.

This theme was echoed a lot in other talks: when designing for the web, design for users, not for specific devices or specific screen resolutions. Not only will you create something that is much more future-proof, but it will help reduce technical debt. Ryan O’Conner, UX Creative Director for BBC News digital, spoke of his regrets about the redesigned BBC News app, which launched early last year and which – though responsive in its design – had loads of different “templates” for specific devices and resolutions. They had quickly thrown in lots of variants and tweaks for very specific cases – and this has already become unmaintainable.

Because, silly device examples aside, the broad categories we’re becoming familiar with – mobile, tablet, computer – don’t make sense any more. Tablets and phones come in pretty much every shape and size now with the lines blurred between them (I will not use this word, ever). Combined laptop / tablets are becoming more popular. A desktop connected to a large screen can’t really be considered identical to a tiny netbook.

So destroy the silos, comrades! We can’t play catch-up forever designing to whatever device is “in” right now, or assuming everyone will or won’t do things on a specific device.

There will be more to follow on my exploits at DiBi over the next few days: apologies it’s taking me so long – I insisted on writing this on the web browser built in to my new internet enabled pepper mill**.


* Author not thorough enough to check if this was a thing or not.
** Patent pending.

Reflections on UCISA 16 conference

“Sue, that is surely the best conference I have ever attended”

This was what I said to Sue Fells, who is the Business and Operations Manager at UCISA. Rarely have I come away from a conference feeling that something happened while I was there that will make me think completely differently. To attend a conference where each presentation leaves you asking questions is very, very rare, but that is exactly what happened.

When I was thinking about how I was going to write this blog post, I wondered which pieces would be most interesting to staff in Apps Division, what I could take back that would encourage people to build on what we were doing, and what could really motivate people to develop the initiatives we already have underway.

So my planning started before I went: which sessions would I go to? Which vendors would I speak to? How could I use this time with the Director and other colleagues to really make the most of the event? Lots of plans, lots of ideas; great, I’m ready to go!

What I didn’t expect was that, as it turned out, a lot of my preparation would be replaced by my reaction to the experience!

I could go on at length about the sessions I went to; I have many notes, I can assure you. But in truth a picture paints a thousand words, more than I should write in a blog post, so with that in mind I have linked two of the key presentations into this blog. I would encourage everyone who reads this to take a look at my highlight talks… really, it’s worth it.

First of All

California dreamin’ presented by Hilary Baker, Vice President for Information Technology and CIO at California State University.

Hilary’s presentation covered the approach she and her colleagues had taken to engage with their 41,548 students at CSU, to encourage them to participate in the development of their own experience at the University, with the ultimate objective of increasing student graduation rates.

Hilary asked students how technology could be used to make things better for them as they tried to manage their degree process, navigate their experience at the University, and prepare for careers in the future. What a great package of objectives!

They set up a competition called AppJam, where students were invited to form teams with skills from a range of specialist areas and collaborate to develop ideas, mock-ups and prototypes for apps that could be incorporated into the institutional app, CSUN mobile. The students would present their ideas and prototypes, and a winner would be chosen which would actually become a real part of the app. I guess they got 24 teams because they also had a rather nice cash reward into the bargain, but again, lots of great real-world experience there.

Have a look at the presentation and learn how this all went and why they are going to grow the idea going forward!

Really inspirational ideas and commitment from Hilary and her team.

A great phrase Hilary used really underlined the success story of their project: “Students Can No Longer Escape Learning”.

Secondly

Creative Leadership by Jamie Anderson, Professor of Strategic Management at Antwerp Management School and Visiting Professor at London Business School.

Now this is what you call a life changer. I defy anyone to see this presentation and not be blown away by it! Really, I think this is surely one of the most insightful presentations I have ever seen!

Jamie takes the audience on a journey of self-discovery, something that will leave you really thinking about many diverse things but specifically what we all need to do in order to be creative in our work.

I will not spoil the fantastic experience of following him on the journey and encourage you to take about 45 minutes to treat yourself!

If there was ever a presentation that would make you sit up and listen, then this is it.

Please do take a look at this!


All of the presentations can be found here, and registration is a simple process of adding your email address.

Building a WebJar

As part of the rollout to the new University Website, a Global Experience Language was developed for Edinburgh University, which was named Edinburgh GEL. The implementation of the GEL is based on Bootstrap.

In order to easily fold this into our Java Web Applications, I wanted to create a WebJar which would allow developers to quickly pull in the Edinburgh GEL and immediately begin to use the resources.

Continue reading “Building a WebJar”

IS at DrupalCon – Mentored Code Sprint

Last day of DrupalCon Barcelona 2015

This week myself and a few colleagues attended DrupalCon 2015 in Barcelona and I have been posting some general comments as well as session summaries from Day 1, Day 2 and Day 3.

On Friday, after the main conference ended, the conference centre remained open for the traditional post-conference code sprints, including the Mentored Core Sprint, which myself, Adrian Richardson and Andrew Gleeson attended for the first time.  It turns out code sprints are addictive; we arrived at 9am intending to stay until mid-afternoon and were thrown out along with the last remaining sprinters when the building closed at 6pm! Fuelled only by water, caffeine and a very short lunch eaten at our code sprint table, each of us contributed something during the session to move Drupal 8 core along.  Some other first-time sprinters were even lucky enough to have their first contribution made to Drupal in a live commit by Angela Byron (webchick) part-way through the code sprint! Having missed out on attending previous DrupalCon code sprints, it was great to finally have the opportunity to join in and contribute to Drupal!

Before arriving for the code sprint, we had prepared our laptops with a Drupal 8 install as well as the various tools described on the DrupalCon website, choosing the Acquia Dev Desktop as the quickest option to get started.  We began the day at the First Time Sprinter Workshop to ensure we were all ready to go, and then moved through to the code sprint room, joining the many Drupalistas who had already settled down to coding.  The mentor for our table was Rachel Lawson (rachel_norfolk on Drupal.org), who was friendly and extremely helpful in keeping us on the right track as we worked on the issues we picked up from the issue queue.

With Rachel’s guidance, Andrew and I managed to find a couple of related UI issues in Drupal core, specifically the Configuration and Structure administration pages, to give us some experience of using the issue queue.  Neither of us had used the issue queue in anger before – the most I have done is re-roll a patch – so we chose something simple, and Rachel kept us right when it came to documenting what we were doing by commenting on the issues we picked up. When 6pm came and we had to leave, we had each uploaded a patch for the issue we were working on, and although neither resulted in a commit before we left, it was very satisfying to feel that we had moved both issues along and made a first small contribution to Drupal core.  It was also comforting to see during the excitement of the live commit session in the afternoon that some of the committed changes were of a similar scale to those which we had made!

Being the more experienced Drupal developer among us, and already familiar with the Drupal issue queue, Adrian was quickly drafted in by Rachel to join three other code sprinters, Darko Kantic (darko-kantic), Glenn Barr (kiwimind) and Jari Nousiainen (holist).  Together they picked up a task to replace the use of the Drupal core theme_implementation() function for table indentation with a Twig template, pooling their individual back-end and front-end skills to come up with a solution that ensures the indentation values update correctly following tabledrag actions.  By the end of the day, they had collaborated to produce a patch including the Twig template and the required JavaScript to handle tabledrag actions.  All that remains is to redo the CSS and background images for tree-child classes that display on mousedown.

Mentored code sprint at DrupalCon Barcelona 2015
Drupal community collaboration in action!

Watching Adrian and the others work together, going through several iterative discussions to come up with the best solution, supported by a mentor who challenged their approach and reminded them to keep the issue queue updated with progress, this was textbook Drupal community collaboration.  By the end of the day, the patch that was uploaded to the issue queue had been worked on by three of the group, whilst the remaining member of the team concentrated on identifying where the theme_implementation() function is used so that the patch can be effectively tested.  Each developer contributed to an aspect of the problem that best suited their skills, whether JavaScript, Twig or CSS, and all were involved in discussing the potential solutions, discarding those that were not suitable as they progressed.  When 6pm came, if the venue doors hadn’t closed, Adrian and Glenn would have been happy to keep working and finish the outstanding CSS tasks; Drupal contribution is addictive!  As it is, they were able to progress the issue very close to completion, and although no commit of their work was made to Drupal 8 on the day, they can continue to collaborate via the IRC channel to finish what they started.

It was really interesting (and fun!) to take part in a DrupalCon mentored code sprint and witness first hand one of the best things about the Drupal community – the spirit of openness and collaboration that has made it a success.  Every contribution, whether large or small, can add something, and every contributor can feel valued by the community for the part they play.  I have already been looking at the issue queue for something else to pick up; the challenge will be to find the space and time to continue what we started at DrupalCon!


IS at DrupalCon – Sessions Day 3

The University of Edinburgh at DrupalCon 2015 in Barcelona

This week myself and a few colleagues attended DrupalCon 2015 in Barcelona and I have been posting some general comments as well as session summaries from Day 1 and Day 2.

In a new approach to the early morning keynotes at DrupalCon, Day 3 began with two Community Keynotes presented by David Rosaz and Mike Bell. It was fantastic to see two community members being given the DrupalCon main stage as a forum to present on two very different topics that are important to them, and of interest to the community in general.

David’s presentation of his PhD research covered the different types of contribution made to peer communities such as the Drupal community, highlighting how important all types of contribution are to the continuing success of any such community, as well as how contributions can be encouraged and sustained to strengthen it.

Mike’s talk was of a much more personal nature, using his own experience of mental health problems to open a conversation on this difficult topic in a presentation that clearly chimed with many who were physically present in the audience or following the session online.  It was inspiring to hear Mike speak so eloquently about his own mental health issues, how he has learned to accept and deal with them, and how others can do the same; such openness is rare, particularly in front of such a large audience, many of whom are complete strangers.  The impact of his presentation, and the audience’s response to it, testifies not only to Mike’s bravery in standing up there to give such a personal talk, a nerve-wracking experience in itself, but also to the inclusive, supportive nature of the Drupal community.

Our experiences of some of the sessions from the final day of DrupalCon in Barcelona are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.

The day finished with the Closing Session, where it was announced that DrupalCon 2016 will be in Dublin.

Building the FrontEnd with AngularJS

This session covered how a Drupal back-end can be decoupled from the front-end, supplying back-end APIs which allow an alternative front-end development tool to be used, a web development technique that is extremely prevalent today. The speaker acknowledged that Drupal does content management very well, but the website delivery tool out of the box does not always live up to the standard of the Drupal back-end.  When constructing the model – adding a new content type – the process of getting fields set up and widgets created to configure the admin form is quick, but a lot of time is required to get the output right in the theming layer. Here, a Fully Decoupled model was proposed to address the limitations of front-end development in Drupal. The speaker noted that an alternative Progressive/Hybrid model could use Drupal to provide, for example, the header, footer and menu, with AngularJS for the rich, functional part of the page.

AngularJS is a framework for building decoupled front-end applications, chosen from the many alternatives for several reasons.  The large development community makes it easy to get help, and the ready availability of lots of modules via ngmodules.org provides solutions for common problems that the community has already solved.  AngularJS embodies OO concepts (dependency injection, etc.) to give a much cleaner codebase, and uses known recipes for laying out the structure of code and solving problems. The framework is supported by Google, which suggests that it should have longevity.

The format of the session was a whistle-stop tour of the tools required to prepare for using Drupal with AngularJS, followed by a demonstration of how the Drupal Views module can be used in conjunction with Drupal 8 RESTful web services to implement a back-end API which will generate view output in pure JSON form for consumption by a front-end application developed in AngularJS.

The following tools are required in the toolkit for a developer wishing to use the techniques demonstrated here:

  • Node.js with NPM as package manager;
  • a JS package manager, in this case Bower;
  • a Task Runner to contain scripts that do particular tasks such as running tests or deploying code, in this case Grunt (Gulp and Broccoli would be suitable alternatives);
  • a Scaffolding tool, which takes away a lot of the work in initially building your application, in this case Yeoman (slush would be another option);
  • a Testing framework, in this case Karma, which comes with AngularJS (Behat is another option).

Having outlined the toolkit required, the speaker went on to demonstrate the stages of development, showing how straightforward decoupling Drupal can be once you have the right tools in place.  The steps covered are described below, but this summary is no substitute for watching the excellent session recording and reviewing the code samples used in the demonstration.

1  Create “REST Export” display for view

A new display type for views is provided by RESTful web services, generating “just the data” in raw JSON format when the API URL is called.  When the View is filtered, for example by ID, only filtered content appears in the JSON RESTful output.
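As a quick illustration of “just the data” (the field names here are my own assumptions, not taken from the session), the JSON returned by a REST Export display for an article view might look something like this:

```javascript
// Hypothetical shape of the raw JSON produced by a views "REST Export" display;
// the fields shown are illustrative only.
const sampleViewOutput = [
  { "nid": "12", "title": "First article",  "created": "1443600000" },
  { "nid": "15", "title": "Second article", "created": "1443686400" }
];
```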

2  Scaffolding for the AngularJS application

A scaffolding tool, Yeoman in this example, provides recipes which do the legwork of building the initial application.  A simple command creates the base directories that are required for the web application, such as app for the code, dist for the compiled/minified files, required components for node & bower, etc. In this example, Node was used for the server side and Bower supplied the dependencies for AngularJS.  A Grunt file defines the tasks which can be done on the project; an IDE such as PHPStorm may provide a pretty visual representation of this.

It was extremely impressive to see just how much of the repetitive process of getting an application up and running can be automated. The scaffolding process creates an empty application that is ready for code!

3  Set up the client side HTML to support the AngularJS code

The AngularJS application demonstrated was a single page application using index.html (HTML is the templating language for AngularJS).  The compiled public version of this file differs from the version used during development because the Grunt task from the scaffolding recipe takes out unnecessary lines that are only for dev purposes when compiling the application.  Again, automation simplifies the development and deployment process.

In index.html, an attribute on the body tag (ng-app) acts as a directive to provide scope for the AngularJS module that will provide functionality.

4  Create the server side AngularJS code

The app.js file in the AngularJS application contains router information to let the front-end know where to send requests.  AngularJS uses dependency injection to inject the correct service at runtime; all that is needed is to provide the service name in arguments when defining the function. It was noted that HTML 5 mode needs to be enabled and base defined in order to use clean URLs, otherwise you get # in URLs.  The $routeProvider configuration is used to tell AngularJS what template to use and what controller to use for each URL.  The response handler is defined in a .js file, and a template file generates the application output using the RESTful web services output drawn from Drupal.
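As a rough sketch of the wiring described above (module, controller, template and endpoint names are my own placeholders rather than those used in the demo), app.js might look something like this, with index.html carrying ng-app="drupalFrontend" on its body tag:

```javascript
// Minimal AngularJS 1.x app.js: clean URLs, one route, and a controller that
// pulls JSON from a Drupal 8 "REST Export" view. All names are illustrative.
angular.module('drupalFrontend', ['ngRoute'])
  .config(function ($routeProvider, $locationProvider) {
    $locationProvider.html5Mode(true);        // clean URLs; index.html needs <base href="/">
    $routeProvider.when('/articles', {
      templateUrl: 'views/articles.html',     // template that renders the output
      controller: 'ArticlesCtrl'
    });
  })
  .controller('ArticlesCtrl', function ($scope, $http) {
    // $scope and $http are injected by name at runtime (dependency injection)
    $http.get('/api/articles').then(function (response) {
      $scope.articles = response.data;        // the raw JSON produced by the Drupal view
    });
  });
```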

Et voila, with all of this in place, the Drupal 8 back-end is successfully decoupled and content consumed using a front-end AngularJS application.

5  Extend the app

Having covered the creation of the application, the demonstration went on to extend it, installing a new client-side package using Bower, which downloads a dependency that can then be configured in the AngularJS app.  This is done by including a reference to the package’s JS file in index.html and adding the dependency to app.js in the section where dependencies are configured.  Once the new client-side package is configured via these simple steps, it is ready to use.
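For example (the package chosen here is just an illustration), after something like bower install angular-animate --save and adding its script tag to index.html, the new module is listed alongside the existing dependencies in app.js:

```javascript
// Register the newly installed client-side package as a module dependency.
// 'ngAnimate' is an illustrative choice; any Bower-installed Angular module
// would be wired in the same way.
angular.module('drupalFrontend', [
  'ngRoute',
  'ngAnimate'
]);
```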

The speaker briefly touched on equivalent functionality for Drupal 7, which does not have the built-in RESTful web services provided by Drupal 8.  The services module can open up all nodes on the system via GUI config, and hooks can be used to override how the data is sent back.  Alternatively, the RESTful module is code-based & gives more control over how the data is returned.  The generator-hedley Yeoman script provides a scaffolding recipe to build a Drupal 7 back-end with an Angular application client, and includes Behat as the testing framework.

This session was very well presented and incredibly dense; the speaker not only provided background on the reasons for decoupling Drupal and how RESTful web services can be used to achieve this, but also gave a really good overview of how an AngularJS application is structured, showing just how clean the code can be and how well back-end and front-end elements are separated.  Some developers in IS Applications are already exploring the possibilities of AngularJS in the context of uPortal development (see Unit testing AngularJS portlet with Maven and Jasmin and Making Portlets Angular); what we saw here indicates that we should definitely be pursuing this further.  Decoupling allows the best tool to be used for the particular task in hand.  The exciting potential is not limited to the  Drupal context; given how much of the web is now being delivered using these decoupling techniques, we should start making the most of the flexibility they provide.

AMA: Drupal Shops Explain How They Do It

This was a Q and A session with a panel of three from larger Drupal shops:

  1. Mike King, Project Manager with AnnerTech in Ireland
  2. Dogma Muth, Project Manager from Amazee Labs in Zurich
  3. Steve Parks, Wunderroot in London

The main takeaway from this session covers the question I raised on UX. What we did during the EdWeb project around UX was basically correct but too chunky; it could be refined to be more efficient by doing the UX in smaller chunks and earlier. Another improvement would be starting the wire-framing before the design is complete, the caveat being to do this only where it fits. A second takeaway is around project communication; the two key words here are “early” and “transparent”!

Below is a summary of the questions and discussion.  Also, check out the Wunder Way at http://way.wunder.io, where Wunderroot explain their project delivery strategy.

Q: How much do you explain to your clients about what Drupal is as a community?

All three said that they explain to their customers the principles behind the community, usually at the outset, and attempt to educate their clients and encourage their teams to engage and, where possible, contribute to the community. It’s also important to get their clients on board so that code can be fed back to the community after a project has finished where this is appropriate.

All three noted that there are different options for time recording, from individual recording across a client/community split to having a percentage within a sprint for community-focused work. Also, community time spent during office hours needs to be met with the same amount of time outside of the organisation.

Q: How can small teams with a limited number of people and resources accommodate all the traditional Agile roles and processes?

In this situation it is important to concentrate on the most important parts and not try to do everything at once. Firstly, use communication as a  tool to ensure that user stories correctly generate the deliverables to achieve the project objectives. With this in mind, it is important to understand why a feature is needed. The “so that” part of the story needs to clearly identify why something is wanted. Again on the communications front, stand-ups are the key to transparency within the project team.

Q: How is UX incorporated into your Agile process, in particular in projects with a large number of user stories and pressure to get things out the door?

Simple answer – wire-frames and process flow! Test often and early, and keep it simple. Test on prototypes and allow sufficient time for this. It is good practice for the person doing the design not to be the one who passes the UX. There is no perfect way, but the key is dialogue! It is not recommended to wait until all the various design parts are finished before starting, and it’s important to keep asking what the user really needs. Find a way to confirm that by getting real end-users involved at the earliest stage possible. Also, UX starts at the beginning of a project with customer journeys.

Q: What is the ideal sprint length?

Two weeks, with a regular meeting structure including adequate time for planning and review. The whole team needs to be involved in the sprint/iteration review. For some projects a shorter sprint/iteration is a better fit, especially where faster demos are required; likewise, under certain circumstances a longer sprint/iteration duration may be better.

Q: What is the best testing approach?

The key here is comprehensive automated testing, peer testing and of course dumb user (PM!) testing. It is also good practice to include testing in the definition of “done”, enforcing the idea that a user story is not done until testing is complete.

Q: How can things that were missed during discovery be picked up at a later stage, and how is this communicated with the client?

It’s easy: go back to the client at the earliest opportunity. Secondly, if this means extra scope then something has to give, and the client needs to prioritise. To avoid this happening, it’s important that the clients understand the principles of Agile and how it works. Change will always happen; it needs to be embraced and communicated, early and accurately, in order to allow prioritisation.

Distributed Teams, Systems & Culture: Finding success with a distributed workforce

In this session, the speaker talked about how Pantheon successfully maintain a worldwide engineering team where 30% of engineers work remotely.

A distributed culture gives autonomy to function in space and time. It has several benefits to the company, such as higher availability of staff and greater coverage of time zones for supporting services, but also benefits staff members too, allowing greater flexibility in how they work, and freeing up time which would otherwise be spent on commuting to an office.

To assist in their distributed working, Pantheon use a variety of tools:

  • Slack instant messaging with the Hubot chat bot;
  • Google Hangouts for meetings;
  • PagerDuty to alert support staff when outages occur;
  • Waffle as a Trello-like board for working with GitHub issues;
  • Sprintly as an Agile board;
  • Stickies as a collaborative online whiteboard;
  • YubiKeys, a hardware key which needs to be plugged into a PC by a staff member, for 2 Factor Authentication.

However, there are things which aren’t as easy when working in a distributed manner. For Pantheon, trust, security and morale are very important; negativity and staff frustration can be amplified when working remotely. Pantheon introduced mandatory working from home days so that all staff could empathise with those who don’t work in an office. The bottom line is that you cannot beat actually getting together in person, but that doing so in a relaxed and more social manner can strongly aid working together remotely, even if only between different offices, by opening communication channels.

While we don’t have much distributed working in IS Applications, a lot of the tools were interesting, and principles and techniques were discussed here which can be applied to people working in the same city but located in different offices and across different teams. We have equivalents for some of the tools demonstrated (HipChat, Skype for Business, Jira and Jira Agile), but using PagerDuty as an alerting system, 2FA hardware keys and extending HipChat with chat bots were all ideas which I will investigate further to see if they could be adopted within the department.

CIBox – full stack open source Continuous Integration flow for Drupal/Symfony teams

The session started by describing the old Continuous Integration workflow used by FFW: there was a single development environment, with all commits made to the master branch and master then deployed to DEV, which caused shared resource problems, and it took developers too long to configure their local development environments each time.

Their current workflow is now much better, and in some ways similar to the development performed for the Drupal projects: local Vagrant VMs are used, with feature branches in Git and automated testing on pull requests, BackTrac shows visual diffs between site versions and multi-node Munin for OS monitoring.

To enable their new workflow, FFW produced CIBox, a standardised, preconfigured way to deploy the Jenkins continuous integration server. These are Vagrant Ubuntu VMs configured with Ansible and set up to use a GitHub project. The Jenkins VMs have Jenkins plugins, LAMP with SSL, CodeSniffer and JSHint code sniffers, SCSS-Lint for SASS file linting, security linters, Jetty and Solr, Selenium and Behat, and Drupal configuration instantly available.

While it is unlikely that we would use CIBox to replace our current Bamboo configuration, it was encouraging to see that many of the improved workflow techniques used by FFW are already being adopted by Development Services (Git with feature branches) or are soon to be investigated (local Vagrant development environments).

Introducing Probo.CI

“Drupal is near impossible to test in an automated way; there’s too much code and too much in the database.”

So began the confident speaker in a very exciting talk about the Probo Continuous Integration server. Traditionally in modern CI workflows, issues would be created and assigned to developers, they would create code and commit it to a feature branch, then this would be reviewed in DEV. However, despite these ‘best practices’, having multiple tickets worked on in one feature branch can mean cherry-picking pain if the Business is only happy that some of the issues have been successfully completed.

An alternative workflow proposed by Probo is for developers still to work on assigned tickets and commit to a feature branch, but then to have each feature branch reviewed in its own temporary environment. This allows far more useful QA to be performed and avoids situations where only half a feature branch is ready for merging.

To enable this alternative workflow which distinguishes the tool from being “yet another CI server”, Probo was created. Available as both a hosted SaaS solution and as an open source project, Probo watches a GitHub project and automatically creates a temporary environment on the creation of a pull request. The technology it runs on is also interesting, using ‘fat’ Docker containers which treat an environment as a single unit.

The process of isolating individual features on a branch is actually similar to how feature branches were used in the project to develop the University’s new Drupal CMS, EdWeb.  Each feature branch represented the functionality for a particular user story, but rather than having temporary environments automatically spin up, each branch was deployed to the Dev infrastructure, and only merged when ready.  Automatic deployment of a temporary environment for each branch would have saved us having to manage the slot for deployment of a feature branch to Dev. Another difference is that QA by the business was carried out in a Test environment after merging with other features; whilst it did not happen often, we were still sometimes in a position where features that had been merged were not quite ready for production.  The ability for the business to do their QA on the feature branch in a temporary environment would have been extremely useful.  The session also highlighted a flaw in the new workflows being developed as part of our Python adoption.

This was a very entertaining session that I would encourage others to watch. Having a way to spin up temporary environments for QA is a very powerful technique which can be applied not just to Drupal, but to all of our development, and is something I intend to investigate further.

Visual Regression Testing

This session centred around Shoov, an open sourced visual regression tool developed by Gizra. Shoov provides both live monitoring of an application – as you would get from pingdom or 24×7 – and live visual regression testing.  Testing for visual regression on the live site allows you to test for issues introduced by 3rd party elements, such as Facebook and Twitter widgets, as well as pick up on elements not rendering as expected, which cannot be spotted by conventional tests.  It helps identify the cases where the site is broken as far as the users are concerned, but more conventional monitoring would report everything to be OK.

The session demonstrated how to use Behat to define your tests, and how to run the same test for multiple browsers (Chrome, IE, etc) on multiple platforms (Windows 7, OS X Yosemite, iPhone 5, etc) across multiple viewports (320,  640, 960, etc).  You aren’t tied to Behat for testing; cucumber, casper.js and others are also supported.

The demonstration also covered how to exclude specific elements on the page that you always expect to differ from your base element, such as video, image carousels or other animated elements.  You just use a CSS3 identifier to specify whether it should be excluded, hidden or removed before generating the diff image. Not only do you get a high contrast image diff, as Wraith generates (see also Fundamentals of Front-End Ops), but you can also get an image overlay where you can swipe to reveal one version overlaid on the other.

Building semantic content models in Drupal 8

RDFa from schema.org is now in Drupal 8 core and this session showed what is currently possible with the help of contrib modules and what is in the pipeline with sandbox modules.

There is a lot of work going on to reduce the overhead both for site builders and site users in adding semantic markup to their pages.  In Drupal 7 it is not a quick process to build a new entity and map its fields to RDFa properties.

With the RDF UI module it becomes very easy to generate a new content type based on a schema.org definition.  If you want to create a new sporting event content type for example, you can specify that it is to be generated with a schema.org definition and you are just presented with a list of fields derived from http://schema.org/SportsEvent; then you just need to select those properties you want to use and generate fields for, and the entity is built for you with all the RDFa mapping done.

Keeping to the premise that you shouldn’t be replicating content in many places, there is a lot of effort going into tapping into external sources for taxonomies and marking those up with the correct RDFa automatically.  Being able to have Entity Reference Fields take data from external APIs means you don’t have to replicate the effort in maintaining the taxonomy.  For instance, if you want the user to select a genre for your music site, just point your entity reference field at the Genre API and offload that work, while ensuring the semantic markup is also there to help search engines give intelligent results for searches by music genre.

When it comes to user-generated content and including semantic markup, there has only really been the RDFa Content Editor (RDFaCE) plugin for TinyMCE.  But now we have a couple of extra buttons coming to CKEditor in Drupal 8 to allow users to apply semantic links to content – with dynamic lookups to Wikidata – to make it easy for you to, for instance, mark the word “Paris” in your content as a prince of Troy rather than have search engines interpret your content as relevant for the capital of France.  There is a dynamic lookup based on your initial selection which you can further refine with additional terms to locate the correct “Paris” in the list and select that, and this is all without leaving your main workflow, making it more likely that content editors will actually use semantic markup.

What we learned from building an (extensible) distribution

This session covered lessons learned during the development of the ERPAL distribution.  There are many uses for a distribution platform, which can start to introduce new challenges.
At the University our mechanism for supplying a Distribution profile matching the central Drupal CMS provision is still quite new, as is using Drupal in general. Although not widely used at the moment, there is quite a lot of scope for implementing sites based on the Distribution. It is, however, quite difficult to pre-empt how something so new will be used; we should remain aware of its potential as it matures in order to exploit it.

I attended this session with a colleague from the University Website Programme team, who manage the central Drupal CMS provision, EdWeb.  Afterwards, the talk sparked a conversation about our own distribution and issues we might have with it at the moment. The main thing that came out of this discussion is that a default config for our distribution site would be useful, to make it easier for users to get up and running with it. We will follow this up by writing up some of the areas which have already arisen as needing configuration for new users of our distribution.  We can then identify how to incorporate this into the distribution itself, or even just into the one-click distribution provided on our central hosting system, which will be much simpler to achieve and may be all that is required.

Drupal 8 retrospective with Dries

In this session, Dries talked mainly about the high and low points of the Drupal 8 project.

One of the main suggestions that came out of this was to release fewer things sooner, which is a strategy that will be adopted for future Drupal releases. It’s possible to see parallels between the Drupal 8 project and our project to develop the central Drupal CMS, EdWeb, giving some perspective on what we have done and achieved, and suggesting how we might proceed in future.