Some time ago I wrote a post on our first EdWeb code sprint. Obviously things have moved on since 2016 – for one thing we have run quite a few contribution events since that pilot! Last December we hosted a Drupal 8 contribution day, as code sprints are now known, inviting some developers from the University of Dundee to take part. Over the years between my original post and our latest contribution day we have learned a lot about making the most of this type of event. Now seems an excellent time to reflect on my original recommendations for staging a successful code sprint for any application or service and add a few extra things to the list.
A View from the Prater – IS at DrupalCon Vienna, Day 2
This week a group of us from Information Services are attending DrupalCon 2017 in Vienna and we are sharing our thoughts on the sessions we attend, recommending top sessions, and giving our key takeaways from our DrupalCon experience. Yesterday I posted our reactions to the first day of DrupalCon, and today we continue our DrupalCon reportage.
For two of our party, Tim Gray and Bruce Darby, this was a very exciting day as they were presenting a session on how we have used code sprints and collaborative development to build a community of users and developers around EdWeb. More on our first-time DrupalCon Speakers later!
A View from the Prater – IS at DrupalCon Vienna, Day 1
As we embark upon our next big adventure, planning for the migration from Drupal 7 to Drupal 8 of EdWeb, the University’s central CMS, a group of us from Information Services are here in Vienna this week attending DrupalCon 2017. We are a small but diverse bunch of project managers, developers, sysadmins, and support staff who all play a part in building, running and managing EdWeb. For the next few days we’ll be sharing our thoughts on the sessions we attend, recommending top sessions, and giving our key takeaways – not the wurst variety – from our DrupalCon experience.
On Tuesday, we started DrupalCon the right way by attending the always entertaining Pre-note, followed by Dries Buytaert’s traditional Driesnote keynote presentation on the state of Drupal. We then set out on our different tracks, paths crossing at coffee and lunch, for the first intense but interesting day of DrupalCon sessions.
Code sprint five by five
Last week Development Services, in collaboration with colleagues in the University Website Programme team, helped run a code sprint with developers from around the University to work on fixes and enhancements for EdWeb, the Drupal-based content management system that underpins the University’s website. This post gathers some technology-agnostic thoughts on what we did to prepare, and how we ran our sprint, that might be of interest to anyone thinking about running a similar event.
IS at DrupalCon – Mentored Code Sprint
This week a few colleagues and I attended DrupalCon 2015 in Barcelona, and I have been posting some general comments as well as session summaries from Day 1, Day 2 and Day 3.
On Friday, after the main conference ended, the conference centre remained open for the traditional post-conference code sprints, including the Mentored Core Sprint, which Adrian Richardson, Andrew Gleeson and I attended for the first time. It turns out code sprints are addictive; we arrived at 9am intending to stay until mid-afternoon and were thrown out along with the last remaining sprinters when the building closed at 6pm! Fuelled only by water, caffeine and a very short lunch eaten at our code sprint table, each of us contributed something during the session to move Drupal 8 core along. Some other first-time sprinters were even lucky enough to have their first contribution made to Drupal in a live commit by Angela Byron (webchick) part-way through the code sprint! Having missed out on attending previous DrupalCon code sprints, it was great to finally have the opportunity to join in and contribute to Drupal!
Before arriving for the code sprint, we had prepared our laptops with a Drupal 8 install as well as the various tools described on the DrupalCon website, choosing the Acquia Dev Desktop as the quickest option to get started. We began the day at the First Time Sprinter Workshop to ensure we were all ready to go, and then moved through to the code sprint room, joining the many Drupalistas who had already settled down to coding. The mentor for our table was Rachel Lawson (rachel_norfolk on Drupal.org), who was friendly and extremely helpful in keeping us on the right track as we worked on the issues we picked up from the issue queue.
With Rachel’s guidance, Andrew and I managed to find a couple of related UI issues in Drupal core, specifically the Configuration and Structure administration pages, to give us some experience of using the issue queue. Neither of us had used the issue queue in anger before – the most I have done is re-roll a patch – so we chose something simple, and Rachel kept us right when it came to documenting what we were doing by commenting on the issues we picked up. When 6pm came and we had to leave, we had each uploaded a patch for the issue we were working on, and although neither resulted in a commit before we left, it was very satisfying to feel that we had moved both issues along and made a first small contribution to Drupal core. It was also comforting to see during the excitement of the live commit session in the afternoon that some of the committed changes were of a similar scale to those which we had made!
Being the more experienced Drupal developer among us, and already familiar with the Drupal issue queue, Adrian was quickly drafted in by Rachel to join three other code sprinters, Darko Kantic (darko-kantic), Glenn Barr (kiwimind) and Jari Nousiainen (holist). Together they picked up a task to replace the use of the Drupal core theme_implementation() function for table indentation with a Twig template, pooling their individual back-end and front-end skills to come up with a solution that ensures the indentation values update correctly following tabledrag actions. By the end of the day, they had collaborated to produce a patch including the Twig template and the required JavaScript to handle tabledrag actions. All that remains is to redo the CSS and background images for tree-child classes that display on mousedown.
Watching Adrian and the others work together, going through several iterative discussions to come up with the best solution, supported by a mentor who challenged their approach and reminded them to keep the issue queue updated with progress, was like watching textbook Drupal community collaboration. By the end of the day, the patch that was uploaded to the issue queue had been worked on by three of the group, whilst the remaining member of the team concentrated on identifying where the theme_implementation() function is used so that the patch can be effectively tested. Each developer contributed to an aspect of the problem that best suited their skills, whether JavaScript, Twig or CSS, and all were involved in discussing the potential solutions, discarding those that were not suitable as they progressed. When 6pm came, if the venue doors hadn’t closed, Adrian and Glenn would have been happy to keep working and finish the outstanding CSS tasks; Drupal contribution is addictive! As it is, they were able to progress the issue very close to completion, and although no commit to Drupal 8 of the work they have done was made on the day, they can continue to collaborate via the IRC channel to finish what they started.
It was really interesting (and fun!) to take part in a DrupalCon mentored code sprint and witness first hand one of the best things about the Drupal community – the spirit of openness and collaboration that has made it a success. Every contribution, whether large or small, can add something, and every contributor can feel valued by the community for the part they play. I have already been looking at the issue queue for something else to pick up; the challenge will be to find the space and time to continue what we started at DrupalCon!
IS at DrupalCon – Sessions Day 3
This week a few colleagues and I attended DrupalCon 2015 in Barcelona, and I have been posting some general comments as well as session summaries from Day 1 and Day 2.
In a new approach to the early morning keynotes at DrupalCon, Day 3 began with two Community Keynotes presented by David Rozas and Mike Bell. It was fantastic to see two community members being given the DrupalCon main stage as a forum to present on two very different topics that are important to them, and of interest to the community in general.
David’s presentation of his PhD research covered the different types of contribution made to peer communities such as the Drupal community, highlighting how important all types of contribution are to the continuing success of any such community, as well as how contributions can be encouraged and sustained to strengthen it.
Mike’s talk was of a much more personal nature, using his own experience of mental health problems to open a conversation on this difficult topic in a presentation that clearly chimed with many who were physically present in the audience or following the session online. It was inspiring to hear Mike speak so eloquently about his own mental health issues, how he has learned to accept and deal with them, and how others can do the same; such openness is rare, particularly in front of such a large audience, many of whom are complete strangers. The impact of his presentation, and the audience’s response to it, testifies not only to Mike’s bravery in standing up there to give such a personal talk, a nerve-wracking experience in itself, but also to the inclusive, supportive nature of the Drupal community.
Our experiences of some of the sessions from the final day of DrupalCon in Barcelona are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.
The day finished with the Closing Session, where it was announced that DrupalCon 2016 will be in Dublin.
Building the FrontEnd with AngularJS
This session covered how a Drupal back-end can be decoupled from the front-end, supplying back-end APIs which allow an alternative front-end development tool to be used, a web development technique that is extremely prevalent today. The speaker acknowledged that Drupal does content management very well, but the website delivery tool out of the box does not always live up to the standard of the Drupal back-end. When constructing the model – adding a new content type – the process of getting fields set up and widgets created to configure the admin form is quick, but a lot of time is required to get the output right in the theming layer. Here, a Fully Decoupled model was proposed to address the limitations of front-end development in Drupal. The speaker noted that an alternative Progressive/Hybrid model could use Drupal to provide, for example, the header, footer and menu, with AngularJS for the rich, functional part of the page.
AngularJS is a framework for building decoupled front-end applications, chosen from the many alternatives for several reasons. The large development community makes it easy to get help, and the ready availability of lots of modules via ngmodules.org provides solutions for common problems that the community has already solved. AngularJS embodies OO concepts (dependency injection, etc.) to give a much cleaner codebase, and uses known recipes for laying out the structure of code and solving problems. The framework is supported by Google, which suggests that it should have longevity.
The format of the session was a whistle-stop tour of the tools required to prepare for using Drupal with AngularJS, followed by a demonstration of how the Drupal Views module can be used in conjunction with Drupal 8 RESTful web services to implement a back-end API which will generate view output in pure JSON form for consumption by a front-end application developed in AngularJS.
The following tools are required in the toolkit for a developer wishing to use the techniques demonstrated here:
- Node.js with NPM as package manager;
- a JS package manager, in this case Bower;
- a Task Runner to contain scripts that do particular tasks such as running tests or deploying code, in this case Grunt (Gulp and Broccoli would be suitable alternatives; a minimal Gruntfile sketch follows this list);
- a Scaffolding tool, which takes away a lot of the work in initially building your application, in this case Yeoman (slush would be another option);
- a Testing framework, in this case Karma, which comes with AngularJS (Behat is another option).
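To make the task runner item above concrete, here is a minimal Gruntfile sketch; it is a generic illustration rather than the configuration used in the session, and the linted paths and plugin choice are assumptions.

```js
// Gruntfile.js – minimal sketch of a task runner configuration.
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      // Assumed location of the application code.
      all: ['app/scripts/**/*.js']
    }
  });

  // Plugins are installed via npm, e.g. npm install grunt-contrib-jshint --save-dev
  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Named tasks: `grunt test` lints the code; a real project might also run Karma here.
  grunt.registerTask('test', ['jshint']);
  grunt.registerTask('default', ['test']);
};
```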
Having outlined the toolkit required, the speaker went on to demonstrate the stages of development, showing how straightforward decoupling Drupal can be once you have the right tools in place. The steps covered are described below, but this summary is no substitute for watching the excellent session recording and reviewing the code samples used in the demonstration.
1 Create “REST Export” display for view
A new display type for views is provided by RESTful web services, generating “just the data” in raw JSON format when the API URL is called. When the View is filtered, for example by ID, only filtered content appears in the JSON RESTful output.
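As a rough sketch of what consuming such a display looks like from the front-end (the endpoint path /api/articles and the module and controller names are illustrative assumptions, not the demo’s actual code):

```js
// Fetch a Drupal 8 "REST Export" view display from AngularJS.
// '/api/articles' is a hypothetical path set on the view's REST Export display.
angular.module('demoApp', [])
  .controller('ArticleListController', ['$http', function ($http) {
    var vm = this;
    vm.articles = [];
    $http.get('/api/articles').then(function (response) {
      // The REST export returns "just the data" as raw JSON.
      vm.articles = response.data;
    });
  }]);
```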
2 Scaffolding for the AngularJS application
A scaffolding tool, Yeoman in this example, provides recipes which do the legwork of building the initial application. A simple command creates the base directories that are required for the web application, such as app for the code, dist for the compiled/minified files, the required components for Node and Bower, etc. In this example, Node was used for the server side and Bower supplied the dependencies for AngularJS. A Grunt file defines the tasks which can be done on the project; an IDE such as PHPStorm may provide a pretty visual representation of this.
It was extremely impressive to see just how much of the repetitive process of getting an application up and running can be automated. The scaffolding process creates an empty application that is ready for code!
3 Set up the client side HTML to support the AngularJS code
The AngularJS application demonstrated was a single page application using index.html (HTML is the templating language for AngularJS). The compiled public version of this file differs from the version used during development because the Grunt task from the scaffolding recipe takes out unnecessary lines that are only for dev purposes when compiling the application. Again, automation simplifies the development and deployment process.
In index.html, an attribute on the body tag (ng-app) acts as a directive to provide scope for the AngularJS module that will provide functionality.
4 Create the server side AngularJS code
The app.js file in the AngularJS application contains router information to let the front-end know where to send requests. AngularJS uses dependency injection to inject the correct service at runtime; all that is needed is to provide the service name in arguments when defining the function. It was noted that HTML 5 mode needs to be enabled and base defined in order to use clean URLs, otherwise you get # in URLs. The $routeProvider configuration is used to tell AngularJS what template to use and what controller to use for each URL. The response handler is defined in a .js file, and a template file generates the application output using the RESTful web services output drawn from Drupal.
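To make the shape of this concrete, here is a minimal app.js sketch along the lines described, pairing with the controller sketched earlier; the module, template and controller names are assumptions for illustration:

```js
// app.js – minimal sketch of AngularJS routing for the decoupled front-end.
angular.module('demoApp', ['ngRoute'])
  .config(['$locationProvider', '$routeProvider',
    function ($locationProvider, $routeProvider) {
      // Enable HTML5 mode for clean URLs (no '#'); index.html must
      // define a <base href="/"> for this to work, as noted above.
      $locationProvider.html5Mode(true);

      // Tell AngularJS which template and controller to use per URL.
      $routeProvider.when('/articles', {
        templateUrl: 'views/articles.html',
        controller: 'ArticleListController',
        controllerAs: 'vm'
      });
    }]);
```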
Et voilà, with all of this in place, the Drupal 8 back-end is successfully decoupled and content consumed using a front-end AngularJS application.
5 Extend the app
Having covered the creation of the application, the demonstration went on to extend it, installing a new client-side package using Bower, which downloads a dependency that can then be configured in the AngularJS app. This is done by including a script tag for the package’s JS file in index.html and adding the dependency to app.js in the section where dependencies are configured. Once the new client-side package is configured via these simple steps, it is ready to use.
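For example (with a deliberately hypothetical package name), the two wiring steps look like this:

```js
// Step 1: index.html gains a script tag for the downloaded package, e.g.
//   <script src="bower_components/some-package/some-package.js"></script>
// Step 2: app.js declares the package's module as a dependency:
angular.module('demoApp', ['ngRoute', 'somePackage']);
```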
The speaker briefly touched on equivalent functionality for Drupal 7, which does not have the built-in RESTful web services provided by Drupal 8. The Services module can open up all nodes on the system via GUI config, and hooks can be used to override how the data is sent back. Alternatively, the RESTful module is code-based and gives more control over how the data is returned. The generator-hedley Yeoman script provides a scaffolding recipe to build a Drupal 7 back-end with an Angular application client, and includes Behat as the testing framework.
This session was very well presented and incredibly dense; the speaker not only provided background on the reasons for decoupling Drupal and how RESTful web services can be used to achieve this, but also gave a really good overview of how an AngularJS application is structured, showing just how clean the code can be and how well back-end and front-end elements are separated. Some developers in IS Applications are already exploring the possibilities of AngularJS in the context of uPortal development (see Unit testing AngularJS portlet with Maven and Jasmine and Making Portlets Angular); what we saw here indicates that we should definitely be pursuing this further. Decoupling allows the best tool to be used for the particular task in hand. The exciting potential is not limited to the Drupal context; given how much of the web is now being delivered using these decoupling techniques, we should start making the most of the flexibility they provide.
AMA: Drupal Shops Explain How They Do It
This was a Q&A session with a panel of three people from larger Drupal shops:
- Mike King, Project Manager with AnnerTech in Ireland
- Dogma Muth, Project Manager from Amazee Labs in Zurich
- Steve Parks, Wunderroot in London
The main takeaway from this session concerns the question I raised on UX. What we did during the EdWeb project around UX was basically correct but too chunky; it could be made more efficient by doing the UX in smaller chunks and earlier. Another improvement would be starting the wire-framing before the design is complete, the caveat being to do this only where it fits. A second takeaway is around project communication; the two key words here are “early” and “transparent”!
Below is a summary of the questions and discussion. Also, check out the Wunder Way at http://way.wunder.io, where Wunderroot explain their project delivery strategy.
Q: How much do you explain to your clients about what Drupal is as a community?
All three said that they explain to their customers the principles behind the community, usually at the outset, and attempt to educate their clients and encourage their teams to engage and, where possible, contribute to the community. It’s also important to get their clients on board so that code can be fed back to the community after a project has finished where this is appropriate.
All three noted that there are different options for time recording from individual recording across a client/community split to having a percentage within a sprint for community-focused work. Also, community time spent during office hours needs to be met with the same amount of time outside of the organisation.
Q: How can small teams with a limited number of people and resources accommodate all the traditional Agile roles and processes?
In this situation it is important to concentrate on the most important parts and not try to do everything at once. Firstly, use communication as a tool to ensure that user stories correctly generate the deliverables to achieve the project objectives. With this in mind, it is important to understand why a feature is needed. The “so that” part of the story needs to clearly identify why something is wanted. Again on the communications front, stand-ups are the key to transparency within the project team.
Q: How is UX incorporated into your Agile process, in particular in projects with a large number of user stories and pressure to get things out the door?
Simple answer – wire-frames and process flow! Test often and early and keep it simple. Test on prototypes and allow sufficient time for this. It is good practice for the person doing the design not to be the one who passes the UX. There is no perfect way, but the key is dialogue! It is not recommended to wait until all the various design parts are finished before starting, and it’s important to keep asking what the user really needs. Find a way to confirm that by getting real end-users involved at the earliest stage possible. Also, UX starts at the beginning of a project with customer journeys.
Q: What is the ideal sprint length?
Two weeks, with a regular meeting structure including adequate time for planning and review. The whole team needs to be involved in the sprint/iteration review. For some projects a shorter sprint/iteration is a better fit, especially where faster demos are required; likewise, under certain circumstances a longer sprint/iteration duration may be better.
Q: What is the best testing approach?
The key here is comprehensive automated testing, peer testing and of course dumb user (PM!) testing. It is also good practice to include testing in the definition of “done”, enforcing the idea that a user story is not done until testing is complete.
Q: How can things that were missed during discovery be picked up at a later stage, and how is this communicated with the client?
It’s easy: go back to the client at the earliest opportunity. Secondly, if this means extra scope then something has to give, and the client needs to prioritise. To avoid this happening, it’s important that the clients understand the principles of Agile and how it works. Change will always happen; it needs to be embraced and communicated, early and accurately, in order to allow prioritisation.
Distributed Teams, Systems & Culture: Finding success with a distributed workforce
In this session, the speaker talked about how Pantheon successfully maintain a worldwide engineering team where 30% of engineers work remotely.
A distributed culture gives autonomy to function in space and time. It has several benefits to the company, such as higher availability of staff and greater coverage of time zones for supporting services, but also benefits staff members too, allowing greater flexibility in how they work, and freeing up time which would otherwise be spent on commuting to an office.
To assist in their distributed working, Pantheon use a variety of tools:
- Slack instant messaging with the Hubot chat bot;
- Google Hangouts for meetings;
- PagerDuty to alert support staff when outages occur;
- Waffle as a Trello-like board for working with GitHub issues;
- Sprintly as an Agile board;
- Stickies as a collaborative online whiteboard;
- YubiKeys, a hardware key which needs to be plugged into a PC by a staff member, for 2 Factor Authentication.
However, there are things which aren’t as easy when working in a distributed manner. For Pantheon, trust, security and morale are very important; negativity and staff frustration can be amplified when working remotely. Pantheon introduced mandatory working from home days so that all staff could empathise with those who don’t work in an office. The bottom line is that you cannot beat actually getting together in person, but that doing so in a relaxed and more social manner can strongly aid working together remotely, even if only between different offices, by opening communication channels.
While we don’t have much distributed working in IS Applications, a lot of the tools were interesting, and principles and techniques were discussed here which can be applied to people working in the same city but located in different offices and across different teams. We have equivalents for some of the tools demonstrated (HipChat, Skype for Business, Jira and Jira Agile), but using PagerDuty as an alerting system, 2FA hardware keys and extending HipChat with chat bots were all ideas which I will investigate further to see if they could be adopted within the department.
CIBox – full stack open source Continuous Integration flow for Drupal/Symfony teams
The session started by describing the old Continuous Integration workflow used by FFW; there was a single development environment, with all commits made to the master branch and then master was deployed to DEV, which caused shared resource problems and took too long for developers to configure their local development environment each time.
Their current workflow is now much better, and in some ways similar to the development performed for the Drupal projects: local Vagrant VMs are used, with feature branches in Git and automated testing on pull requests, BackTrac shows visual diffs between site versions and multi-node Munin for OS monitoring.
To enable their new workflow FFW produced CIBox, a standardised, preconfigured way to deploy the Jenkins continuous integration server. These are Vagrant Ubuntu VMs configured with Ansible and set up to use a GitHub project. The Jenkins VMs have Jenkins plugins, LAMP with SSL, CodeSniffer and JSHint code sniffers, SCSS-Lint for SASS file linting, security linters, Jetty and Solr, Selenium and Behat, and Drupal configuration instantly available.
While it is unlikely that we would use CIBox to replace our current Bamboo configuration, it was encouraging to see that many of the improved workflow techniques used by FFW are already being adopted by Development Services (Git with feature branches) or are soon to be investigated (local Vagrant development environments).
Introducing Probo.CI
“Drupal is near impossible to test in an automated way; there’s too much code and too much in the database.”
So began the confident speaker in a very exciting talk about the Probo Continuous Integration server. Traditionally in modern CI workflows, issues would be created and assigned to developers, they would create code and commit it to a feature branch, then this would be reviewed in DEV. However, despite these ‘best practices’, having multiple tickets worked on in one feature branch can mean cherry-picking pain if the business is only happy that some of the issues have been successfully completed.
An alternative workflow proposed by Probo is to still have assigned tickets get coded on by developers and committed to a feature branch, but then have these feature branches get reviewed in their own temporary environments. This allows far more useful QA to be performed and avoids situations where only half a feature branch is ready for merging.
To enable this alternative workflow, which distinguishes the tool from being “yet another CI server”, Probo was created. Available as both a hosted SaaS solution and as an open source project, Probo watches a GitHub project and automatically creates a temporary environment on the creation of a pull request. The technology it runs on is also interesting, using ‘fat’ Docker containers which treat an environment as a single unit.
The process of isolating individual features on a branch is actually similar to how feature branches were used in the project to develop the University’s new Drupal CMS, EdWeb. Each feature branch represented the functionality for a particular user story, but rather than having temporary environments automatically spin up, each branch was deployed to the Dev infrastructure, and only merged when ready. Automatic deployment of a temporary environment for each branch would have saved us having to manage the slot for deployment of a feature branch to Dev. Another difference is that QA by the business was carried out in a Test environment after merging with other features; whilst it did not happen often, we were still sometimes in a position where features that had been merged were not quite ready for production. The ability for the business to do their QA on the feature branch in a temporary environment would have been extremely useful. The session also highlighted a flaw in the new workflows being developed as part of our Python adoption.
This was a very entertaining session that I would encourage others to watch. Having a way to spin up temporary environments for QA is a very powerful technique which can be applied not just to Drupal, but to all of our development, and is something I intend to investigate further.
Visual Regression Testing
This session centred around Shoov, an open sourced visual regression tool developed by Gizra. Shoov provides both live monitoring of an application – as you would get from pingdom or 24×7 – and live visual regression testing. Testing for visual regression on the live site allows you to test for issues introduced by 3rd party elements, such as Facebook and Twitter widgets, as well as pick up on elements not rendering as expected, which cannot be spotted by conventional tests. It helps identify the cases where the site is broken as far as the users are concerned, but more conventional monitoring would report everything to be OK.
The session demonstrated how to use Behat to define your tests, and how to run the same test for multiple browsers (Chrome, IE, etc) on multiple platforms (Windows 7, OS X Yosemite, iPhone 5, etc) across multiple viewports (320, 640, 960, etc). You aren’t tied to Behat for testing; cucumber, casper.js and others are also supported.
The demonstration also covered how to exclude specific elements on the page that you always expect to differ from your base element, such as video, image carousels or other animated elements. You just use a CSS3 identifier to specify whether it should be excluded, hidden or removed before generating the diff image. Not only do you get a high contrast image diff, as Wraith generates (see also Fundamentals of Front-End Ops), but you can also get an image overlay where you can swipe to reveal one version overlaid on the other.
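As a purely illustrative sketch of the element-filtering idea (the captureAndCompare helper and its options are hypothetical, not Shoov’s actual API):

```js
// Hypothetical visual regression check; all names are illustrative only.
captureAndCompare({
  url: 'https://www.example.com/',
  browsers: ['chrome', 'internet explorer'],
  viewports: [320, 640, 960],
  // CSS3 selectors for elements expected to differ between captures:
  exclude: ['.image-carousel', 'video'], // ignored when diffing
  hide: ['.twitter-widget'],             // hidden before capture
  remove: ['.facebook-feed']             // removed from the DOM before capture
});
```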
Building semantic content models in Drupal 8
RDFa from schema.org is now in Drupal 8 core and this session showed what is currently possible with the help of contrib modules and what is in the pipeline with sandbox modules.
There is a lot of work going on to reduce the overhead both for site builders and site users in adding semantic markup to their pages. In Drupal 7 it is not a quick process to build a new entity and map its fields to RDFa properties.
With the RDF UI module it becomes very easy to generate a new content type based on a schema.org definition. If you want to create a new sporting event content type, for example, you can specify that it is to be generated with a schema.org definition and you are presented with a list of fields derived from http://schema.org/SportsEvent; you then just need to select those properties you want to use and generate fields for, and the entity is built for you with all the RDFa mapping done.
Keeping to the premise that you shouldn’t be replicating content in many places, there is a lot of effort going into tapping into external sources for taxonomies and marking those up with the correct RDFa automatically. Being able to have Entity Reference Fields take data from external APIs means you don’t have to replicate the effort of maintaining the taxonomy. For instance, if you want the user to select a genre on your music site, just point your entity reference field at the Genre API and offload that work, while ensuring the semantic markup is also there to help search engines give intelligent results for searches by music genre.
When it comes to user-generated content and including semantic markup, there has only really been the RDFa Content Editor (RDFaCE) plugin for TinyMCE. But now we have a couple of extra buttons coming to CKEditor in Drupal 8 to allow users to apply semantic links to content – with dynamic lookups to Wikidata – to make it easy for you to, for instance, mark the word “Paris” in your content as a prince of Troy rather than have search engines interpret your content as relevant for the capital of France. There is a dynamic lookup based on your initial selection which you can further refine with additional terms to locate the correct “Paris” in the list and select that, and this is all without leaving your main workflow, making it more likely that content editors will actually use semantic markup.
What we learned from building an (extensible) distribution
This session covered lessons learned during the development of the ERPAL distribution. There are many uses for a distribution platform, each of which can start to introduce new challenges.
At the University our mechanism for supplying a Distribution profile matching the central Drupal CMS provision is still quite new, as is using Drupal in general. Although not widely used at the moment, there is quite a lot of scope for implementing sites based on the Distribution. It is however quite difficult to pre-empt how something so new will be used; we should remain aware of the potential as it matures in order to exploit it.
I attended this session with a colleague from the University Website Programme team, who manage the central Drupal CMS provision, EdWeb. Afterwards, the talk sparked a conversation about our own distribution and issues which we might have at the moment. The main thing that came out of this discussion is that a default config for our distribution site would be useful to make it easier for users to get up and running with working with it. We will follow this up by writing up some of the areas which have already arisen as needing some configuration for new users of our distribution. We can then identify how to incorporate this into this distribution itself, or even just into the one-click distribution provided on our central hosting system, which will be much simpler to achieve, and may be all that is required.
Drupal 8 retrospective with Dries
In this session, Dries talked mainly about the high and low points of the Drupal 8 project.
One of the main suggestions that came out of this was to release fewer things sooner, which is a strategy that will be adopted for future Drupal releases. It’s possible to see parallels between the Drupal 8 project and our project to develop the central Drupal CMS, EdWeb, giving some perspective on what we have done and achieved, and suggesting how we might proceed in future.
IS at DrupalCon – Sessions Day 2
On Tuesday and Wednesday I posted some session summaries and comments from DrupalCon 2015 in Barcelona, where a few colleagues and I are spending this week.
Day 2 of DrupalCon began with a short session celebrating those involved with Drupal, i.e. partners and contributors, highlighting the importance of Drupal community members contributing through sprints, followed by the morning’s Keynote with Nathalie Nahai. Nathalie spoke about web psychology, providing a scientific perspective on how people see and react to different aspects of web content presentation. Admittedly the theme was more applicable to those Drupal users who deal with marketing aspects of websites, since it was concerned with how to get, and keep, the attention that you desire from your online audience. However, the principles apply equally well to any organisation or institution interested in engaging in the most effective way with visitors to their website.
The day continued with many more sessions across a broad range of topics. We also took part in a couple of Birds of a Feather sessions, one of the great features of DrupalCon, allowing members of the Drupal community with a common background or interest an opportunity to discuss face-to-face the issues they deal with, sharing their knowledge and experience and exploring potential strategies to resolve those issues.
Our experiences of some of Wednesday’s DrupalCon sessions are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.
How Changing our Estimation Process Took our Project Endgame from WTF to FTW
This session covered estimation and how to turn this critical project component from something that often leads to a project being perceived as a failure, into an accurate and more reliable part of the project process. A common issue within some organisations is that the person deciding the budget does not have the in-depth knowledge of the project deliverables required to make sensible decisions. A lot of work goes into creating an initial estimate without knowing the details of the deliverables; the “bid”, or in UoE terms the proposal estimate, needs to be made on the objectives, and it should be accepted that this is what the estimate reflects.
During the estimation process, it’s crucial to ask the right questions, to review and to explain the process outlined below (creating transparency), to define scope (get the business to say what they want), and to discuss and agree milestones.
The next part in the process is the discovery, which is best done with UX sketches, but this needs a designer! Rapid iterative design should be the approach, with sketch approval, continuing early tech planning with sketches in preference to wireframes; these sketches are not a full set of requirements but enable rough estimates to be produced with a goal of +/- 40% accuracy. This provides an early indication of feature complexity and expedites prioritisation before moving on to wireframes, and long before anything is actually built. Wireframes must be fully approved before beginning the next stage.
The next stage is full Tech Planning, which involves a larger group of people with a goal of achieving estimates of +/- 10% accuracy, adding implementation notes. This stage comprises several 1.5–2 hour meetings over a couple of weeks where the deliverables are broken down into tasks and these are estimated in hours. These sessions involve lots of discussion using a kind of low-level poker estimation, but do not involve the business. The project manager can then create a budget breakdown based on the estimates; the results, which are now deliverables, are then shared with the business and recommendations discussed with them.
If the estimation is over the available budget at this stage, the options are clear: descope, share work or find more budget.
During the build pay close attention to:
1. Large overspend on individual tasks;
2. New requirements (these need prioritisation!);
3. Weekly budget reviews and status check ins;
4. Demo as often as possible as this gets customers excited when they can see a concept come to life!
Finally, at project wrap-up, the project should be within 10% of the estimate. It is worth noting that this process doesn’t really work when the client provides UX, or for time and materials projects – in that case just go Agile!
However, this approach does present two main challenges: it requires partner buy in, and essential meetings are difficult to schedule.
My takeaways from this session are the need to review and constantly update estimates as the project moves forward, the importance of prioritisation and defining what is actually needed, and creating transparency throughout this process. On suitable projects, this could involve additional soft milestones for estimation. Another takeaway relates to the estimation process itself, to make it more accurate. To do this we need to have more detail before committing to delivery. As project managers, we need to be strong and not press ahead when there is insufficient detail; without this, there is a tendency to estimate at an abstract optimistic level.
Drupal Extreme Scaling
The old way to boost Drupal performance is to use the following technologies: memcache, APC/OPcache, Varnish, and server redundancy. We currently use all of these. We can now utilise elastic computing and containerisation to boost performance – as the presenter put it:
“this is not future technology, but present technology”
The speaker’s team was tasked with providing a minimum-cost, automated, no-downtime hosting platform for 30 to 100 thousand Drupal sites. To do this, Amazon Web Services infrastructure was used to run a stack consisting of Docker containers, Nginx, MySQL, MongoDB, with Ansible for Configuration Management, a Node.js administrative application, Apache Mesos as an abstraction layer, and Marathon and Chronos running on Mesos to allow it to control Docker containers and scheduled tasks. The end result gave a platform which could perform EC2 auto scaling and spin up Amazon Machine Images containing the three Docker containers used (one for the admin application, one for Varnish, and a Drupal container which would be used for every site), while databases were shared, one per 500 sites, to minimise their overheads using a clever method of table prefixing.
This was a fascinating, very technical talk which I’d recommend watching to anyone with an interest in successfully solving a huge, complex infrastructural project with modern technologies. Although we only have one Drupal site to run and not tens of thousands, some very useful advice was given based on the presenter’s experiences: Nginx is very flexible and PHP-FPM increases performance significantly (as we found with our own testing); centralised logging is vital; always use authentication on REST APIs; and the combination of cloud hosting and containerisation was excellent. If the task were to be repeated today though, one would likely use the AWS EC2 Container Service instead of Mesos and Marathon. Possibly the most important thing to remember:
“Lazy DevOps is the best DevOps!”
Headless D8 in HHVM plus Angular.js and some other things you can do with Platform.sh
Platform.sh is a deployment platform originally developed for PHP applications which now also supports Python and Node.js. It integrates with whatever Git repository you want, as well as HipChat, Jira and other tools, allowing multiple applications to be pushed into one build, e.g. front and back-end applications, and appear under separate hostnames. Platform.sh handles all the DNS and Varnish config to create these pop-up environments and replicates live database and configuration into your development area. It can also sanitise the database as it’s moved to strip out user passwords and email addresses, etc.
One really useful feature is the ability for developers to specify the version of PHP and control the php.ini file in YAML files. You can also specify which database should be set up for the environment.
The ability to control this non-code configuration and replicate a complex build process for all developers without them all needing the level of expertise to set up their own environment comes in very useful when working with multiple teams. This is especially true if external developers who don’t know your environment are involved.
The session also covered some of the performance gains that can be achieved running Drupal on HHVM (HipHop Virtual Machine), compared with PHP 7 and PHP 5.
Defense in Depth: Lessons learned securing 100,000 Drupal Sites
Data breaches can be very expensive, so it is incredibly important to ensure that security consciousness is part of our mind-set in IS Applications. Breaches typically are not due to cracking encryption and hashes or exploiting unknown vulnerabilities, but rather human error. Thought should be given to the “CIA Security Triad” of confidentiality, integrity and availability. Security lists can be used to find out about known vulnerabilities which need to be patched; these include US-CERT and CERT-EU, LWN, Drupal, and security releases by Red Hat.
The main thing we took away from this session was not the quality of the advice, which was all very sensible (do backups, patch your servers and applications, use a Version Control Repository), but the practicalities of implementing that advice. Recommendations like using 2 Factor Authentication for our SSH keys are great, but we aren’t even using SSH keys for connecting to servers. Using enterprise login services so password hashes aren’t stored locally is also sound, but only if we were to use a technology like OAuth to allow it. We need to be doing more good practice when it comes to security; a greater security consciousness within IS Applications would be a great step in the right direction.
Local vs. Remote Development: Do Both by Syncing Your Site From Kitchen to Cloud With Jenkins
There are pros and cons to developing both locally (using VMs on a developer’s PC) and remotely (using Development servers provisioned by Development Technology). CASCADE is a new tool to streamline development workflows and add CI to local development. Effectively, it is extra code which uses Ansible to spin up and configure local Jenkins and GitLab Vagrant boxes and provide an interface to them.
While everyone in the audience was using Vagrant, very few had edited a Vagrantfile or run more than one Vagrant box at a time; a tool like CASCADE could provide developers with a simpler way to have a more advanced local CI environment. I don’t think its use would be appropriate to Development Services, but some of the ideas raised were interesting, especially as we are not generally using Vagrant for local development yet.
Behat+Mink+PhantomJS = Test ALL THE THINGS!
Integration testing is a topic that pops up regularly at DrupalCon, and in retrospect it was interesting to hear this talk on the same day as another talk on unit testing. This particular session focused on the use of the BDD framework Behat for testing, coupled with Mink to simplify interaction with the browser emulator, provided in this case by PhantomJS (other browser emulators such as Selenium WebDriver can also be used).
Whilst the speaker was very engaging and did give a decent high level outline of the different components, including the Gherkin language used to define Behat tests, the outline didn’t have a clear structure, which made it difficult to get a grasp on how each component fits into the bigger picture. That in turn makes it difficult to judge whether any/all of what was demonstrated would be useful in our context. It was also disappointing to see that whilst the talk description mentioned screenshot comparison, the only real mention of this during the talk was to say that PhantomJS was not the best tool for UI comparison (one attendee suggested Wraith as an alternative). UI testing is something that we definitely need to explore further in the context of our Drupal CMS, and our current set of tools for automated testing (primarily Selenium WebDriver with test suites built in Java) may not be the best starting point. Unfortunately, although it was interesting to see Mink, which I hadn’t come across before, there was nothing in this particular talk to help us find the best approach to UI testing where there are gaps in our own test suites.
Principles of Solitary Unit Testing
Whereas the earlier session I attended on Behat with Mink and PhantomJS was concerned more with integration testing, this session explored the principles of solitary unit testing, as contrasted with sociable unit testing, where the idea is to limit what is being tested as much as possible, essentially to test one thing without “crossing boundaries” such as writing to disk or reading from a database.
The speaker provided a very clear and interesting summary of the principles of unit testing (a small illustrative sketch follows the list below), exploring aspects such as:
- the importance of testing “one concrete class” (not counting value objects as these don’t have behaviour), using “doubles” to represent dependencies and objects returned by collaborators, thereby eliminating crossing of boundaries;
- the stages of solitary unit testing, namely Arrange (setting up the context, e.g. any data required, before carrying out the test), Act (which ideally should call only one method) and Assert (to test whether the test passes);
- the principle of always asserting last, and limiting each test to one assertion, which is really a general principle rather than a hard and fast rule – it was pointed out by one attendee and acknowledged by the speaker that sometimes it is necessary to break this principle;
- ways of handling some of the complexity issues around unit testing, for example using ‘object mothers’ or ‘data builders’ to encapsulate setup, writing custom assertions to avoid multiple asserts in one method, and eliminating dependencies, all of which reduce the lines of code in the actual test and help to avoid “fragile tests” which break easily when something changes that isn’t directly related to the specific test;
- the ability of solitary unit testing to highlight bad OOP code – if the test is difficult to write, the problem could be the code.
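As a small sketch of these principles in practice (hypothetical classes, and in JavaScript rather than the PHP the speaker presumably had in mind), note the Arrange/Act/Assert stages, the single assertion placed last, and the double that avoids crossing a database boundary:

```js
// Solitary unit test sketch: one concrete class, one double, one assertion.
const assert = require('assert');

// System under test, with its collaborator injected.
class InvoiceService {
  constructor(rateRepository) { this.rateRepository = rateRepository; }
  totalWithTax(net) { return net * (1 + this.rateRepository.taxRate()); }
}

// Arrange: a double stands in for the real repository (no database read).
const stubRepository = { taxRate: () => 0.2 };
const service = new InvoiceService(stubRepository);

// Act: call exactly one method.
const total = service.totalWithTax(100);

// Assert: assert last, and only once.
assert.strictEqual(total, 120);
console.log('solitary unit test passed');
```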
The speaker noted that solitary unit testing is not a catch-all, and will not always provide the most appropriate benefit. In our particular context, end-to-end integration testing is of greater importance than unit testing as we need to ensure that the complex set of contrib and custom modules and configuration settings which comprise our central Drupal CMS function correctly when deployed together in one of our deployment environments; integration testing using Selenium Webdriver is therefore incorporated in our automated deployment process.
Notwithstanding the focus of our own test suites however, the principles explored in this session such as clear code structure, isolating specific functionality, ensuring readability and clarity of tests, minimising what is covered by one test, and limiting the assertions performed, are equally applicable. It seems to me that many of these principles are a starting point for best practice regardless of the particular type of testing being performed. This was an excellent talk which provoked an equally interesting conversation between attendees and the speaker on when it is appropriate to bend or break the principles. I highly recommend watching the session recording to anyone with an interest in automated code testing.
SmarTest: Proposal for accelerating the detection of faults in Drupal
The SmarTest module has been developed at the University of Seville, extending SimpleTest to improve automated testing for widely varying system configurations. Drupal has a high degree of configuration variability, generating many possible test cases for different configurations which can be quite difficult to cover in testing.
As part of the studies performed by the speaker, Ana, and her team at the University of Seville, a diagram was drawn up showing the relationships between a set of modules (48 in their example). After querying how this was produced, I was told that it was quite an involved manual process and, without tools to assist, would be fairly time-consuming if we wanted to have the same thing.
With the tests that they ran across various modules, they found some direct (and possibly obvious) relationships between certain aspects. They found that module size (lines of code) as well as the number of commits on a module directly related to the number of faults found in the modules which they tested, i.e. more code and/or more commits produced more faults. However, having more contributors on a module did the opposite and reduced the number of faults. It was also found that migrating the same modules to a newer version of Drupal introduced more faults again.
The SmarTest module is something that could be interesting for us to run against the configuration of EdWeb, our central Drupal CMS. It aims, among other things, to highlight the most potentially problematic modules. However, the problem of Drupal being a “variability-intensive system” is not so much of an issue for us as we don’t really expect to vary our configuration drastically or often.
Next generation graphics: SVG
SVG is making a comeback now that Flash is dying off, and high resolution mobile and touch-screen tablet devices require vector graphics to keep logos and icons sharp while keeping file size low.
While there are no SVGs in Drupal 7, Drupal 8 core is now making use of SVG assets to replace PNGs. This session covered SVG as a markup language, even how to write it by hand (if you are that way inclined), as well as the features available for animating and interacting with SVGs using JavaScript, and how well (or not so well) these features are supported by the different browsers.
There are also some quite significant security risks if you allow users to upload their own SVG files, but that aside, you should be looking to SVGs rather than icon fonts for those vector icons now.
Configuration management in Drupal 8
Configuration Management is a new feature to Drupal 8; in Drupal 7 the closest you have is the features module. This session was a quick tour of how configuration in Drupal 8 can be exported and imported using drush, how it is stored in YAML files, and where it is defined in custom modules.
Dependencies are fully managed in Drupal 8 through these YAML configuration files, and when dependencies are removed, the configuration is deleted. Demonstrations during the session showed how dependencies build up and apply as soon as you use them, for example, a role access filter being applied to a view.
Tips for best practice in changing these files and moving these files between environments were covered, as well as in-depth details of the Configuration Entity and third party settings.
Drupal architectures for flexible content
This session explored the need to understand the requirements of editors and how to deal with the demands of content editors in Drupal.
Current limitations in Drupal were also highlighted, such as the disconnection between content and layout, which is not always a problem, and also how Drupal does not currently have revision history in content editing.
Making Drupal fly – The fastest Drupal ever is here!
Quite simply, this “fastest Drupal ever” is Drupal 8.
Different caching options were discussed, and it does look like, due to the issues seen in Drupal 7 and earlier, a lot of attention has been given to performance and customisation of caching options in Drupal 8.
As mentioned in previous session notes, full page caching via reverse proxy such as Varnish is also possible in Drupal 8.
Birds of a Feather Sessions
Design and Usability Critiques
This was a great BoF discussion, not so much for new information, but confirmation that the UX approach taken during the EdWeb project was fundamentally correct, although the process of incorporating UX into an Agile project needs to be lighter, take place earlier and be more frequent. We had three-hour sessions with a large group of people, who at that time in the iteration were all under pressure to get the iteration completed.
Some recommendations to make Agile UX sessions more effective:
1. Run combined sessions with developers, business analyst, etc. but make them shorter;
2. Present results at these sessions;
3. Be clear about how UX sessions relate to prioritised stories;
4. Be clear up front about what questions the UX session should answer.
Tips:
http://www.jasonondesign.com/2014/05/05/ladder-of-control/#scrollto
https://www.usertesting.com/b/
Drupal in Higher Education
This session was set up by developers from the University of Adelaide in Australia and was well attended by representatives from Higher Education institutions in the UK and across Europe. The initial introductions showed that a wide range of Drupal experiences at various stages of maturity were represented, from the management of a small number of Drupal sites, through the distribution of a profile across more than 100 devolved Drupal sites, to the wholesale replacement of existing CMSs with a central Drupal service. As is so often the case during meetings between Drupal users having a common background, the main topics under discussion were pain points; what was apparent in the conversation that ensued was the commonality of these among the experiences of those present.
One area of particular concern was hosting, with almost everyone present agreeing that Universities often suffer from a peculiar fetish for internal hosting which can make it controversial to explore external hosting options. The main reason for this seems to be the desire to avoid exposing sensitive data, and that is clearly an important issue for many websites managed within the HE sector. The desire to keep things internal can make Drupal hosting especially difficult where there is a dependency for stability reasons on old versions of infrastructure elements such as PHP.
The approach which is being taken in Adelaide is of particular interest and something that we should explore further, especially given our own desire to look into the possibilities of configuration deployment and tools such as Puppet, Vagrant and Docker for automated deployment of the required server environments. The developers who set up the BoF have created an evolving platform for automated deployment of Drupal 8 to get around their internal hosting issues; they hope to collaborate on this with other institutions and ultimately make it available for wider use. We are not yet planning for Drupal 8, but the principles of how to manage deployment of Drupal in a devolved HE context are of interest regardless of the Drupal version being used. We have a relatively sophisticated means of deploying updates to the Distribution Profile that is associated with our central Drupal CMS, but we can do more with our automated deployment process in the admittedly more complex area of configuration and server deployment for the central CMS itself.
Another topic covered was role management – how to ensure that only the appropriate people can perform tasks such as generating a new Drupal site, or use functionality within a site. LDAP groups were discussed as one means of achieving this, with users automatically added and removed from roles within Drupal based on their LDAP group membership; this requires that groups be configured with the appropriate members, and that there are clearly defined mappings between those groups and roles in the CMS.
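As a rough sketch of the kind of group-to-role mapping discussed, written as plain PHP with invented group and role names (a real implementation would live in an LDAP integration module):

    <?php
    // Hypothetical mapping from LDAP group DNs to Drupal role machine names.
    $map = [
      'cn=web-editors,ou=groups,dc=example,dc=ac,dc=uk' => 'editor',
      'cn=web-admins,ou=groups,dc=example,dc=ac,dc=uk'  => 'site_admin',
    ];

    // On login, compute the roles implied by the user's group membership;
    // roles not in the returned list would be revoked.
    function roles_for_groups(array $ldap_groups, array $map) {
      $roles = [];
      foreach ($map as $group_dn => $role) {
        if (in_array($group_dn, $ldap_groups, TRUE)) {
          $roles[] = $role;
        }
      }
      return $roles;
    }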
Overall this was a really interesting BoF. It was great to discuss both the positive aspects and common problems associated with using Drupal in an HE context, and to hear how other institutions are deploying and using Drupal. At the end of the session, contact details were shared and this will hopefully lead to further engagement beyond our meet-up in Barcelona.
IS at DrupalCon – Sessions Day 1
On Tuesday I posted some comments on the start of DrupalCon 2015 in Barcelona, where myself and a few colleagues are spending this week. The major strands this year are Docker, performance and scalability issues, the Symfony framework in a Drupal 8 context, and using headless Drupal with an alternative toolkit to provide the front-end. So far it’s been really interesting to see how sessions on Symfony and Drupal 8 this year have progressed from last year’s DrupalCon in terms of the complexity of what is being covered. It’s also interesting to see how many attendees are using tools like Vagrant, Docker and Jenkins to take the pain out of manual configuration and deployment. We have been using automated deployment tools for application code for some time now in IS Apps; configuration management and automatic creation of server environments is something that could further streamline our deployment workflow.
Our experiences of some of Tuesday’s DrupalCon sessions are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.
Symfony for Drupal Developers
Drupal 8 makes use of many Symfony components, and this session covered the differences between the two frameworks to help decide which to use for projects. Drupal uses about a third of the Symfony components and you don’t need to know Symfony to develop Drupal.
Some differences are more obvious than others: the application entry point in Drupal 8 is index.php, whereas in Symfony it's web/app.php and web/app_dev.php. These two entry points arise from the fact that Symfony enforces a programmatic toggle between Development and Production modes: you push generated code to production heads; you do not compile on production boxes.
Drupal uses the kernel a little differently and imposes stricter coding standards. For instance Drupal always uses the View event whereas this is discouraged in Symfony. Drupal 8 coding standards are quite strict in prescribing when to use YAML or Annotations for configuration. With Symfony configuration you are free to use PHP, XML, YAML or Annotations, although best practice is to pick one and stick to it.
Coming from Drupal 7, one of the fundamental differences in Symfony is that there are no functions; it’s all methods, with only a few static functions available. All meaningful logic in Symfony is in services, which are stateless objects.
Paths and routing are handled differently in Symfony and Drupal 8. In Drupal 8, you can only use module routing.yml files or events, not annotations, XML or PHP. Also, you don’t have path nesting or slugs in Drupal 8.
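For illustration, a single route in a Drupal 8 module routing.yml file looks roughly like this, with the module, path and controller names invented:

    # mymodule.routing.yml (illustrative)
    mymodule.hello:
      path: '/hello'
      defaults:
        _controller: '\Drupal\mymodule\Controller\HelloController::content'
        _title: 'Hello'
      requirements:
        _permission: 'access content'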
While both frameworks now use Twig in the theming layer, in Drupal 8 you work with multiple Twig files, one for each element, mirroring the templating system in Drupal 7. Symfony templating uses a single file which extends parent Twig file(s) and overrides blocks defined there. Also, Drupal always requires a render array; you don't return rendered output directly from controllers.
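The render array point deserves a small sketch; in Drupal 8 a controller method returns structured data for the render pipeline rather than a rendered string (the names here are illustrative):

    <?php
    namespace Drupal\mymodule\Controller;

    use Drupal\Core\Controller\ControllerBase;

    class HelloController extends ControllerBase {

      // Return a render array; Drupal renders it, not the controller.
      public function content() {
        return [
          '#markup' => $this->t('Hello from Drupal 8.'),
        ];
      }

    }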
Drupal has multiple APIs for storing content data; Symfony has nothing built in, so you use Doctrine (or something else). Doctrine can only store primitive data and, being a stand-alone PHP project, has its own event listeners distinct from Symfony's.
This talk highlights the fact that whilst Drupal 8 is using Symfony components, these are very much used with a Drupal flavour. Comprehending Drupal 8 and how it uses Symfony requires an understanding of what has gone before in previous versions of Drupal as well as knowledge of Symfony concepts and techniques.
Estimation: from waterfall to agile
In this session, the Danish digital agency Adapt covered the background to their move from a Waterfall project methodology to an Agile approach, describing their experience of that transition and specifically how they tackled estimation. Starting from a position where they were losing money on 50% of their projects due to inaccurate estimates and the need for their customers to prioritise scope, they adopted an Agile "light" methodology, only to quickly realise that with Agile it needs to be all or nothing.
The presentation continued with an outline of the process used to define user stories, the use of planning poker, and the need for clearly defined roles and just-in-time management. The key to relative estimation, i.e. measuring feature size in story points rather than hours, is to involve all of the project team members in the process. Time-boxed planning poker sessions, with limited discussion, raised the knowledge level across the project team, leading to more accurate forecasting. The iteration or sprint velocity was then calculated by breaking the tasks (from development through to testing) down into the hours they were expected to take. If a story took longer to complete than expected, its story points did not change, but that information was used to forecast how much could be delivered within the project; this allowed the customer to prioritise what remained in the backlog.
The two main benefits derived from this change in approach were that, firstly, they were now in a position to keep to fixed budgets and, secondly, knowledge was gained during the estimation process. However, there were also negatives, primarily concerning small projects, where this approach has proved difficult to implement and there is often not enough time for people to become accustomed to it.
My main takeaways from this session are “Customise processes along the way” and “Know what you don’t know and accept that!”. Projects and customers (business units) are different and there is a need to refine and adapt the Agile process where experience shows that refinement is required. The more experience the project team have using Agile, the easier this process becomes. At the outset of a project, especially with larger scale projects, there are inevitably unknowns; it’s crucial to identify and accept these unknowns. As a project progresses, unknowns should become knowns and these can be requirements, risks or opportunities.
Configuration Deployment Best Practices in Drupal 8
This was an engaging (if somewhat caffeine-fuelled!) session. Drupal currently has a problem with the separation of configuration from content. Code is developed in DEV then pushed out to TEST, STAGING, then LIVE. Content, however, is created on LIVE and the other environments are refreshed from it. "Content" here effectively means the database, which has tables for both the created content and the configuration; but since we want Drupal configuration to be developed in DEV and tested through the environments, what is really needed is for configuration to be treated like code.
Drupal 7 does not have a good way to deal with this, although the Features contrib module can be used, as we are doing in our own Drupal CMS, to make our configuration fully deployable through our automated deployment process. In Drupal 8, configuration management in a managed workflow seems to offer many benefits over the older version. Configuration is separated out entirely into YAML files. These files can be committed to a version control system like code, providing accountability and the ability to audit configuration changes. All of this makes continuous integration much easier, and may make configuration rollback possible. The YAML configuration files are imported into the database for performance, providing a more robust method of managing configuration than hook_update_N, or than using the Features module in ways for which it wasn't strictly designed; drush has also been extended to work with this functionality.
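A minimal sketch of the workflow described, assuming the current names of the Drupal 8 drush configuration commands (the sync directory path depends on local settings):

    # On DEV: export active configuration to YAML and commit it like code.
    drush config-export
    git add config/sync && git commit -m "Export configuration changes"

    # On TEST or LIVE: deploy the code, then import the committed YAML.
    drush config-import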
This was an interesting talk which made some very useful suggestions for how configuration should be managed between environments when using Drupal 8, as well as how exceptions can be handled. This area should be explored further when planning for the Drupal 8 upgrade; we should look into replacing our reliance on the Features module for exported configuration with the equivalent YAML-based configuration in Drupal 8.
Altering, Extending and Enhancing Drupal
One of the many sessions on what is new in Drupal 8 versus Drupal 7, this was an interesting talk giving a high level summary of the mechanisms for customising and extending Drupal 8. Topics covered were plugins, services, events and hooks. In Drupal 8, the principle for plugins is “Learn once, apply everywhere”, moving away from the inconsistencies between modules and how they are used by having plugin classes implement an interface, so there is a common approach. Services in Drupal 8 are very well decoupled and can easily be swapped out, for example for testing purposes. Event handling allows modules to react to Drupal application actions and/or conditions in a standard manner that is common in OOP rather than using hooks to react when something happens. Hooks still exist in Drupal 8, but are primarily for modifying metadata which has been gathered by other means, or to alter forms.
One interesting question raised at this session was how to determine whether a feature should be implemented as a plugin or a service. A useful rule of thumb is that a service is something you would usually have only one of in any Drupal instance, for example a caching service.
This succinct outline of mechanisms for extending Drupal 8 highlighted the fact that these mechanisms are less specific to Drupal than in previous versions. Hooks remain, but whereas previously they would have been used for everything, Drupal 8 leverages Symfony to add new ways of doing things that help improve code structure and re-usability. The patterns and techniques are familiar from other contexts where OO is used, which brings a consistency and helps developers to better avoid the unnecessary, and at times frustrating and confusing, proliferation of different ways of doing things. These approaches also help with documentation – creating common patterns for implementing new modules means that Drupal developers are not so much at the mercy of how good the documentation for a particular module is. All of these factors should improve the development experience in Drupal 8.
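As a flavour of how decoupled these services are, a module declares one in a few lines of YAML, and the class behind the service name can be swapped without touching the code that consumes it; this is a generic sketch rather than an example from the session:

    # mymodule.services.yml (illustrative)
    services:
      mymodule.greeter:
        class: Drupal\mymodule\Greeter
        arguments: ['@current_user']

Consumers ask the service container for mymodule.greeter rather than constructing the class themselves, which is what makes substituting a test double straightforward.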
Fundamentals of Front-End Ops
As more application logic is being handled client-side, Front-end Ops is a response to the proliferation of front-end tools and frameworks. This session was not about the Drupal framework, but instead looked at tools for automating front-end development tasks, managing dependencies and generating scaffolding.
For scaffolding tasks, Yeoman was demonstrated. Yeoman recommends that your workflow involves Bower for dependency management and Grunt or Gulp for task automation. The session covered installing and using Yeoman, Bower, Grunt and Gulp as well as comparing the merits of Grunt and Gulp.
The talk covered the BBC’s Wraith, which leverages PhantomJS and SlimerJS to provide visual diffs of screenshots between two environments, as well as other visual regression tools, namely Huxley and PhantomCSS. There was also a discussion of the test rendering engines available: PhantomJS, SlimerJS, CasperJS and GhostLab.
Finally some of the front-end debugging tools available were covered, including Chrome DevTools Remote Debugging which allows you to connect a mobile or tablet to your desktop machine and use the development tools on the desktop browser to inspect the DOM, etc, on the mobile device.
Docker powered team and deployment
This session gave a basic overview of the features and benefits of Docker and of infrastructure as code. It focused on Bowline as an easy way to get started using Drupal on Docker, offering great flexibility with minimal requirements via a suite of Bash scripts that can be included in your Drupal code repositories. The scripts add a method for container installation, and can also be used to "hoist" (start up) other containers at the same time for local development, such as a Behat container for testing. The aim is to simplify the configuration and linking of containers for Drupal setups. This definitely looks like the way forward for the provisioning and support of, at the very least, our Dev environments. Taking the paradigm further than just sandboxes for development, the ability to move Docker containers through an environment pipeline from Dev to Test and into Production was very interesting and worth further investigation.
While much of the session was used to introduce Docker, it was interesting to see the various methods, like Bowline, that are being used to enable developers to work with Docker containers as the next step on from Vagrant. It was also interesting that almost everyone in the room was using Vagrant and had Jenkins as their CI server, and about half were developing on Linux – none of which are currently true for Development Services in general.
Using Docker (or something similar) could be beneficial for us, mainly because it would give us readily available, standard and up-to-date environments for development or support use, although we do already achieve something quite similar with virtual machine deployment environments on local machines.
Docker in the DrupalCI test infrastructure
As seems to be the current standard, this Docker session also started with the obligatory shipping metaphor-laden introduction to containerisation, which I won't repeat (see https://www.docker.com/whatisdocker for the official version). DrupalCI is a project to make it easier for developers to do local testing, and to enable testing of different combinations of PHP versions and database backends. It works by having a base Docker image, from which a PHP base image and a database base image are created. With only small adjustments, containers with variations of these, such as PHP 5.5 and PHP 5.6 environments, can be produced.
The talk also raised important security points arising from the use of containerisation: one needs to think about running a private image registry (an 'IS Apps Docker Hub'); SELinux should be used to reduce the possibility of malicious containers breaking out into the host; and taking responsibility for keeping both the images and the running containers up to date is also vital.
While it is frustrating to keep seeing the same Docker introductions, it is interesting that so many sessions this year have been dedicated to the technology, showing that it is being used by many in the community.
Solving Drupal Performance and Scalability Issues
In this session, Tine Sørensen drew on her years of experience in optimising performance for Drupal sites and troubleshooting scalability issues to highlight some of the techniques that can be used to diagnose issues, pick off the 'low-hanging fruit' and achieve great improvements without great expense. There is no real value in spending six months rewriting some aspect of a module that is not performant if there is only a very small improvement at the end of that time. The recommended starting point when a performance issue is found is to collect data from the site, analyse it, choose where to apply effort, prioritising where the ratio of gain to effort is greatest, and then repeat the process until performance is satisfactory.
The core message of this talk was the importance of collecting the data to demonstrate what is actually happening, and the huge gains in efficiency when pinpointing performance issues that can be achieved simply by using monitoring tools. Tine focused on New Relic as that is where her experience lies. As we found when we had external consultancy for our Drupal CMS project, New Relic is the tool of choice for monitoring Drupal and diagnosing performance issues; the Pro version has even more functionality, such as XHProf-like profiling. Like us in IS Apps, 50% of the audience were using New Relic for Drupal monitoring. Monitoring tools like New Relic can give an extremely useful picture of performance bottlenecks. For example, using the Pro version, it's possible to see a list of the PHP functions being called, ordered by execution time; this shows up any particular function that might be causing problems. A developer can then go straight to the function in question and analyse the code to identify any issues. We have used this technique ourselves whilst developing our Drupal CMS and it certainly can save hours, if not days, of time spent drilling down through XHProf reports.
Examples of quick wins were also provided, such as switching from GD to ImageMagick, disabling Views UI, tuning caching and tuning queries that are performing poorly. Of these, the GD to ImageMagick switch was the only one unfamiliar to us. It was particularly interesting to see that the server settings and tuning recommendations correspond to those we already use internally; it is useful to have our current approach validated by a speaker who is an experienced consultant.
For me, the main takeaways from this talk were that it is absolutely essential to understand what is happening on the servers when performance is poor, and that the less time you have to spend on collecting that data, the more quickly and efficiently you can resolve the problem. It’s also important not to simply throw hardware at a performance issue; that can resolve things in the short term, but ultimately it only masks the problem, particularly if that problem is not fully understood, with the risk that it could resurface in a more damaging way in future.
Drupal 8 Plugin Deep Dive
This session covered the new Plugin system in Drupal 8, which replaces the hook_info() and hook_info_alter() pattern. This is one of the areas where Drupal differs from other CMSs and frameworks; Symfony bundles are hard-coded, whereas Drupal plugins are configurable and discoverable.
Plugins do away with many of the Drupal 7 hooks in favour of the new PluginManagerInterface model, and examples of these in core were covered.
Plugin autoloading, dependency injection, service containers and annotations were all covered before the session moved on to demonstrations of building your own plugins.
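To give a feel for the pattern, here is a minimal sketch of a custom block plugin of the kind built in the demonstrations, with invented identifiers:

    <?php
    namespace Drupal\mymodule\Plugin\Block;

    use Drupal\Core\Block\BlockBase;

    /**
     * Discovered via its annotation rather than a hook_info() implementation.
     *
     * @Block(
     *   id = "mymodule_hello",
     *   admin_label = @Translation("Hello block")
     * )
     */
    class HelloBlock extends BlockBase {

      // Every plugin of a given type implements the same interface,
      // here inherited from BlockBase.
      public function build() {
        return ['#markup' => $this->t('Hello from a plugin.')];
      }

    }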
The session was very technical, covering low-level code samples in detail. It was a good companion piece to the earlier session on altering and extending Drupal 8.
Drupal 8 The Backend of Frontend
The Drupal 8 theming layer has been re-written and now uses Twig as its template engine. Theme functions are pretty much done away with, replaced by Twig templates, and the theme process hooks are gone too now that Twig is used. You still have the two levels of template_preprocess and hook_preprocess hooks, but everything they place in the variables array now needs to be a render array.
The session also covered writing Twig templates and how to extend Twig with your own custom functions/filters in Drupal.
Drupal enables Twig’s auto-escaping; the security issues around this, and how to mark strings as safe so they are not escaped, were also covered.
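A small sketch of the preprocess pattern described, with an invented variable name:

    <?php
    // In mytheme.theme: expose an extra variable to node.html.twig.
    function mytheme_preprocess_node(array &$variables) {
      // Values added here should be renderable, e.g. a render array.
      $variables['byline'] = [
        '#markup' => t('Published by the Web Team'),
      ];
    }

In the template the variable is then printed with {{ byline }}, auto-escaped by Twig unless explicitly marked safe.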
Symfony2: The journey from the request to the response
This session was presented by the Head of Sensio Labs (creators of Symfony), Sarah Khalil. It covered the components involved in processing an HTTP request, starting with the front controller (app.php in Symfony, index.php in Drupal) passing the Symfony HttpFoundation request through HttpKernel.
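In outline, the Symfony2 front controller does something like the following; this is a simplified sketch of the standard edition's app.php, omitting the bootstrap cache and autoloader setup:

    <?php
    // web/app.php (simplified)
    use Symfony\Component\HttpFoundation\Request;

    require_once __DIR__.'/../app/AppKernel.php';

    $kernel = new AppKernel('prod', false);
    $request = Request::createFromGlobals();
    // Routing, controller resolution and kernel events all happen in handle().
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);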
The Routing component YAML files are named and located differently in Drupal, but in essence the process is unchanged for the router passing the request through the controller and on to returning an HttpFoundation response.
Symfony’s Event Dispatcher component was covered in some detail with examples of event listeners and subscribers, the differences between them, and examples of how Drupal core implements these.
After a list of the seven kernel events you should know, together with the Symfony components that Drupal uses, the Dependency Injection component's three key concepts of service container, services and parameters were detailed.
Finally, there was a quick look at the concepts in Twig, with the caveat that Drupal does not use all of the features available in Twig.
Caching at the Edge: CDNs for everyone
This session, although fairly technical, was well presented by clearly very knowledgeable speakers and touched on upcoming technology such as service-worker (client-side caching), ESI and Big Pipe.
Content Delivery Networks are external, multi-sited hosts which enable content to be delivered with lower latency from caches local to the user. They can be used just for static assets (JS, CSS, images), and also for dynamic content, although the latter is far more complicated. As proven by our infrastructure with Varnish, delivering anonymous content from a cache is fairly simple as it rarely changes; this advanced session focused mainly on caching authenticated content with CDNs.
As we have seen in earlier DrupalCon sessions, Drupal 7 is limited in what it can offer in this context; it can only use "max-age" caching, or scripted/manual purging of stale content from CDNs. Drupal 8 looks to have taken a big step forward in terms of the effects of caching on website performance, providing three main cache invalidation techniques:
- cache tags, which show where data dependencies for caching exist;
- cache contexts, which give the context of dependencies for requests;
- cache max-age, as found in Drupal 7, which gives the time dependencies of what is cached.
Together these enable placeholders and auto-placeholdering, whereby Drupal 8 “knows” what content makes up the page and so knows which can be retrieved from a CDN and which needs to be dynamically requested from Drupal.
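On a render array, the three techniques appear as cacheability metadata; the tag and context values below are illustrative:

    <?php
    $output = '<p>Markup prepared earlier.</p>';
    $build = [
      '#markup' => $output,
      '#cache' => [
        'tags' => ['node:42'],         // invalidated when node 42 changes
        'contexts' => ['user.roles'],  // varies by the viewing user's roles
        'max-age' => 3600,             // time-based expiry, as in Drupal 7
      ],
    ];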
Using this mechanism, it's possible to perform Edge Side Includes with authentication being cached per session. This could be used with Varnish rather than a CDN, enabling us to cache a lot of our HTTPS traffic, which is where we currently experience our worst performance. The demonstration of the response time improvements from caching user-specific content was very impressive. While complicated, I strongly recommend investigating the use of Varnish for authenticated users when upgrading to Drupal 8. This would perhaps be one of the main reasons for considering an upgrade when Drupal 8 is finally released.
BigPipe ( https://github.com/bigpipe/bigpipe ), a node.js framework which can be used with Drupal 8, was also demonstrated. It breaks up pages into smaller chunks so they can load asynchronously, decreasing the time for the first elements of page content to load.
Cut the crap. Practical tips and real world examples for removing waste from your development process.
This session dealt with a rather different approach to managing projects, with the caveat that it covers smaller pieces of work. Basically, anything that can be deemed not to add tangible benefit should be removed from the project. We all know that getting decision makers to make decisions quickly can be challenging, so the first thing is to identify your decision maker. As an example, the presenter, Jason Mark, described a project that was delivered within four weeks: the first week was used for requirements, design, templates and starting the build, and the remaining three weeks involved working with the client/partner to test and refine what was delivered.
If this fast-track approach is to work, several things need to fit, the main four being:
- Decision maker;
- No hidden stakeholders;
- The right people – low ego and ability to be flexible are key characteristics;
- The right technology – the technology must fit the requirements and the requirements must fit the technology.
If there are blockers, especially people, find ways to turn them into champions by using positive creative language.
The top takeaways from this session are twofold. Firstly, take a step back and look at the project in terms of the four points above. Can these points be answered positively? If not, what can be done to turn this around and make the process fit? Secondly, because this is a fast approach to turning a piece of work around, the planning will not be complete before the work starts. This makes change inevitable, which needs to be embraced at the outset and not seen as something negative. To repeat a quote attributed to Buddha,
“Change is never painful, only the resistance to change is painful”
Above all, focus only on things that add value! As Project Manager you can ask this every day!
Headful Drupal
This session was about headless Drupal. Perhaps the only benefit for us here is the security advantage of removing some front-end admin components. Alternative means of achieving what we require do, however, seem quite time consuming, and careful consideration of the particular development context would be needed before going down the headless Drupal route.
Visualizing Logfiles with ELK Stack
This was an interesting session that would have benefited from a concrete demonstration. The material presented was somewhat abstract, and the potential escalation in complexity, along with the associated infrastructure requirements of such a system, seemed a little daunting without a real-world example to tie it to. However, a centralised logging system would be a great asset to IS Apps even beyond the context of Drupal and EdWeb. An ELK stack, in some incarnation, should be a serious consideration.
Migrating a running service (Mollom) to AWS without service interruptions and reduce costs
A disappointing session that was difficult to generalise from, and only seemed relevant in terms of the specific service that the speakers were trying to move into the Amazon Cloud. The Mollom spam protection system seemed far removed from anything managed by IS Apps. It was claimed that the switch to AWS reduced the number of alerts that were received by the Ops team, but no other metrics were presented in terms of savings to the business as a whole. The takeaway from this session seemed simply to be that you need to think differently about services in AWS due to the ephemeral nature of the server instances, these being discarded and new ones spun up any time configuration changed or applications crashed.
Lightning talks
At the Lightning Talks session, three commercial companies pitched ideas and provided insight into their products and services.
PhpStorm for Drupal Development
This is a good-looking and powerful tool which could be useful for us when working with PHP in Drupal. It has Drupal-specific functionality, such as the ability to track hooks across your codebase. It also incorporates Git functionality, providing a graphical means of diffing files and tracking back through changes. In general it seems quite neatly put together and well thought out.
Interoute Virtual Data Centre – Proven to be the fastest cloud
One of the main advantages in their pitch was hosting in remote locations to reduce the latency of long-distance connections. I cannot see that this is applicable to us.
How to setup Nginx/Varnish Full Page Caching for Drupal
The main point taken from this session is that full page caching via a reverse proxy such as Varnish or Nginx is not well supported in Drupal 7, but will be in Drupal 8.
IS at DrupalCon – Hola Barcelona!
It’s 7.30 in the morning in our hotel and I have already overloaded my breakfast plate with so much that I need help to finish what I can’t eat. It must be DrupalCon!
This week myself and some colleagues in IS have escaped from autumnal Edinburgh to balmy Barcelona for this year’s European Drupal conference, DrupalCon 2015. For three days we are in the midst of a whirlwind of sessions on all aspects of working with Drupal, many of which touch on issues and experiences that are common to all web developers and site owners.
Our journey to create a new Drupal CMS for the University’s main website began before DrupalCon in Prague in 2013. Since last year’s DrupalCon in Amsterdam our new Drupal CMS has moved from its embryonic state following the initial MVP release in 2014 to a production system with a name, EdWeb, and upwards of 140 sites have so far been migrated across to the new system. You can read more about our Drupal journey at the University Website Programme site in EdWeb.
Soon we embark on the exciting process of planning the next set of features to add to our shiny new responsive website and I’m sure that as before we will find much to inspire us at DrupalCon. It’s also a fantastic opportunity to explore how other organisations are managing scalability and performance when running an Enterprise level CMS. Judging by the number of cloud hosting companies in the exhibition hall this year, the answer to that problem for many organisations is to let someone else feel at least some of the pain for you!
As we did last year, over the next few days we will be gathering together short summaries of some of the sessions we attend here at DrupalCon, with our own reflections on what we see and hear. This year we are fortunate enough to have brought a group of colleagues who have played a range of roles in the creation of EdWeb, from development through to project management and production staff and there is something at DrupalCon for everyone!
But before we share any session notes, the Prenote and Keynote sessions from the first day of DrupalCon have already given food for thought.
At the centre of yesterday's 8am Prenote, which is always an entertaining way to start DrupalCon, was the notion of dreaming the impossible dream, expressed charmingly in song by Adam Juran, in this case the seemingly impossible dream of getting Drupal 8 released. Having been involved with the huge undertaking of building EdWeb from the outset, that sentiment was very familiar! Throughout the ups and downs of such a large-scale and complex Agile project, it's been important to keep our end goal in sight, and to believe that what we are trying to achieve with EdWeb is both possible and necessary. Now we have our production CMS, and it was announced at DrupalCon today that Release Candidate 1 for Drupal 8 will ship on 7th October 2015. Those impossible dreams can be realised!
The morning continued with the opening Keynote by Drupal founder Dries Buytaert, and once again we see parallels between the process of getting Drupal 8 to a shippable release and our own experience of building a large scale CMS. This year, Dries’ theme was “We need to talk about that”, covering some of the uncomfortable questions in the Drupal community. Two aspects of his talk in particular struck a chord as they relate to problems that we have also had to solve.
In talking about the extended timescale that’s been necessary to get Drupal 8 ready, Dries proposed an alternative model for Drupal’s code management, a branching strategy rather than the current approach of having all development on the trunk. This would embrace the difficult reality of getting all features ready for a release; instead, what goes into the release is only what is ready. During the course of our own CMS development, we have had to solve exactly that problem, releasing feature bundles into the production system at the end of each development iteration without releasing code that is not ready to ship. Our initial workflow was to do development on the trunk, but we quickly realised that this creates problems, particularly as we run our migration project in parallel with ongoing CMS development work. We also had to allow for the release of a security patch for Drupal itself, or for a contrib module we are using, which would need to take precedence over any feature development and go into production sooner, without including features that are not ready. To solve those problems in a way that would support our automated deployment process, we moved to a workflow that turned out to be a variation on Gitflow Workflow and this has served us extremely well, allowing us to manage parallel development work and release only code that is production-ready into our live system. It was very interesting to hear how Dries has come to the same conclusion as us – that for large-scale development work involving multiple developers and many features with a complex life cycle, feature branches are the way to go. The detail of our own approach is a topic for a future post!
Another theme of Dries’ talk was usability in Drupal and how features that improve the experience for editors can be sacrificed in favour of features that add new functionality. In creating EdWeb, we have involved users from the outset, whether via quick paper prototyping sessions to determine the best approach for a particular interface detail, or by running sessions where all members of our team, including developers, were able to watch editors actually use EdWeb so we could pinpoint usability problems. That process has contributed hugely to the usability of EdWeb, but it’s undoubtedly true that when the pressure is on to develop new features, it’s very difficult to hold to the discipline of prioritising usability. That problem is not unique to Drupal development; it’s a perennial problem that is particularly troublesome for Agile projects.
So there it is – before we even got to 10am on the first morning of DrupalCon, there was already a lot to think about! Watch this space for daily posts with session notes from our team. And if you want to see what we’re so excited about, check out the DrupalCon YouTube channel for session recordings!
IS at DrupalCon Amsterdam – Day 2
Yesterday I posted some session summaries from the first full day of DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. DrupalCon Day 2 began on Wednesday with a Keynote from Cory Doctorow, a thought-provoking talk on freedom and the internet, a subject on which some of us had previously heard him speak at the IT Futures Conference in 2013, and one which has significance well beyond the context of Drupal for anyone who uses the web. The broader relevance of Cory's speech is reflected in many of the sessions here at DrupalCon; topics such as automated testing or developments in HTML and CSS are of interest to any web developer, not just those of us who work with Drupal. In particular, the very strong DevOps strand at this conference contains much that we can learn from and apply to all areas of our work, not just Drupal, whether we are developing new tools or managing services.
Our experiences of some of Wednesday’s DrupalCon sessions are outlined below. Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!
Development Processes, Deployment and Infrastructure
How Cultivating a DevOps Culture will Raise your Team to the Next Level
The main idea explored in this session was how to create a single DevOps team rather than having separate teams. DevOps is a movement, a better way to work and collaborate. Rather than creating a new team, the current teams should work together with fewer walls between them. The responsibility for adding new features and keeping the site up can then be shared, but this does mean that information needs to be shared between the teams to enable meaningful discussion.
The session was very dense and covered many aspects of implementing a DevOps culture, including:
- common access to monitoring tools and logging in all environments;
- the importance of consistency between environments and how automation can help with this;
- the need for version control of anything that matters – if something is worth changing, it is worth versioning;
- communication of process and results throughout the project life-cycle;
- infrastructure as code, which is a big change but opens up many opportunities to improve the repeatability of tasks and general stability;
- automated testing, including synchronisation of data between environments.
The framework changes discussed here are an extension of the road we are already on in IS Apps, but the session raised many suggestions and ideas that could usefully influence the direction we take.
Using Open Source Logging and Monitoring Tools
Our current Drupal infrastructure is configured for logging in the same way as the rest of our infrastructure – in a very simple, default manner: Apache access and error logs and MySQL slow query logs in the default locations, but not much else. Varnish currently doesn't log to disk at all, as its output is too vast to search. If we are having an issue with Apache on an environment, this could mean manually searching through log files on four different servers.
Monitoring isn't set up by default by DevTech for our Linux hosts – we would use Dell Spotlight to diagnose issues, but it isn't something which runs all the time. IS Apps is often unaware that there is an issue until it is reported.
We are able to solve these issues by using some form of logging host. This could run a suite of tools, such as the 'ELK stack', which comprises Elasticsearch, Logstash and Kibana.
By using log shipping, we can copy syslog files and other log files from our servers to our log host. Logstash can then filter these logs from their various formats to a standard type, which Elasticsearch, a Java tool based on the Lucene search engine, can then search through. This resulting aggregated data can then be displayed using the Kibana dashboard.
We can also use these log “monitors” to create metrics. Logstash can write out to Graphite which can act as a counter of this data. Grafana acts as a dashboard for Graphite. As well as data from the logs, collectd can also populate Graphite with system data, such as CPU and memory usage. A combination of these three tools could potentially replace Spotlight for some tasks.
We need this. Now. I strongly believe that our current logging and monitoring is insufficient, and while all of this is applicable to any service that we run, our vast new Drupal infrastructure particularly shows the weaknesses in our current practices. One of the five core DevOps "CALMS" values is Measurement, and I think that an enhanced logging and monitoring system will greatly improve the support and diagnosis of services for both Development and Production Services.
Drupal in the HipHop Virtual Machine
When it comes to improving Drupal performance, there are three different areas to focus on. The front end is important as render times will always affect how quickly content is displayed. Data and IO at the back end is also a fundamental part; poor SQL queries for example are a major cause of non-linear performance degradation.
While caching will greatly increase page load times, for dynamic content which can't be cached, the runtime is a part of the system which can be tuned. The original version of HipHop compiled PHP sites in their entirety to a C++ binary. The performance was very good, but it took about an hour to compile a Drupal 7 site and the result was a binary file of around 1 GB. To rectify this, Just In Time compilation techniques similar to those of the Java Virtual Machine were introduced in the HipHop Virtual Machine (HHVM), which runs as a FastCGI module.
Performance testing has shown that PHP 5.5 with OPcache is about 15% faster than PHP 5.3 with APC, which is what we are currently using, and HHVM 3.1 shows about the same improvement again over PHP 5.5. However, despite the faster page load times, HHVM might not be perfect for our use. It is built around Hack, a strongly typed variant of PHP, and it doesn't support all elements of the PHP language. It is still very new and the documentation isn't great, but this session demonstrates that it is worth thinking about alternatives to the default PHP that is packaged for our Linux distribution. There are also other PHP execution engines: PHPng (on which PHP 7 will be based), HippyVM and Recki-CT.
In IS Apps, we may want to start thinking about using the Red Hat Software Collections repository to get access to a supported, but newer, and therefore potentially more performant, version of PHP.
Content Staging in Drupal 8
This technical session provided a very nice overview of content staging models and how these can be implemented in Drupal 8. There was a presentation of the core and contrib modules used, as well as example code. The process works by comparing revisions and their changes using hash codes and then choosing whether to push them to the target websites.
What I would take from this session is that it will be feasible to build content staging in Drupal 8 using several workflows, from simple staging-to-production, up to multiple editorial sandboxes pushing to production, or a central editorial hub feeding multiple production sites. One understandable caveat is that the source and target nodes must share the same fields, otherwise only the source fields will be updated, but this can be addressed with proper content strategy management.
Whilst this session focused on Drupal 8, the concepts and approach discussed are of interest to us as we explore how to replicate content in different environments, for example between Live and Training, in the University’s new central Drupal CMS.
Testing
Automated Frontend Testing
This session explored three aspects of automated testing: functional testing, performance testing and CSS regression testing.
From the perspective of developing the University’s new central Drupal CMS, there were a number of things to take away from this session.
In the area of functional testing, we are using Selenium WebDriver test suites written in Java to carry out integration tests via Bamboo as part of the automated deployment process. Whilst Selenium tests have served us well up to a point, we have encountered some issues when dealing with JavaScript-heavy functionality. CasperJS, which uses the PhantomJS headless WebKit browser and allows scripted actions to be tested using an accessible syntax very similar to jQuery, could be a good alternative tool for us. In addition to providing very similar test suite functionality to what is available to us with Selenium, there are two features of CasperJS that are not available with our current Selenium WebDriver approach:
- the ability to specify browser widths when testing in order to test responsive design elements, which was demonstrated using picturefill.js, and which could prove invaluable when testing our Drupal theme;
- the ability to easily capture page status to detect, for example, 404 errors, without writing custom code as with Selenium.
For these reasons, we should explore CasperJS when writing the automated tests for our Drupal theme, and ultimately we may be able to refactor some of our existing tests in CasperJS to simplify the tests and reduce the time spent on resolving intermittent Selenium WebDriver issues.
On the performance testing front, we do not currently use any automated testing tools to compare such aspects of performance as page load time before and after making code changes. This is certainly something we should explore, and the tools used during the demo, PageSpeed and Phantomas, seem like good candidates for investigation. A tool such as PageSpeed can provide both performance metrics and recommendations for how to resolve bottlenecks. Phantomas could be even more useful as it provides an extremely granular variation on the kind of metrics available using PageSpeed and even allows assertions to be made to check for specific expected results in the metrics retrieved. On performance, see also the blog post from DrupalCon day 1 for the session summary on optimising page delivery to mobile devices.
Finally, CSS regression testing with Wraith, an open source tool developed by the BBC, was demonstrated. This tool produces a visual diff of output from two different environments to detect unexpected variation in the visual layout following CSS or code changes. Again, we do not do any CSS regression testing as part of our deployment process for the University’s new central Drupal CMS, but the demo during this talk showed how easy it could be to set up this type of testing. The primary benefit gained is the ability to quickly verify for multiple device sizes that you have not made an unexpected change to the visual layout of a page. CSS regression testing could be particularly useful in the context of ensuring consistency in Drupal theme output following deployment.
I can highly recommend watching the session recording for this session. It’s my favourite talk from this year’s DrupalCon and worth a look for any web developer. The excellent session content is in no way specific to Drupal. Also, the code samples used in the session are freely available and there are links to additional resources, so you can explore further after watching the recording.
Doing Behaviour-Driven Development with Behat
Having attended a similar, but much simpler and more technically focused, presentation at DrupalCamp Scotland 2014, my expectation of this session was to better understand Behaviour-Driven Development (BDD) and how Behat can be used to automate testing using purpose-written scripts. The session showed how BDD can be integrated easily into Agile projects because its main source of information is discussion of business objectives. In addition to user stories, examples were provided to better explain the business benefit.
I strongly believe that this testing process is something to look deeper into as it would enable quicker, more comprehensive and better documented user acceptance testing to take place following functionality updates, saving time in writing long documents and hours of manual work. Another clear benefit is that the examples being tested reflect real business needs and requests, ensuring that deliverables actually follow discussed user stories and satisfy their conditions. Finally, this highlights the importance of good planning and how it can help later project stages, like testing, to run more smoothly and quickly.
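To illustrate, a Behat step definition is an annotated PHP method that turns a business-readable scenario line into an executable check; this sketch assumes Behat 3, with invented step text and a placeholder standing in for a real Mink call:

    <?php
    use Behat\Behat\Context\Context;

    class FeatureContext implements Context {

      /**
       * Matches a scenario line such as:
       *   Then the page title should be "Student funding"
       *
       * @Then the page title should be :title
       */
      public function thePageTitleShouldBe($title) {
        if ($this->currentPageTitle() !== $title) {
          throw new \Exception(sprintf('Expected page title "%s".', $title));
        }
      }

      // Placeholder: a real context would read this from the browser session.
      private function currentPageTitle() {
        return '';
      }

    }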
UX Concerns
Building a Tasty Backend
This session was held in one of the smaller venues and was hugely popular; there was standing room only by the start, or even “sitting on the floor room” only. Obvious health and safety issues there!
The focus of this session was to explore Drupal modules that can help improve the UX for CMS users who may be intimidated by or frankly terrified of using Drupal, demonstrating how it is possible to simplify content creation, content management and getting around the Admin interface without re-inventing the wheel.
The general recommended principle is “If it’s on the page and it really doesn’t need to be, get rid of it!”. Specific topics covered included:
- using the Field Group module to arrange related content fields into vertical tabs, simplifying the user experience by showing only what the user needs to see;
- disabling options that are not really required or don’t work as expected (e.g. the Preview button when editing a node) to remove clutter from the interface;
- using Views Bulk Operations to tailor and simplify how users interact with lists of content;
- customising and controlling how each CMS user interacts with the Admin menu system using modules such as Contextual Administration, Admin Menu Source and Admin Menu Per Menu.
The most interesting thing about this talk in light of our experience developing the University’s new central Drupal CMS is how closely many of the recommendations outlined in this session match our own module selection and the way in which we are handling the CMS user experience. It is reassuring to see our approach reflected in suggested best practices, which we have come to through our knowledge and experience of the Drupal modules concerned, combined with prototyping and user testing sessions that have at times both validated our assumptions and exposed flaws in our understanding of the user experience. As was noted in this session, “Drupal isn’t a CMS, it’s a toolkit for building a CMS”; it’s important that we use that toolkit to build not only a robust, responsive website but also a clear, usable and consistent CMS user experience.
Project Management and Engagement
Getting the Technical Win: How to Position Drupal to a Sceptical Audience
This presentation started with the bold statement that no one cares about the technology, be it Drupal, Adobe, Sitecore or WordPress. Businesses care about solutions, and Drupal can offer the solution. Convincing people is hard; removing identified blockers is the easier bit.
In order to understand the drivers for change we must ask the correct questions. These can include:
- What are the pain points?
- What is the competition doing?
- Most importantly, take a step back and don't dive into a solution immediately.
Asking these kinds of questions will help build a trusted relationship. To this end, it is sometimes necessary to be realistic, and sometimes there is a need to say no. Understanding what success will look like, and what happens if change is not implemented, are two further key factors.
The presentation then moved on to technical themes. It is important to acknowledge that some people have favoured technologies. While Drupal is not the strongest technology, it has the biggest community and, with that, huge technical resources, ensuring longevity and support. Another common misconception is around scalability; however, Drupal's scalability has been proven.
In the last part of the presentation, attention turned to the sales process, focussing on the stages and technicalities involved in closing a deal. The presentation ended with a promising motto: "Don't just sell, promise solutions instead."
Although this was a sales presentation it offered valuable arguments to call upon when encouraging new areas to come aboard the Drupal train.
Looking to the Future
Future-Proof your Drupal 7 Site
This session primarily explored how best to future-proof a Drupal site by selecting modules from the subset that have either been moved into Drupal core in version 8 or have been back-ported to Drupal 7. We are already using most of the long list of modules discussed here for the University's new Drupal CMS. For example, we recently implemented the Picture and Breakpoints modules to meet responsive design requirements, both of which have been back-ported to Drupal 7. This gives us a degree of confirmation that our module selection process will be effective in ensuring that we future-proof the University's new central Drupal CMS.
In addition to the recommended modules, migrate was mentioned as the new upgrade path from Drupal 7 to Drupal 8, so we should be able to use the knowledge gained in migrating content from our existing central CMS to Drupal when we eventually upgrade from Drupal 7 to Drupal 8.
Symfony2 Best Practices from the Trenches
The framework underpinning Drupal 8 is Symfony2, and whilst we are not yet using Drupal 8, we are exploring web development languages and frameworks in other areas, one of which is Symfony2. As Symfony2 uses OO, it’s also useful to see how design patterns such as Dependency Injection are applied outside the more familiar Java context.
The best practices covered in this session seem to have been discovered through the bitter experience of the engaging presenter, and many of them are applicable to other development frameworks. Topics covered included:
- proper use of dependency injection in Symfony2 and how this can allow better automated testing using mock DB classes (sketched in the example after this list);
- the importance of separation of concerns and emphasis on good use of the service layer, keeping Controllers ‘thin’;
- appropriate use of bundles to manage code;
- selection of a standard method of configuration to ensure clarity, readability and maintainability (XML, YAML and annotations can all be used to configure Symfony2);
- the importance of naming conventions;
- recommended use of Composer for development using any PHP framework, not just Symfony2.
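The first two points can be sketched together in a few lines; the class names here are invented for illustration:

    <?php
    use Symfony\Component\HttpFoundation\JsonResponse;

    interface ReportRepositoryInterface {
      public function summarise();
    }

    class ReportController {

      private $reports;

      // The repository is injected, so tests can pass in a mock instead
      // of a real database-backed implementation.
      public function __construct(ReportRepositoryInterface $reports) {
        $this->reports = $reports;
      }

      // A thin controller: delegate the real work to the service layer.
      public function summaryAction() {
        return new JsonResponse($this->reports->summarise());
      }

    }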
I have attended two or three sessions which talk about Symfony2 at this conference as well as a talk on using Ember.js with headless Drupal. It’s interesting to note that whilst there are an increasing number of web development languages and tools to choose from, there are many conceptual aspects and best practices which converge across those languages and frameworks. In particular, the frequent reference to the MVC architecture pattern, especially in the context of frameworks using OO, demonstrates the universality of this particular approach across current web development languages and frameworks. What is also clear from this session is that standardisation of approach and separation of concerns are important in all web development, regardless of your flavour of framework.
The Future of HTML and CSS
This tech-heavy session looked at the past, present and future of the relationship between HTML and CSS, exploring where we are now and how we got here, and how things might change or develop in future. Beginning with a short history lesson in how CSS developed out of the need to separate structure from presentation to resolve cross-browser compatibility issues, the session continued with an exploration of advancements in CSS such as CSS selectors, pseudo classes, CSS Flexbox, etc. and finally moved on to briefly talk about whether the apparent move in a more programmatic direction means that CSS may soon no longer be truly and purely a presentational language.
There was way too much technical detail in this presentation to absorb in the allotted time, but it was an interesting overview of what is now possible with CSS and what may be possible in future. In terms of the philosophical discussion around whether programmatic elements in CSS are appropriate, I’m not sure I agree that this is necessarily a bad thing. It seems to me that as long as the ‘logic’ aspects of CSS are directed at presentation concerns and not business logic, there is no philosophical problem. The difficulty may then lie in identifying the line between presentation concerns and business concerns. At any rate, this is perhaps of less concern than the potential page load overhead produced by increasingly complex CSS!