IS at DrupalCon Amsterdam – Day 2

Yesterday I posted some session summaries from the first full day of DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. DrupalCon Day 2 began on Wednesday with a keynote from Cory Doctorow, a thought-provoking talk on freedom and the internet, a subject on which some of us had previously heard him speak at the IT Futures Conference in 2013, and one which has significance well beyond the context of Drupal for anyone who uses the web. The broader relevance of Cory’s speech is reflected in many of the sessions here at DrupalCon; topics such as automated testing or developments in HTML and CSS are of interest to any web developer, not just those of us who work with Drupal. In particular, the very strong DevOps strand at this conference contains much that we can learn from and apply to all areas of our work, not just Drupal, whether we are developing new tools or managing services.

Our experiences of some of Wednesday’s DrupalCon sessions are outlined below.  Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

How Cultivating a DevOps Culture will Raise your Team to the Next Level

The main idea explored in this session was how to foster a single DevOps culture across existing teams rather than creating a separate DevOps team. DevOps is a movement, a better way to work and collaborate. Rather than forming a new team, the current teams should work together with fewer walls between them. The responsibility for adding new features and keeping the site up can then be shared, but this does mean that information needs to be shared between the teams to enable meaningful discussion.

The session was very dense and covered many aspects of implementing a DevOps culture, including:

  • common access to monitoring tools and logging in all environments;
  • the importance of consistency between environments and how automation can help with this;
  • the need for version control of anything that matters – if something is worth changing, it is worth versioning;
  • communication of process and results throughout the project life-cycle;
  • infrastructure as code, which is a big change but opens up many opportunities to improve the repeatability of tasks and general stability;
  • automated testing, including synchronisation of data between environments.

The framework changes discussed here are an extension of the road we are already on in IS Apps, but the session raised many suggestions and ideas that could usefully influence the direction we take.

Using Open Source Logging and Monitoring Tools

Our current Drupal infrastructure is configured for logging in the same way as the rest of our infrastructure – in a very simple, default manner. Apache access and error logs and MySQL slow query logs are written to the default locations, but not much else is captured. Varnish currently doesn’t log to disk at all as its output is too vast to search. If we are having an issue with Apache on an environment, this could mean manually searching through log files on four different servers.

Monitoring isn’t set up by default by DevTech for our Linux hosts – we would use Dell Spotlight to diagnose issues, but it isn’t something which runs all the time. IS Apps is often unaware that there is an issue until it is reported.

We could solve these issues by using some form of central logging host. This could run a suite of tools such as the ‘ELK stack’, which comprises Elasticsearch, Logstash and Kibana.

By using log shipping, we can copy syslog and other log files from our servers to our log host. Logstash can then parse these logs from their various formats into a standard structure, which Elasticsearch, a Java tool based on the Lucene search engine, can then search through. The resulting aggregated data can then be displayed using the Kibana dashboard.

We can also use these aggregated logs to create metrics. Logstash can write out to Graphite, which stores the data as time-series counters, and Grafana acts as a dashboard for Graphite. As well as data derived from the logs, collectd can also populate Graphite with system data, such as CPU and memory usage. A combination of these three tools could potentially replace Spotlight for some tasks.
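
To make the Graphite part of this concrete: Graphite accepts metrics over a simple plaintext protocol – a metric path, a value and a timestamp. The sketch below is purely illustrative (the host name and metric path are invented, and in practice Logstash and collectd would be sending the data rather than PHP), but it shows the shape of the data that Grafana would then graph.

    <?php
    // Illustrative sketch only: pushing one metric to Graphite over its
    // plaintext protocol ("<metric path> <value> <timestamp>\n" on port 2003).
    // The host name and metric path are invented for this example.
    function example_send_metric($path, $value, $host = 'graphite.example.org', $port = 2003) {
      $socket = @fsockopen($host, $port, $errno, $errstr, 5);
      if ($socket === FALSE) {
        return FALSE; // Graphite host unreachable; give up quietly.
      }
      fwrite($socket, sprintf("%s %s %d\n", $path, $value, time()));
      fclose($socket);
      return TRUE;
    }

    // For example, record how many Apache 5xx responses were seen in the last minute.
    example_send_metric('drupal.live.apache.errors_5xx', 3);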

We need this. Now. I strongly believe that our current logging and monitoring is insufficient, and while all of this is applicable to any service that we run, our vast new Drupal infrastructure particularly shows the weaknesses in our current practices. One of the 5 core DevOps “CLAMS” values is Measurement, and I think that an enhanced logging and monitoring system will greatly improve the support and diagnosis of services for both Development and Production Services.

Drupal in the HipHop Virtual Machine

When it comes to improving Drupal performance, there are three different areas to focus on. The front end is important, as render times will always affect how quickly content is displayed. Data and IO at the back end are also fundamental; poor SQL queries, for example, are a major cause of non-linear performance degradation.

While caching will greatly reduce page load times, for dynamic content which can’t be cached the runtime is the third part of the system that can be tuned. The original version of HipHop compiled PHP sites in their entirety to a C++ binary. The performance was very good, but it took about an hour to compile a Drupal 7 site and it resulted in a 1 GB binary file. To rectify this, the HipHop Virtual Machine (HHVM) introduced Just In Time compilation techniques similar to those used by the Java Virtual Machine; HHVM runs as a FastCGI module.

Performance testing has shown that PHP 5.5 with OPcache is about 15% faster than PHP 5.3 with APC, which is what we are currently using, and HHVM 3.1 offers about the same improvement again over PHP 5.5. However, despite the faster page load times, HHVM might not be perfect for our use. Alongside PHP it supports Hack, a statically typed dialect of PHP, but it doesn’t yet support every element of the PHP language. It is still very new and the documentation isn’t great, but this session demonstrates that it is worth thinking about alternatives to the default PHP that is packaged for our Linux distribution. There are also other PHP execution engines: PHPng (on which PHP 7 will be based), HippyVM and Recki-CT.

In IS Apps, we may want to start thinking about using the Red Hat Software Collections repository to get access to a supported but newer, and therefore potentially more performant, version of PHP.

Content Staging in Drupal 8

This technical session provided a very nice overview of content staging models and how these can be implemented in Drupal 8. The core and contrib modules used were presented, along with example code. The process works by comparing revisions using hash codes to detect changes, then choosing whether to push them to the target websites.
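
As a rough sketch of the hash-comparison idea (the function and field names below are invented for illustration and are not the API of the modules shown in the session), a revision only needs to be pushed when its content hash no longer matches the hash the target site last accepted:

    <?php
    // Illustrative sketch only: decide whether a revision should be pushed to a
    // target site by comparing content hashes. Not the actual Drupal 8 API.
    function example_revision_hash(array $field_values) {
      // Normalise field order so identical content always produces the same hash.
      ksort($field_values);
      return hash('sha256', serialize($field_values));
    }

    function example_needs_push(array $source_fields, $target_hash) {
      // Push only when the source revision differs from what the target last accepted.
      return example_revision_hash($source_fields) !== $target_hash;
    }

    // The target site remembers the hash of the last revision it accepted.
    $target_hash = example_revision_hash(array('title' => 'About us', 'body' => 'Original text'));

    // A new revision is then created on the source (editorial) site.
    $source_fields = array('title' => 'About us', 'body' => 'Updated text');

    if (example_needs_push($source_fields, $target_hash)) {
      print "Revision has changed - push it to the target site.\n";
    }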

What I would take from this session is that it will be feasible to build content staging in Drupal 8 using several workflows, from a simple staging-to-production setup, up to multiple editorial sandboxes feeding production or a central editorial hub feeding multiple production sites. One understandable caveat is that the source and target nodes must share the same fields, otherwise only the source fields will be updated; this can be addressed with proper content strategy management.

Whilst this session focused on Drupal 8, the concepts and approach discussed are of interest to us as we explore how to replicate content in different environments, for example between Live and Training, in the University’s new central Drupal CMS.

Testing

Automated Frontend Testing

This session explored three aspects of automated testing: functional testing, performance testing and CSS regression testing.

From the perspective of developing the University’s new central Drupal CMS, there were a number of things to take away from this session.

In the area of functional testing, we are using Selenium WebDriver test suites written in Java to carry out integration tests via Bamboo as part of the automated deployment process.  Whilst Selenium tests have served us well to a point, we have encountered some issues when dealing with JavaScript-heavy functionality.  CasperJS, which uses the PhantomJS headless WebKit browser and allows scripted actions to be tested using an accessible syntax very similar to jQuery, could be a good alternative tool for us.  In addition to providing very similar test suite functionality to what is available to us with Selenium, there are two features of CasperJS that are not available with our current Selenium WebDriver approach:

  • the ability to specify browser widths when testing in order to test responsive design elements, which was demonstrated using picturefill.js, and which could prove invaluable when testing our Drupal theme;
  • the ability to easily capture page status to detect, for example, 404 errors, without writing custom code as with Selenium.

For these reasons, we should explore CasperJS when writing the automated tests for our Drupal theme, and ultimately we may be able to refactor some of our existing tests in CasperJS to simplify the tests and reduce the time spent on resolving intermittent Selenium WebDriver issues.

On the performance testing front, we do not currently use any automated testing tools to compare such aspects of performance as page load time before and after making code changes.  This is certainly something we should explore, and the tools used during the demo, PageSpeed and Phantomas, seem like good candidates for investigation. A tool such as PageSpeed can provide both performance metrics and recommendations for how to resolve bottlenecks. Phantomas could be even more useful as it provides an extremely granular variation on the kind of metrics available using PageSpeed and even allows assertions to be made to check for specific expected results in the metrics retrieved. On performance, see also the blog post from DrupalCon day 1 for the session summary on optimising page delivery to mobile devices.

Finally, CSS regression testing with Wraith, an open source tool developed by the BBC, was demonstrated.  This tool produces a visual diff of output from two different environments to detect unexpected variation in the visual layout following CSS or code changes.  Again, we do not do any CSS regression testing as part of our deployment process for the University’s new central Drupal CMS, but the demo during this talk showed how easy it could be to set up this type of testing. The primary benefit gained is the ability to quickly verify for multiple device sizes that you have not made an unexpected change to the visual layout of a page. CSS regression testing could be particularly useful in the context of ensuring consistency in Drupal theme output following deployment.

I can highly recommend watching the session recording for this session.  It’s my favourite talk from this year’s DrupalCon and worth a look for any web developer.  The excellent session content is in no way specific to Drupal.  Also, the code samples used in the session are freely available and there are links to additional resources, so you can explore further after watching the recording.

Doing Behaviour-Driven Development with Behat

Having attended a similar, but much simpler and more technically focused, presentation at DrupalCamp Scotland 2014, my expectation for this session was to better understand Behaviour Driven Development (BDD) and how Behat can be used to automate testing using purpose-written scripts. The session showed how BDD can be integrated easily into Agile projects because its main source of information is discussion of business objectives. Examples were provided alongside user stories to better explain the business benefit.
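
To give a flavour of how this looks in practice (this is a minimal, invented example in the Behat 3 style, not something shown in the session), the scenario is written in plain business language and each step maps onto a short PHP method:

    <?php
    // Minimal sketch of a Behat step-definition class. The scenario, step
    // wording and internals are invented examples for illustration only.
    //
    // The matching Gherkin scenario (e.g. features/search.feature) might read:
    //   Scenario: A visitor searches the site
    //     Given I am on the homepage
    //     When I search for "term dates"
    //     Then I should see "Semester 1" in the results

    use Behat\Behat\Context\Context;

    class FeatureContext implements Context {

      private $results = array();

      /** @Given I am on the homepage */
      public function iAmOnTheHomepage() {
        // A real context would open the site here, e.g. via the Mink extension.
      }

      /** @When I search for :term */
      public function iSearchFor($term) {
        // Faked results keep the sketch self-contained; a real test would
        // drive a browser and read back what the page actually shows.
        $this->results = ($term === 'term dates') ? array('Semester 1 dates') : array();
      }

      /** @Then I should see :text in the results */
      public function iShouldSeeInTheResults($text) {
        foreach ($this->results as $result) {
          if (strpos($result, $text) !== FALSE) {
            return;
          }
        }
        throw new \Exception("'$text' was not found in the search results.");
      }
    }

Because the scenario reads as plain language, it can be agreed with business stakeholders before any code is written, which is exactly the discussion-driven aspect the session emphasised.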

I strongly believe that this testing process is something to look into more deeply. It would enable quicker, more comprehensive and better documented user acceptance testing to take place following functionality updates, saving time spent writing long documents and hours of manual work. Another clear benefit is that the examples being tested reflect real business needs and requests, ensuring that deliverables actually follow the discussed user stories and satisfy their conditions. Finally, this highlights the importance of good planning and how it can help later project stages, such as testing, run more smoothly and quickly.

UX Concerns

Building a Tasty Backend

This session was held in one of the smaller venues and was hugely popular; there was standing room only by the start, or even “sitting on the floor room” only. Obvious health and safety issues there!

The focus of this session was to explore Drupal modules that can help improve the UX for CMS users who may be intimidated by, or frankly terrified of, using Drupal, demonstrating how it is possible to simplify content creation, content management and navigation of the Admin interface without re-inventing the wheel.

The general recommended principle is “If it’s on the page and it really doesn’t need to be, get rid of it!”.  Specific topics covered included:

  • using the Field Group module to arrange related content fields into vertical tabs, simplifying the user experience by showing only what the user needs to see;
  • disabling options that are not really required or don’t work as expected (e.g. the Preview button when editing a node) to remove clutter from the interface – a sketch of one way to do this follows the list;
  • using Views Bulk Operations to tailor and simplify how users interact with lists of content;
  • customising and controlling how each CMS user interacts with the Admin menu system using modules such as Contextual Administration, Admin Menu Source and Admin Menu Per Menu.
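
As an example of the second point in the list, hiding the node Preview button in Drupal 7 needs only a small form alteration in a custom module. This is a sketch of one common approach (‘mymodule’ is a hypothetical module name), rather than necessarily the way the presenter demonstrated it:

    <?php
    /**
     * Implements hook_form_alter().
     *
     * Sketch of hiding the Preview button on node edit forms to reduce clutter
     * for CMS users; 'mymodule' is a placeholder for a real custom module.
     */
    function mymodule_form_alter(&$form, &$form_state, $form_id) {
      // Only act on node add/edit forms.
      if (!empty($form['#node_edit_form']) && isset($form['actions']['preview'])) {
        // Hide the button rather than deleting it, so other modules are unaffected.
        $form['actions']['preview']['#access'] = FALSE;
      }
    }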

The most interesting thing about this talk in light of our experience developing the University’s new central Drupal CMS is how closely many of the recommendations outlined in this session match our own module selection and the way in which we are handling the CMS user experience.  It is reassuring to see our approach reflected in suggested best practices, which we have come to through our knowledge and experience of the Drupal modules concerned, combined with prototyping and user testing sessions that have at times both validated our assumptions and exposed flaws in our understanding of the user experience.  As was noted in this session, “Drupal isn’t a CMS, it’s a toolkit for building a CMS”; it’s important that we use that toolkit to build not only a robust, responsive website but also a clear, usable and consistent CMS user experience.

Project Management and Engagement

Getting the Technical Win: How to Position Drupal to a Sceptical Audience

This presentation started with the bold statement that no one cares about the technology, be it Drupal, Adobe, Sitecore or WordPress. Businesses care about solutions, and Drupal can offer the solution. Convincing people is hard; removing identified blockers is the easier part.

In order to understand the drivers for change we must ask the right questions. These can include:

  1. What are the pain points?
  2. What is the competition doing?
  3. Most importantly, take a step back and don’t dive into a solution immediately.

Asking these kinds of questions will help build a trusted relationship. To this end, it is sometimes necessary to be realistic, and sometimes there is the need to say no. Understanding what success will look like, and what happens if change is not implemented, are two further key factors.

The presentation then moved on to technical themes. It is important to acknowledge that some people have favoured technologies. While Drupal is not the strongest technology, it has the biggest community and, with that, huge technical resources, ensuring longevity and support. Another common misconception is around scalability; however, Drupal’s scalability has been proven.

In the last part of the presentation, attention turned to the sales process, focussing on the stages and technicalities involved in closing a deal. The presentation ended with a promising motto: “Don’t just sell, promise solutions instead.”

Although this was a sales presentation it offered valuable arguments to call upon when encouraging new areas to come aboard the Drupal train.


Looking to the Future

Future-Proof your Drupal 7 Site

This session primarily explored how best to future-proof a Drupal site by selecting modules from the subset that have either been moved into Drupal core in version 8 or been backported to Drupal 7.  We are already using most of the long list of modules discussed here for the University’s new Drupal CMS.  For example, we recently implemented the Picture and Breakpoints modules, both of which have been backported to Drupal 7, to meet responsive design requirements.  This gives us a degree of confirmation that our module selection process will be effective in ensuring that we future-proof the University’s new central Drupal CMS.

In addition to the recommended modules, Migrate was mentioned as the new upgrade path from Drupal 7 to Drupal 8, so we should be able to reuse the knowledge gained while migrating content from our existing central CMS to Drupal when we eventually upgrade from Drupal 7 to Drupal 8.

Symfony2 Best Practices from the Trenches

The framework underpinning Drupal 8 is Symfony2, and whilst we are not yet using Drupal 8, we are exploring web development languages and frameworks in other areas, one of which is Symfony2. As Symfony2 is object oriented, it’s also useful to see how design patterns such as Dependency Injection are applied outside the more familiar Java context.

The best practices covered in this session seem to have been discovered through the bitter experience of the engaging presenter, and many of them are applicable to other development frameworks.  Topics covered included:

  • proper use of dependency injection in Symfony2 and how this can allow better automated testing using mock DB classes (see the sketch after this list);
  • the importance of separation of concerns and emphasis on good use of the service layer, keeping Controllers ‘thin’;
  • appropriate use of bundles to manage code;
  • selection of a standard method of configuration to ensure clarity, readability and maintainability (XML, YAML and annotations can all be used to configure Symfony2);
  • the importance of naming conventions;
  • recommended use of Composer for development using any PHP framework, not just Symfony2.
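
The first two points are easier to see with a small example. The sketch below is a deliberately simplified illustration of constructor injection and a ‘thin’ controller; the class names are invented, and it assumes the Symfony HttpFoundation component is available (for instance via Composer):

    <?php
    // Simplified sketch: a thin controller that delegates to an injected service.
    // Class and method names are invented for illustration.

    use Symfony\Component\HttpFoundation\JsonResponse;

    // The service layer owns the business logic, behind an interface...
    interface CourseRepositoryInterface {
      /** @return array of course titles */
      public function findPublishedCourses();
    }

    // ...with a real implementation that would normally query the database.
    class DoctrineCourseRepository implements CourseRepositoryInterface {
      public function findPublishedCourses() {
        // Real queries omitted in this sketch.
        return array('Informatics 1', 'Informatics 2');
      }
    }

    // The controller stays thin: it receives its dependency and simply delegates.
    class CourseController {
      private $courses;

      public function __construct(CourseRepositoryInterface $courses) {
        $this->courses = $courses;
      }

      public function listAction() {
        return new JsonResponse($this->courses->findPublishedCourses());
      }
    }

    // Because the dependency is injected, a test can swap in a mock repository
    // and never touch the database.
    class InMemoryCourseRepository implements CourseRepositoryInterface {
      public function findPublishedCourses() {
        return array('Test course');
      }
    }

    $controller = new CourseController(new InMemoryCourseRepository());
    // $controller->listAction() now returns predictable test data.

In a full Symfony2 application the wiring of CourseController to a concrete repository would live in the bundle’s service configuration, which is exactly where the advice above about choosing one configuration format and sticking to it applies.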

I have attended two or three sessions about Symfony2 at this conference, as well as a talk on using Ember.js with headless Drupal.  It’s interesting to note that whilst there are an increasing number of web development languages and tools to choose from, many conceptual aspects and best practices converge across them.  In particular, the frequent reference to the MVC architectural pattern, especially in the context of object-oriented frameworks, demonstrates the universality of this approach across current web development languages and frameworks. What is also clear from this session is that standardisation of approach and separation of concerns are important in all web development, regardless of your flavour of framework.

The Future of HTML and CSS

This tech-heavy session looked at the past, present and future of the relationship between HTML and CSS, exploring where we are now, how we got here, and how things might change or develop in future. Beginning with a short history lesson in how CSS developed out of the need to separate structure from presentation and to resolve cross-browser compatibility issues, the session continued with an exploration of advancements in CSS such as new selectors, pseudo-classes and Flexbox, and finally moved on to talk briefly about whether the apparent move in a more programmatic direction means that CSS may soon no longer be truly and purely a presentational language.

There was way too much technical detail in this presentation to absorb in the allotted time, but it was an interesting overview of what is now possible with CSS and what may be possible in future.  In terms of the philosophical discussion around whether programmatic elements in CSS are appropriate, I’m not sure I agree that this is necessarily a bad thing.  It seems to me that as long as the ‘logic’ aspects of CSS are directed at presentation concerns and not business logic, there is no philosophical problem.  The difficulty may then lie in identifying the line between presentation concerns and business concerns.  At any rate, this is perhaps of less concern than the potential page load overhead produced by increasingly complex CSS!