On Tuesday I posted some comments on the start of DrupalCon 2015 in Barcelona, where a few colleagues and I are spending this week. The major strands this year are Docker, performance and scalability issues, the Symfony framework in a Drupal 8 context, and using headless Drupal with an alternative toolkit providing the front-end. So far it’s been really interesting to see how sessions on Symfony and Drupal 8 have progressed from last year’s DrupalCon in terms of the complexity of what is being covered. It’s also interesting to see how many attendees are using tools like Vagrant, Docker and Jenkins to take the pain out of manual configuration and deployment. We have been using automated deployment tools for application code for some time now in IS Apps; configuration management and automatic creation of server environments are areas that could further streamline our deployment workflow.
Our experiences of some of Tuesday’s DrupalCon sessions are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.
Symfony for Drupal Developers
Drupal 8 makes use of many Symfony components, and this session covered the differences between the two frameworks to help decide which to use for a given project. Drupal uses about a third of the Symfony components, and you don’t need to know Symfony to develop for Drupal.
Some differences are more obvious than others: for example, while the application entry point in Drupal 8 is index.php, in Symfony it’s web/app.php and web/app_dev.php. These two entry points arise from the fact that Symfony enforces a programmatic toggle between Development and Production modes: you push generated code to production heads; you do not compile on production boxes.
Drupal uses the kernel a little differently and imposes stricter coding standards. For instance, Drupal always uses the View event, whereas this is discouraged in Symfony. Drupal 8 coding standards are quite strict in prescribing when to use YAML or annotations for configuration. With Symfony configuration you are free to use PHP, XML, YAML or annotations, although best practice is to pick one and stick to it.
For developers coming from Drupal 7, one of the fundamental differences in Symfony is that there are no functions; it’s all methods, with only a few static functions available. All meaningful logic in Symfony lives in services, which are stateless objects.
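As a rough illustration of that idea, here is a minimal sketch of a stateless service class (the class name, namespace and behaviour are hypothetical); it would be registered in the service container (via YAML, XML or PHP configuration, per the point above) and injected wherever it is needed, rather than called as a global function:

```php
<?php

namespace AppBundle\Service;

/**
 * A hypothetical, stateless service: it holds no request- or user-specific
 * state, so the container can safely reuse a single instance everywhere.
 */
class SlugGenerator
{
    /**
     * Turns "Hello, DrupalCon!" into "hello-drupalcon".
     */
    public function slugify($title)
    {
        // Lowercase, then replace runs of non-alphanumerics with hyphens.
        $slug = strtolower(trim($title));
        $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);

        return trim($slug, '-');
    }
}
```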
Paths and routing are handled differently in Symfony and Drupal 8. In Drupal 8, you can only use module routing.yml files or events, not annotations, XML or PHP. Also, you don’t have path nesting or slugs in Drupal 8.
While both frameworks now use Twig in the theming layer, in Drupal 8 you work with multiple Twig files, one for each element, mirroring the templating system in Drupal 7. Symfony templating typically uses a single file which extends a parent Twig file and overrides blocks defined in the parent template(s). Also, Drupal always requires a render array; you don’t return rendered output directly from controllers.
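To illustrate that last point, here is a minimal sketch of a Drupal 8 controller (the module and class names are hypothetical) returning a render array rather than rendered markup:

```php
<?php

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;

/**
 * Hypothetical controller for an imaginary "example" module.
 */
class HelloController extends ControllerBase {

  /**
   * Returns a render array; Drupal decides how and when to render it.
   */
  public function hello() {
    return [
      '#markup' => $this->t('Hello from a render array.'),
    ];
  }

}
```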
Drupal has multiple APIs for storing content data; Symfony has nothing built in, so you use Doctrine (or something else). Doctrine can only store primitive data and, being a stand-alone PHP project, has different event listeners from Symfony.
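For comparison, a minimal, hypothetical Doctrine entity looks like the sketch below; note that the mapped columns are primitive types such as integers and strings:

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * A hypothetical entity mapped with Doctrine annotations.
 *
 * @ORM\Entity
 * @ORM\Table(name="article")
 */
class Article
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue
     */
    private $id;

    /**
     * @ORM\Column(type="string", length=255)
     */
    private $title;

    public function getTitle()
    {
        return $this->title;
    }

    public function setTitle($title)
    {
        $this->title = $title;
    }
}
```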
This talk highlights the fact that whilst Drupal 8 is using Symfony components, these are very much used with a Drupal flavour. Comprehending Drupal 8 and how it uses Symfony requires an understanding of what has gone before in previous versions of Drupal as well as knowledge of Symfony concepts and techniques.
Estimation: from waterfall to agile
In this session, the Danish digital agency Adapt covered the background to their move from a Waterfall project methodology to an Agile approach, describing their experience of that transition and specifically how they tackled estimation. Starting from a position where they were losing money on 50% of their projects due to inaccurate estimates and the need for their customers to prioritise scope, they adopted an Agile “light” methodology, only to quickly realise that with Agile it needs to be all or nothing.
The presentation continued with an outline of the process used to define user stories, the use of planning poker, and the need for clearly defined roles and just-in-time management. The key to relative estimation, i.e. measuring feature size in story points rather than hours, is to involve all of the project team members in the process. Time-boxed planning poker sessions, with limited discussion, raised the knowledge level across the project team and led to more accurate forecasting. The iteration or sprint velocity was then calculated by breaking down the tasks (from development to testing) into the hours each was expected to take to complete. If a story took longer to complete than expected, the story points associated with it did not change; instead, this information was used to forecast how much could be delivered within the project, allowing the customer to prioritise what remained in the backlog.
The two main benefits derived from this change in approach to delivering projects were that, firstly, they were now in a position to keep to fixed budgets and, secondly, knowledge was gained during the estimation process. However, there were also negatives, primarily concerning small projects, where this approach has proved difficult to implement and there is often not sufficient time for people to become accustomed to it.
My main takeaways from this session are “Customise processes along the way” and “Know what you don’t know and accept that!”. Projects and customers (business units) are different and there is a need to refine and adapt the Agile process where experience shows that refinement is required. The more experience the project team have using Agile, the easier this process becomes. At the outset of a project, especially with larger scale projects, there are inevitably unknowns; it’s crucial to identify and accept these unknowns. As a project progresses, unknowns should become knowns and these can be requirements, risks or opportunities.
Configuration Deployment Best Practices in Drupal 8
This was an engaging (if somewhat caffeine-fuelled!) session. Drupal currently has a problem with the separation of configuration from content. Code is developed in DEV then pushed out to TEST, STAGING, then LIVE. Content, however, is created on LIVE and the other environments are refreshed from it. “Content” can be thought of as the database, which has tables for both the created content and the configuration, but as we want Drupal configuration to be developed in DEV and tested through the environments, what is really needed is for configuration to be treated like code.
Drupal 7 does not have a good way to deal with this, although the Features contrib module can be used, as we are doing in our own Drupal CMS, to make our configuration fully deployable through our automated deployment process. In Drupal 8, configuration management on a managed workflow seems to offer many benefits over the older version. Configuration is totally separated out into YAML files. These files can then be committed to a version control system like code, providing accountability and the ability to audit configuration changes. All of this makes Continuous Integration much easier, and may make configuration rollback possible. The YAML configuration files are imported into the database for added performance, allowing a more robust method of managing configuration than hook_update_N, or than using the current Features module in ways for which it wasn’t strictly designed; drush has also been extended to work with this functionality.
This was an interesting talk which made some very useful suggestions for how configuration should be managed between environments when using Drupal 8, as well as how exceptions can be handled. This area should be explored further when planning for the Drupal 8 upgrade; we should look into replacing our reliance on the Features module for exported configuration with the equivalent in YAML configuration.
Altering, Extending and Enhancing Drupal
One of the many sessions on what is new in Drupal 8 versus Drupal 7, this was an interesting talk giving a high level summary of the mechanisms for customising and extending Drupal 8. Topics covered were plugins, services, events and hooks. In Drupal 8, the principle for plugins is “Learn once, apply everywhere”, moving away from the inconsistencies between modules and how they are used by having plugin classes implement an interface, so there is a common approach. Services in Drupal 8 are very well decoupled and can easily be swapped out, for example for testing purposes. Event handling allows modules to react to Drupal application actions and/or conditions in a standard manner that is common in OOP rather than using hooks to react when something happens. Hooks still exist in Drupal 8, but are primarily for modifying metadata which has been gathered by other means, or to alter forms.
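As a small illustration of that last point, here is a hedged sketch of a classic hook surviving in Drupal 8, used to alter a form (the module name and the altered text are hypothetical):

```php
<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 *
 * Hypothetical example, assumed to live in an "example.module" file:
 * hooks remain the way to alter forms and metadata gathered elsewhere.
 */
function example_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  if ($form_id == 'user_login_form') {
    $form['name']['#description'] = t('Enter your institutional username.');
  }
}
```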
One interesting question that was raised at this session was how to determine whether what is needed to implement a feature is a plugin or a service. A useful way to decide this is to think about a service as something that you would usually only have one of for any Drupal instance, for example, a caching service.
This succinct outline of mechanisms for extending Drupal 8 highlighted the fact that these mechanisms are less specific to Drupal than in previous versions. Hooks remain, but whereas previously they would have been used for everything, Drupal 8 leverages Symfony to add new ways of doing things that help improve code structure and re-usability. The patterns and techniques are familiar from other contexts where OO is used, which brings a consistency and helps developers to better avoid the unnecessary, and at times frustrating and confusing, proliferation of different ways of doing things. These approaches also help with documentation – creating common patterns for implementing new modules means that Drupal developers are not so much at the mercy of how good the documentation for a particular module is. All of these factors should improve the development experience in Drupal 8.
Fundamentals of Front-End Ops
As more application logic is being handled client-side, Front-end Ops is a response to the proliferation of front-end tools and frameworks. This session was not about the Drupal framework, but instead looked at tools for automating front-end development tasks, managing dependencies and generating scaffolding.
For scaffolding tasks, Yeoman was demonstrated. Yeoman recommends that your workflow involves Bower for dependency management and Grunt or Gulp for task automation. The session covered installing and using Yeoman, Bower, Grunt and Gulp as well as comparing the merits of Grunt and Gulp.
The talk covered the BBC’s Wraith, which leverages PhantomJS and SlimerJS to provide visual diffs of screenshots between two environments, as well as other visual regression tools, namely Huxley and PhantomCSS. There was also a discussion of the test rendering engines available: PhantomJS, SlimerJS, CasperJS and GhostLab.
Finally some of the front-end debugging tools available were covered, including Chrome DevTools Remote Debugging which allows you to connect a mobile or tablet to your desktop machine and use the development tools on the desktop browser to inspect the DOM, etc, on the mobile device.
Docker powered team and deployment
This session gave a basic overview of the features and benefits of Docker and of infrastructure as code. It focused on Bowline as an easy way to get started using Drupal on Docker, offering great flexibility and minimum requirements via a suite of BASH scripts that can be included in your Drupal code repositories. The scripts add a method for container installation, and can also be used to “hoist” (start up) other containers at the same time for local development, such as a Behat container for testing. The aim is to simplify and facilitate the configuration and linking of containers for Drupal setups. This definitely looks like the way forward for the provisioning and support, at the very least, of our Dev environments. Taking the paradigm further than just sandboxes for development, the ability to move docker containers through an environment pipeline from Dev to Test and into Production was very interesting and worth further investigation.
While much of the session was used to introduce Docker, it was interesting to see the various methods, like Bowline, that are being used to enable developers to work with Docker containers as the next step on from Vagrant. It was also interesting that almost everyone in the room was using Vagrant and had Jenkins as their CI server, and about half were developing on Linux – none of which are currently true for Development Services in general.
Using Docker (or something similar) could be beneficial for us, mainly because it would give us readily available, standard and up-to-date environments for development or support use. We do, however, already achieve something quite similar with virtual machine deployment environments on local machines.
Docker in the DrupalCI test infrastructure
As seems to be the current standard, this Docker session also started with the obligatory shipping-metaphor-laden introduction to containerisation, which I won’t repeat (see https://www.docker.com/whatisdocker for the official version). DrupalCI is a project to make it easier for developers to do local testing, and to enable testing of different combinations of PHP versions and database backends. It works by having a base Docker image, from which a PHP base image and a database base image are created. With only small adjustments, containers with variations of these, such as PHP 5.5 and PHP 5.6 environments, can be produced.
The talk also raised important security points which arise from using containerisation: one needs to think about running a private image registry (an ‘IS Apps Docker Hub’); SELinux should be used to reduce the possibility of malicious containers breaking out into the host; and someone must take responsibility for keeping the images, and the running containers, up to date.
While it is frustrating to keep seeing the same Docker introductions, it is interesting that so many sessions this year have been dedicated to the technology, showing that it is being used by many in the community.
Solving Drupal Performance and Scalability Issues
In this session, Tine Sørensen drew on her years of experience in optimising performance for Drupal sites and troubleshooting scalability issues to highlight some of the techniques that can be used to diagnose issues, pick off the ‘low-hanging fruit’ and achieve great improvements without great expense. There is no real value in spending six months rewriting some aspect of a module that is not performant if there is only a very small improvement at the end of that time. The recommended starting point when a performance issue is found is to collect data from the site, analyse it, choose where to apply effort – prioritising where the ratio of gain to effort is greatest – then repeat the process until performance is satisfactory.
The core message of this talk was the importance of collecting data to demonstrate what is actually happening, and the huge gains in efficiency when pinpointing performance issues that can be achieved simply by using monitoring tools. Tine focused on New Relic as that is where her experience lies. As we found when we had external consultancy for our Drupal CMS project, New Relic is the tool of choice for monitoring Drupal and diagnosing performance issues; the Pro version has even more functionality, such as XHProf-like profiling. Like us in IS Apps, 50% of the audience were using New Relic for Drupal monitoring. Monitoring tools like New Relic can give an extremely useful picture of performance bottlenecks. For example, using the Pro version, it’s possible to see a list of PHP functions being called, ordered by execution time; this shows up any particular function that might be causing problems. A developer can then go straight to the function in question and analyse the code to identify any issues. We have used this technique ourselves whilst developing our Drupal CMS and it certainly can save hours, if not days, of time spent drilling down through XHProf reports.
Examples of quick wins were also provided, such as switching from GD to ImageMagick, disabling Views UI, tuning caching and tuning queries that are performing poorly. Of these, the GD/ImageMagick switch is the only one that was unfamiliar. It was particularly interesting to see that the server settings and tuning recommendations correspond to those we already use internally; it is useful to have our current approach validated by the speaker who is an experienced consultant.
For me, the main takeaways from this talk were that it is absolutely essential to understand what is happening on the servers when performance is poor, and that the less time you have to spend on collecting that data, the more quickly and efficiently you can resolve the problem. It’s also important not to simply throw hardware at a performance issue; that can resolve things in the short term, but ultimately it only masks the problem, particularly if that problem is not fully understood, with the risk that it could resurface in a more damaging way in future.
Drupal 8 Plugin Deep Dive
This session covered the new Plugin system in Drupal 8, which replaces the hook_info() and hook_info_alter() pattern. This is one of the areas where Drupal differs from other CMSs and frameworks; Symfony bundles are hard-coded, whereas Drupal plugins are configurable and discoverable.
Plugins do away with many of the Drupal 7 hooks in favour of the new PluginManagerInterface model, and examples of these in core were covered.
Plugin autoloading, dependency injection, service containers and annotations were all covered before going on to demonstrations of building your own plugins.
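To give a flavour of what a Drupal 8 plugin looks like, here is a minimal, hypothetical block plugin; discovery happens via the annotation rather than via an info hook as in Drupal 7 (the module, ID and label are made up for illustration):

```php
<?php

namespace Drupal\example\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides a hypothetical "Hello" block.
 *
 * @Block(
 *   id = "example_hello_block",
 *   admin_label = @Translation("Hello block")
 * )
 */
class HelloBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    // Like controllers, block plugins return render arrays.
    return [
      '#markup' => $this->t('Hello from a plugin.'),
    ];
  }

}
```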
The session was very technical, covering low-level code samples in detail. It was a good companion piece to the earlier session on altering and extending Drupal 8.
Drupal 8: The Backend of Frontend
The Drupal 8 theming layer has been re-written and now uses Twig as its template engine. Theme functions are pretty much done away with, replaced by Twig templates, and the theme process hooks have gone too now that Twig is used. You still have the two levels of template_preprocess and hook_preprocess hooks, but now everything they return in the variables array needs to be a render array.
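As a rough sketch of that last point (the theme and variable names are hypothetical), a preprocess implementation in a .theme file might hand a render array through to the corresponding Twig template:

```php
<?php

/**
 * Implements hook_preprocess_HOOK() for node templates.
 *
 * Hypothetical example, assumed to live in an "example.theme" file.
 */
function example_preprocess_node(&$variables) {
  // Passed through to node.html.twig as {{ example_notice }}.
  $variables['example_notice'] = [
    '#markup' => t('Prepared as a render array, not pre-rendered markup.'),
  ];
}
```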
The session also covered writing Twig templates and how to extend Twig with your own custom functions/filters in Drupal.
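On the custom functions/filters point, a hedged sketch of a Twig extension in Drupal 8 might look like the following (the class and filter names are hypothetical); such a class is registered as a service tagged with twig.extension:

```php
<?php

namespace Drupal\example\TwigExtension;

/**
 * A hypothetical Twig extension adding a |shout filter to templates.
 */
class ExampleTwigExtension extends \Twig_Extension {

  /**
   * {@inheritdoc}
   */
  public function getName() {
    return 'example.twig_extension';
  }

  /**
   * {@inheritdoc}
   */
  public function getFilters() {
    return [
      // Usage in a template: {{ node.title.value|shout }}
      new \Twig_SimpleFilter('shout', function ($text) {
        return strtoupper($text) . '!';
      }),
    ];
  }

}
```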
Drupal enables Twig’s auto-escaping; the security issues around this, and how to mark strings as safe so they are not escaped, were also covered.
Symfony2: The journey from the request to the response
This session was presented by Sarah Khalil of SensioLabs (creators of Symfony). It covered the components involved in processing an HTTP request, starting with the front controller (app.php in Symfony, index.php in Drupal) passing the Symfony HttpFoundation request through the HttpKernel.
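For reference, a Symfony 2 front controller is only a few lines; this minimal sketch follows the standard edition layout (assuming the usual AppKernel class and autoloader), turning the incoming HttpFoundation Request into a Response via the HttpKernel:

```php
<?php
// web/app.php – minimal sketch of a Symfony 2 production front controller.

use Symfony\Component\HttpFoundation\Request;

// Load the Composer autoloader and the application kernel.
$loader = require __DIR__ . '/../app/autoload.php';
require_once __DIR__ . '/../app/AppKernel.php';

$kernel = new AppKernel('prod', false);

// HttpFoundation Request in, HttpKernel handles it, Response out.
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);
```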
The Routing component’s YAML files are named and located differently in Drupal, but in essence the process is unchanged: the router passes the request to a controller, which returns an HttpFoundation response.
Symfony’s Event Dispatcher component was covered in some detail with examples of event listeners and subscribers, the differences between them, and examples of how Drupal core implements these.
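Here is a minimal sketch of an event subscriber of the kind discussed (the class name and header are hypothetical); in Drupal 8 it would additionally be registered as a service tagged event_subscriber:

```php
<?php

namespace Drupal\example\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * A hypothetical subscriber that reacts to the kernel.response event.
 */
class ExampleResponseSubscriber implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    // Map event names to the methods that handle them.
    return [KernelEvents::RESPONSE => 'onResponse'];
  }

  /**
   * Adds a custom header to every outgoing response.
   */
  public function onResponse(FilterResponseEvent $event) {
    $event->getResponse()->headers->set('X-Example', 'DrupalCon 2015');
  }

}
```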
After listing the seven kernel events that you should know, together with the Symfony components that Drupal uses, the Dependency Injection component’s three key concepts of service container, services and parameters were detailed.
Finally, there was a quick look at the concepts in Twig, with the caveat that Drupal does not use all the features available in Twig.
Caching at the Edge: CDNs for everyone
This session, although fairly technical, was well presented by clearly very knowledgeable speakers and touched on upcoming technology such as service-worker (client-side caching), ESI and Big Pipe.
Content Delivery Networks are external, multi-sited hosts which enable content to be delivered with lower latency from caches local to the user. They can be used just for static assets (JS, CSS, images), but also for dynamic content, although the latter is far more complicated. As our own Varnish infrastructure demonstrates, delivering anonymous content from a cache is fairly simple as it rarely changes; this advanced session focused mainly on caching authenticated content with CDNs.
As we have seen in earlier DrupalCon sessions, Drupal 7 is limited in what it can offer in this context; it can only use “max-age” caching, or scripted/manual purging of stale content from CDNs. Drupal 8 looks to have taken a big step forward in terms of the effects of caching on website performance, providing three main cache invalidation techniques:
- cache tags, which show where data dependencies for caching exist;
- cache contexts, which give the context of dependencies for requests;
- cache max-age, as found in Drupal 7, which gives the time dependency of what is cached.
Together these enable placeholders and auto-placeholdering, whereby Drupal 8 “knows” what content makes up the page and so knows which can be retrieved from a CDN and which needs to be dynamically requested from Drupal.
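A minimal sketch of how that metadata might be attached to a Drupal 8 render array (the markup, tag and values here are invented for illustration):

```php
<?php
// Hypothetical build array for a piece of page content in Drupal 8.
$build = [
  '#markup' => t('Latest article teaser'),
  '#cache' => [
    // Invalidated whenever node 42 is saved.
    'tags' => ['node:42'],
    // Varies by the viewing user's roles.
    'contexts' => ['user.roles'],
    // Time-based expiry in seconds, as in Drupal 7.
    'max-age' => 3600,
  ],
];
```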
Using this mechanism, it’s possible to perform Edge Side Includes with authentication being cached per session. This could be used with Varnish rather than a CDN, enabling us to cache a lot of our HTTPS traffic, which is where we are currently experiencing our worst performance. The demonstration of the response time improvements from caching user specific content was very impressive. While complicated, I strongly recommend that using Varnish for authenticated users is investigated when upgrading to Drupal 8. This would perhaps be one of the main reasons for considering upgrading when it is finally released.
BigPipe ( https://github.com/bigpipe/bigpipe ), a node.js framework which can be used with Drupal 8, was also demonstrated. It breaks up pages into smaller chunks so they can load asynchronously, decreasing the time for the first elements of page content to load.
Cut the crap. Practical tips and real world examples for removing waste from your development process.
This session dealt with a rather different approach to managing projects, albeit with the caveat that it covers smaller pieces of work. Basically, anything at all that can be deemed as not adding tangible benefit should be removed from the project. We all know that getting decision makers to make decisions quickly can be challenging, so the first thing is to identify your decision maker. In the example given, the presenter, Jason Mark, described a project that was delivered within four weeks. The first week was used for requirements, design and templates, and starting the build. The remaining three weeks involved working with the client/partner to test and refine what was delivered.
If this fast-track approach is to work, it needs several things to fit, the main four being:
- A clearly identified decision maker;
- No hidden stakeholders;
- The right people – low ego and the ability to be flexible are key characteristics;
- The technology must fit the requirements, and the requirements must fit the technology.
If there are blockers, especially people, find ways to turn them into champions by using positive creative language.
The top takeaways from this session are twofold. Firstly, take a step back and look at the project in terms of the four points above. Can these points be answered positively? If not, what can be done to turn this around and make the process fit? Secondly, because this is a fast approach to turning a piece of work around, the planning will not be complete before the work starts. This makes change inevitable, which needs to be embraced at the outset and not seen as something negative. To repeat a quote attributed to Buddha:
“Change is never painful, only the resistance to change is painful”
Above all, only focus on things that add value! As Project Manager you can ask this every day!
Headful Drupal
This session was about headless Drupal. Perhaps the only benefit for us is the security advantage of removing some front-end admin components. Alternative means of achieving what we require do, however, seem quite time-consuming, and careful consideration of the particular development context would be needed before going down the headless Drupal route.
Visualizing Logfiles with ELK Stack
This was an interesting session that would have benefited from demonstrating a concrete example. The material presented was somewhat abstract and the potential for an escalation in complexity and the associated infrastructure requirements of such a system seemed a little daunting without something to tie it to a real world example. However, a centralised logging system would be a great asset to IS Apps even beyond the context of Drupal and EdWeb. An ELK stack, in some incarnation, should be a serious consideration.
Migrating a running service (Mollom) to AWS without service interruptions and reduce costs
A disappointing session that was difficult to generalise from, and which only seemed relevant to the specific service that the speakers were moving into the Amazon cloud. The Mollom spam protection system seemed far removed from anything managed by IS Apps. It was claimed that the switch to AWS reduced the number of alerts received by the Ops team, but no other metrics were presented in terms of savings to the business as a whole. The takeaway from this session seemed simply to be that you need to think differently about services in AWS due to the ephemeral nature of the server instances, which are discarded and replaced with new ones any time configuration changes or an application crashes.
Lightning talks
At the Lightning Talks session, there were three commercial companies pitching ideas and providing insight into their products and services.
PhpStorm for Drupal Development
This is a good-looking and powerful tool which could be useful for us when working with PHP in Drupal. It has Drupal-specific functionality, such as being able to track hooks across your codebase. It also incorporates Git functionality and provides a graphical means of diffing files as well as tracking back through changes. In general it seems quite neatly put together and well thought out.
Interoute Virtual Data Centre – Proven to be the fastest cloud
One of the main advantages pitched was hosting in remote locations to reduce the latency caused by long-distance connections. I cannot see that this is applicable for us.
How to setup Nginx/Varnish Full Page Caching for Drupal
The main point taken from this session is that reverse proxy (Varnish) full page caching is not possible in Drupal 7, but will be in Drupal 8.