This is the third in a short series of posts from DrupalCon 2014 in Amsterdam, where a few members of IS are spending the week. On Wednesday and Thursday I posted some session summaries from Day 1 and Day 2 of DrupalCon.
Yesterday was the final day of conference sessions. After the main auditorium session of Drupal Lightning Talks, the Drupal Coder vs Themer Smackdown perfectly illustrated one of the best things about DrupalCon: the element of fun that can pervade even the driest, most technical discussion. The Smackdown was neither dry nor immensely technical, but Campbell Vertesi and Adam Juran managed to make some serious points about good Drupal development practice whilst wearing martial arts gear and waving weaponry around. Watching their antics was a great way to wake up for a day of DrupalCon talks, and their battle to build a Drupal site from wireframes in just 15 minutes, using either only code or only the theme, showed how Coders and Themers are inherently dependent on each other and are better off hugging than fighting.
Our experiences of some of Thursday’s sessions are outlined below. Two sessions from Day 2 which were not written up in time to appear in yesterday’s post are included here. Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!
Development Processes, Deployment and Infrastructure
How we Quantified the True Business Value of DevOps with Real-life Analysis
This talk did an excellent job of explaining not only the general benefits of DevOps but also why it is good for the business. It focused on six phases for implementing DevOps, arguing that it’s not a question of whether you are using DevOps, but of how much.
- Create your world – Use deployment and configuration management, and standardise across the board.
- Monitor your world – Use effective automated monitoring with easy access to information and a clear notification and reaction process. This does not mean running grep over /var/log/; see the session on logging and monitoring tools.
- Improve your world – Minimise repetition so that you can maximise the time spent on actual issues rather than environmental problems. Nobody should be logging onto servers.
- Test your world – Use automated testing (you shouldn’t need to depend on a developer triggering them) with robust test strategies; this makes customers happy. We can start small, as any test is better than none.
- Scale your world – Have automated responses to increased needs, with predictability, reliability and graceful degradation.
- Secure your world – Use proactive and reactive strategies with intrusion detection and alerts.
We would need to build institutional confidence in our process and what we’re doing, but we can start small. This is firstly a culture change and then a process change, but without Business buy-in the task is complicated and often doomed to failure. The biggest initial wins can be found in configuration management (Puppet), automated deployment (Bamboo, which we’re already using) and easy scaling (OpenStack, or perhaps AWS). By using a quantification framework we can evaluate the benefits of using DevOps processes, though they are not all immediately quantifiable; it’s best to start with easy, universally understood metrics.
Of all the sessions I have seen, this is a true must-watch for anyone who doubts that DevOps is the future; it contained so much useful information that my summary has barely scratched the surface. Watch it during your lunch; you can thank me afterwards.
GitHub Pull Request Builder for Drupal
Note that this session happened on DrupalCon Day 2.
This session described how Lullabot use GitHub pull requests to automatically build a Drupal instance to test changes.
The pull request includes the Jira ticket number and allows you to review a list of all commits, much as Bamboo currently does for us, but you also get a diff of the changes across the whole request.
For their automated deployment Lullabot use Jenkins, which listens out for pull requests to the development branch rather than commits. It then builds a dev environment from the dev branch, including the pull request patch, and when done posts a comment in the pull request with a link to the testing environment. The comment includes instructions for clearing down the test area when finished. This is achieved using a Jenkins plugin listening to an IRC channel; a message of “jdel 12345” will delete the test environment for pull request 12345.
When building the test environment, a recent copy of the live database is used so they can test against real data.
If there are any changes required and more commits are pushed to the pull request, Jenkins rebuilds the test environment and re-runs tests. Lullabot have found this very useful as it lets clients quickly see new features or enhancements without affecting other environments, especially where multiple features are being developed in parallel; each feature has its own test environment derived from that feature branch’s pull request.
Once the pull request is merged you can automatically deploy to another environment. Alternatively, this can be left as a manual job and multiple pull requests included to build a release.
As part of this automated deployment process, Lullabot run automated tests using CasperJS and take screenshots with Resemble.js; the process then sends out a login link for testing using an admin user which exists purely within that test environment.
Lullabot are currently working on a service which supports this automated build of Drupal environments. Currently in private beta, it can be found at http://tugboat.qa/.
Automated Performance Tracking
We should be considering how to measure performance better, not just in terms of which metrics to gather, but in making sure that the measurements we take are repeatable and relevant. This talk was mostly about getting the “core introspection” methods more widely used and extended, since what is currently available is not very useful; that may not seem immediately relevant to the University, but there were some interesting points.
For instance, performance measuring should be part of the project from the beginning. We need to see how performance changes over time – ideally over every commit. This would allow us to evaluate changes in terms of performance: “Yes, sure you can have that feature, but it will make your site run 10% slower”.
There are many different technical challenges with measuring performance.
- Which metrics to take? Different sets will be useful for front end, back end, databases, and external services.
- Which tool set to use? XHProf and webprofiler are currently the most useful, and data can be collected automatically via XHProf-Kit (a minimal manual-profiling sketch follows this list).
- How do we automatically set up relevant “scenarios”? This could actually be the easiest task for us. We could import data from LIVE to Staging and then use Behat to run tests for all the user stories. We could even run them in parallel for realistic load testing.
- Data MUST be collected over time to allow decisions to be made. The finer the granularity, the better, in general.
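To make the XHProf suggestion above concrete, here is a minimal manual-profiling sketch using the xhprof PECL extension. The xhprof_enable/xhprof_disable calls are the extension’s standard API, but the output handling is purely an illustrative assumption; in practice the session recommended automating collection via XHProf-Kit rather than doing this by hand.

// Start collecting CPU and memory data alongside wall time.
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

// ... the code path being measured, e.g. rendering a page ...

// $profile is a nested array of call counts, wall time, CPU and memory
// per caller==>callee pair; persist it so runs can be compared over time.
$profile = xhprof_disable();
file_put_contents('profile-' . time() . '.json', json_encode($profile));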
There are many tools available to help with databases, for example MySQLTuner.pl was mentioned. These could be used as part of the regular support upkeep. The data collected can then be fed back into both the decision making process and the development process.
We should also keep the slow query log and use tools like pt-query-digest to make sure that things are not getting worse. The sooner we find a problem, the better chance we have of figuring out what caused it and fixing it.
In order to keep the measurement relevant we need to make sure that the different environments are equivalent and that all infrastructure is identical; this is a common theme across many DrupalCon sessions this year.
Another challenge in keeping measurements relevant is ensuring that performance is NOT measured on virtual machines. The speaker found that the variation between runs on VMs was too great for the measurements to be useful; to make results comparable, they should be taken on dedicated machines, not virtual ones. This could create problems for keeping infrastructure identical if we rely too heavily on methods that only work with VMs.
At least six statistics need to be kept for each metric over many runs:
- Minimum value
- Maximum value
- Average
- Median
- 95th percentile
- 5th percentile
This is the only way to even out many of the non-code contributors to performance.
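As an illustration of keeping those six statistics, here is a hedged sketch in PHP that computes them from an array of raw measurements for a single metric; the function name and the nearest-rank percentile definition are my own choices for illustration, not something prescribed in the talk.

// Illustrative only: summarise one metric (e.g. response times in ms).
function metric_summary(array $samples) {
  sort($samples);
  $n = count($samples);
  // Nearest-rank percentile; good enough for tracking trends over time.
  $percentile = function ($p) use ($samples, $n) {
    $index = (int) ceil($p / 100 * $n) - 1;
    return $samples[max(0, $index)];
  };
  return array(
    'min'    => $samples[0],
    'max'    => $samples[$n - 1],
    'mean'   => array_sum($samples) / $n,
    'median' => $percentile(50),
    'p95'    => $percentile(95),
    'p05'    => $percentile(5),
  );
}

// Example: metric_summary(array(120, 95, 110, 400, 130));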
The new SensioLabs profiler was also mentioned. It is currently in private beta but promises to be very fully featured; we’ll probably need to wait and see. It will be free for OSS projects, so it will be easy to evaluate.
Building Modern Web Applications with Ember.js and Headless Drupal
Ember.js is a client-side javascript framework for building single-page applications using the MVC architectural pattern. The presence of this session and similar sessions at this year’s DrupalCon reflects the fact that single-page applications are becoming the norm. For speaker Mikkel Høgh, this development is inevitable as the expectations of web users increase. Constant page reloads are not efficient; it’s not just the request/response overhead that is an issue, but the repeated re-rendering of page content, CSS, etc. Ajax calls can help with this, but building an entire application using javascript, jQuery and ajax without a framework does not make for clean, maintainable code. Ember.js, like Angular and Backbone, is a framework designed to address these issues, with a rich object model and automatically updating templates using Handlebars, a semantic templating tool similar to Twig.
This session outlined the core of Ember.js, a full-stack MVC framework in the browser, and demonstrated some key features such as:
- adherence to the concept of “convention over configuration”, which means there is less boilerplate code and more auto-loading;
- “ember-flavoured” Web Components, an intermediary measure designed to alleviate poor browser support for the Web Components standard, which is not yet complete;
- the class-like Object Model, based on Ember.Object, which supports inheritance;
- two-way bindings that allow templates to automatically update with data regardless of where the model is updated;
- automatically updating ‘computed properties’;
- the importance of Getters and Setters, which must be used to allow the appropriate events to fire and update all uses of the data;
- Routing, which determines the structure of the web application by specifying the handlers for each URL;
- naming conventions, the use of which allows the framework to make reasonable assumptions about what an application needs so that it is not necessary to define absolutely everything;
- the Controller, Model and View in Ember.js;
- the ability to rollback data changes in the model that are not saved, allowing for less messy handling of persistent state in the browser;
- the ability to omit an explicit View implementation because Ember.js can make assumptions based on other application configuration to send a default view;
- Ember-Data, the data-storage abstraction layer designed to simplify data management over a REST API using JSON;
- useful tools for working with Ember.js such as EMBER-CLI.
The primary focus of the session was Ember.js itself, but the session did turn to the question of why to use Drupal as a back-end for an Ember.js application. The benefits raised were very similar to those mentioned in other DrupalCon talks on headless Drupal, such as:
- authentication, permissions and user management;
- an easy Admin UI;
- the availability of many modules to provide rich functionality, enabling the Ember.js application developer to focus on the core application.
It was really interesting to hear about an increasingly common approach to addressing the challenges faced by modern web developers. Single-page applications are not an area we have widely explored, but given their prevalence and the increasing richness of the javascript frameworks available, it’s important to have some awareness of this web development technique and this session certainly provided much food for thought. In the context of the University’s new central Drupal CMS, headless Drupal is not something we intend to explore; however, it seems likely that there will in future be local headless Drupal installations in Schools and Units that receive feeds from the central CMS.
If you’re interested in reading more about ember.js, see these pages:
- An In-Depth Introduction to Ember.js (a breakdown of the framework, including clear examples and tutorials)
- Official Guides (the official Ember.js guides)
Front End Concerns
Integration of ElasticSearch in Drupal – the “New School” Search Engine
This session included a presentation and demo of ElasticSearch, a full-text search server and analytics engine based on Lucene with a RESTful web interface and a JSON API (a minimal example query is sketched after the list below). Several of the Drupal modules written to make ElasticSearch available in a Drupal site were also mentioned.
Some key points:
- easy to install and configure with an easy-to-use interface;
- very scalable and distributed in a configurable way;
- replication is handled automatically;
- it is all open source, and since the main application is comparable to a database, the hosting needs will be similar;
- the system contains a method for conflict resolution if multiple users submit the same document to different nodes;
- the query system is more powerful and flexible than other “URL only” systems for creating the queries;
- it can be used with many other modules including watchdog and views;
- it can be used with an ElasticSearch views module to allow querying of indexes of documents that are not in Drupal.
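To give a feel for the JSON query interface mentioned above, here is a minimal sketch of a full-text query sent to ElasticSearch’s standard _search endpoint from plain PHP; the index name, field name and host are assumptions for illustration, not details from the session.

// Illustrative only: a match query against a hypothetical "site" index.
$query = json_encode(array(
  'query' => array(
    'match' => array('body' => 'drupalcon amsterdam'),
  ),
  'size' => 10,
));
$ch = curl_init('http://localhost:9200/site/_search');
curl_setopt($ch, CURLOPT_POSTFIELDS, $query);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$results = json_decode(curl_exec($ch), TRUE);
curl_close($ch);
// The matching documents, with relevance scores, are in $results['hits']['hits'].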
Sites developed by WIT for other areas of the University currently use Solr where more powerful search features are required. Following this session, they intend to try out a cloud-hosted ElasticSearch service, http://www.found.no, with one of their sites that currently uses Solr. This will allow a comparison between ElasticSearch and Solr to determine whether it is a suitable alternative. From the perspective of the University’s central website, it will certainly be interesting to explore the details further and understand how ElasticSearch could be useful. Watch this space!
Project Management and Best Practices
Drupal Lightning Talks
Thursday began with a series of short talks on various technical and non-technical topics. Some, like the Coin Tools Lightning Talk, were technically of interest but not necessarily directly related to our own use of Drupal.
The Unforeseen: A Healthy Attitude To Risk on Web Projects
Steve Parks talked about how management of risk can be a major blocker to project success, highlighting the need to accept that there is risk associated with any project and that trust is of great importance in mitigating its impact.
Druphpet
The talk on the Druphpet project and Puphpet showcased a Puppet-based Vagrant VM suitable for instant and unified configuration of a Drupal environment. The question of how to get a fresh, consistent local development environment running as quickly as possible for the University’s new central Drupal CMS is something we are currently exploring. Puphpet is certainly something we will look into!
Continuous Delivery as the Agile Successor
Michael Godeck’s talk was of particular interest given the adoption within IS of automated deployment tools to support our internal Agile methodology. The subject is closely related to DrupalCon sessions on DevOps with common underlying principles such as the importance of communication across teams and shared ownership.
Godeck talked about how Agile was effective in changing software development because it has “just the right balance of abstraction and detail to take the software industry to a new plateau”. Improvements in quality & productivity are gained by using Agile tools seriously. Agile was designed to address difficulties in responding appropriately to changing requirements throughout the project life-cycle. It is successful in that regard, but the key is to be able to *deliver* the software.
Continuous Delivery practice has the goal of dealing with the delivery question in the way that Agile has dealt with management of risk. The emphasis is on resolving the conflict between the need to deliver quickly and get fast feedback and the need to run complex test suites which can be slow. Build Pipelines break the build up into stages, with the early stages allowing for quick feedback where it is most important, whilst the later build phases give time to probe and explore issues in more detail. Like Agile, Continuous Delivery only provides the best benefits by changing culture across both technical and non-technical teams. The key point is that software delivery should not be a “technical silo”; it belongs to the whole team and with Continuous Delivery, the decision to deliver software becomes a business decision, not a technical one.
We are already using many of the techniques and building blocks that are part of Continuous Delivery. However, the principles of Continuous Delivery are worth exploring further to identify where we may streamline and improve our existing practices.
Lightning Talks 2
This session was a follow-up from the main auditorium Lightning Talks earlier in the day. It comprised two separate short presentations.
Session 1: AbleOrganizer: Fundraising, Outreach and Activism in Drupal
In this talk Dr. Taher Ali (Assistant Professor of Computer Science and IT director of Gulf University for Science & Technology (GUST)) presented on the challenges of convincing senior management to adopt Open Source applications. One of the major concerns was the support and maintenance of Open Source solutions. However, after he presented a convincing argument built around community strength and licence costs, the University now runs the majority of its systems on Open Source applications.
One of the main advantages the University has found is the ease with which Open Source applications integrate with one another.
Finally, it was noted that becoming a gold sponsor of this event was their way of feeding back into the community.
Session 2: eCommerce Usability – The small stuff that combined makes a big difference
Myles Davidson, CEO of iKOS, gave a rapid-fire presentation on how small, subtle changes can collectively make a huge difference to customers and their success.
Some examples are listed below.
- When using forms – make things simple, don’t make your users think!
- Know what your users want and develop the front end towards their needs.
- Make it clear – don’t drive people away through ambiguous messages. Use help text to help, not hinder.
- Where possible use defaults – reduce double keying, e.g. delivery and invoice addresses.
- Be careful with buttons – don’t break the user journey.
- Search – do it properly, do it brilliantly or leave it alone. People will leave your site if search doesn’t work.
- Site recommendations need to be realistic.
- Analytics – the key is that you can’t manage things you don’t measure, and you can measure everything!
12 Best Practices from Wunderkraut
Note that this session happened on DrupalCon Day 2.
At last year’s DrupalCon I saw a presentation from Wunderkraut which featured 45, yes 45, different presenters in 60 minutes. This year they reduced that to a mere 12. Each presenter covered a single best practice compressed into 5 minutes, and not a second was wasted. There were actually only 11, but let’s not be pedantic.
- Risk – adopt a healthy attitude to risk. Trust, training and responsible planning are better than bureaucratic rules to manage risk.
- Predicting the future – Impact Mapping in four words: why, who, how and what. More info at www.impactmapping.org
- Custom Webpage Layouts – put everything on one page!
- How to make complex things simple – your website should mirror your customers’ needs, not your company structure! Keep the content and the user experience consistent.
- Balance theory and practice – using new tools is not only about technologies it is also about approaches.
- Managing Expectations – 70% of projects fail due to poor communication. Keep communicating the minor decisions and use the project steering group to align expectations with stakeholders. Transparency is king!
- If you can’t install it, it’s broken – make sure the workflows work and keep the configuration in code, and remove old code. Old code smells.
- Alignment – let customers come to the community. The community is rich, vibrant and colourful; there’s no danger in encouraging your customers to become part of it.
- Learning an alien language in two years – structure the information and use technology, like Anki, which uses spaced repetition. Remember it is a step-by-step process that takes time – read, listen and talk to people.
- One size fits all – consider all the possibilities. Start with the smaller screens and prioritise the content. Content prioritisation requires good customer knowledge. After prioritisation the content can be re-engineered for the specific user journey. Lastly, this knowledge can be used to create a road map for content development.
- A different kind of bonus system – hugs equal money.
Hardcore Drupal 8
Field API is Dead, Long Live Entity Field API!
With the beta release of Drupal 8 there are major changes to the API and Field API is no exception. This session outlined key aspects of Entity Field API in Drupal 8, some of which are summarised below.
The Entity Field API unifies the following APIs/features:
- Field translation
- Field access
- Constraints/validation
- REST
- Widgets/formatters
- In-place editing
- EntityQuery (see the sketch after this list)
- Field cache/Entity cache
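As a quick taste of one of those unified pieces, here is a hedged sketch of EntityQuery in Drupal 8; the bundle and sort field are illustrative assumptions, and Node::loadMultiple() assumes use Drupal\node\Entity\Node; is in scope.

// Illustrative only: find the ten most recent published pages.
$ids = \Drupal::entityQuery('node')
  ->condition('type', 'page')
  ->condition('status', 1)
  ->sort('created', 'DESC')
  ->range(0, 10)
  ->execute();
$nodes = Node::loadMultiple($ids);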
Many field types are now included in core, removing the need to enable separate modules: for example email, link, phone, date and datetime, and, best of all, entity reference. Having entity reference in core allows for some very neat chaining of entities:
$node->field_ref->entity->field_foo;
And you can get a taxonomy term with:
$node->tags->entity;
All text fields now support in-place editing out of the box too, without the need for additional modules. Even in-place editing of the title is now possible.
Since fields can be attached to block entities in Drupal 8, fieldable blocks are now provided out of the box.
We also get “Form modes” in Drupal 8, which are similar to view modes in that you can change the order and visibility of an entity type’s fields for multiple forms. In Drupal 7 you only have one add/edit form available, which leads to nasty workarounds, such as those required to provide different user edit and user registration forms for the user entity. “Form modes” also make it much easier to have separate create and edit forms and to hide fields in forms, especially using the Field Overview UI, which works along the same lines as the existing view modes UI.
Comment is now a field, which means you can have comments on any entity type.
In Drupal 8, everything is now an entity. There are two types of entity: configuration entities and content entities. Content entities are revisionable, translatable and fieldable. Configuration entities are stored in configuration management, cannot have fields attached, and include things like node types, views, image styles and fields themselves. Yes, fields are entities!
Entities now have the full CRUD capability in core. They are classed objects making full use of interfaces and methods rather than having wrapper functions as in Drupal 7.
The following code example shows how nodes are now handled:
$node = Node::create(array(
  'type' => 'page',
  'title' => 'Example',
));
$node->save();
$id = $node->id();
$node = Node::load($id);
$node->delete();
A newly created node has to be saved before it exists in the database.
Interfaces are now used to extend a base entity interface when creating custom entities:
$node implements EntityInterface
$node implements NodeInterface
NodeInterface extends EntityInterface
This means you have common methods across all entities:
$entity->label();
$entity->id();
$entity->bundle();
$entity->url();
$entity->toArray();
$entity->validate();
if (!$entity->access('view')) {
// ...
}
Having validation as a method in Drupal 8 separates it from form submission and also allows easier validation through REST APIs.
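As a hedged sketch of what that looks like in practice, validation returns a list of constraint violations that can be inspected before saving; the error handling below is illustrative rather than anything recommended in the session.

// Illustrative only: validate an entity outside of any form submission.
$violations = $node->validate();
if ($violations->count() > 0) {
  foreach ($violations as $violation) {
    // Report the message for the offending field.
    drupal_set_message($violation->getMessage(), 'error');
  }
}
else {
  $node->save();
}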
You can have specialised methods for specific entity types:
$node = Node::load($id);
if (!$node->isPublished()) {
  $node->setTitle('published');
  $node->setPublished(TRUE);
  $node->save();
}
There is built in translation support in Drupal 8, which allows the translated output of all fields on an entity to be handled much more easily than is currently possible:
$translation = $node->getTranslation('de');
$translation instanceof NodeInterface;
$translation->getTitle();
$translation->language()->id == 'de';
$entity = $translation->getUntranslated();
In Drupal 8, $node->body[LANGUAGE_NONE][0]['value'] becomes $node->body->value. Much neater!
For multiple instances of a field, you can specify the delta with $node->body->get(0)->value or $node->body[0]->value.
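Field item lists are also traversable, so (as a small illustrative sketch) you can loop over every delta of a multi-value field without touching the underlying array structure:

// Illustrative only: collect the value of every item in a multi-value field.
$values = array();
foreach ($node->body as $delta => $item) {
  $values[$delta] = $item->value;
}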
There is a cheat sheet for the new Entity Field API, available at http://wizzlern.nl/drupal/drupal-8-entity-cheat-sheet.
All in all, these examples demonstrate how changes to the Entity Field API in Drupal 8 will make for much cleaner, more readable and more maintainable code.