SSP New Developments

The Student Systems Partnership (SSP) Team has recently developed and successfully deployed to live the following three projects:

SAC018 UKBA Data Recording – Overseas students can study in the UK after being granted a visa under Tier 4 of the Home Office (formerly the UK Border Agency) points-based system. The main aim of this project was to improve the way the University holds data for Tier 4 students by bringing information from disparate systems into EUCLID, allowing effective reporting on, and monitoring of, Tier 4 students.

Other improvements were made so that CAS (Confirmation of Acceptance for Studies) requests for students extending their studies are generated in EUCLID rather than keyed manually. Tier 4 data is now presented in one place within EUCLID for review by Registry staff at Census points, and copies of passport and visa documents can be stored in, and accessed from, EUCLID.

The software was successfully used during the last week (20th–24th October) for the Tier 4 Census of the 2014-2015 academic session. Census details for around 5,500 students were collected and stored within the SITS database.

SAC019 Direct Admissions Review – The purpose of this project was to review Direct Admissions processes across the University, with a particular focus on PG and VS applicants, examining the process from the decision to submit an application through to the point of decision. The new Direct Admissions application uses the same framework as the one developed as part of the first SSP Agile project, Paperless Admissions. On the technical side, the following technologies and improvements have been added to SITS:

PD4ML – a powerful PDF generation tool that uses HTML and CSS (Cascading Style Sheets) as its page layout and content definition format. The software has been installed on the SITS server and is used to generate the PDF version of the offer letter so that it can be printed on student request. The technology has also been used successfully in the Student Self Service to print documents such as the Certificate of Matriculation, the Higher Education Achievement Report (HEAR) and the Certificate of Student Status (for Council Tax exemption).

e:Vision SSO (Single Sign On) enhancement – the MD5-based method of securing single sign-on links has been replaced with the more secure AES (Advanced Encryption Standard), using a 32-character key.
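
As a rough illustration of what AES-based link security involves – this is a minimal Python sketch using the cryptography package, not the actual SITS/e:Vision implementation, and the token contents are hypothetical:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 32-byte (256-bit) key; in practice this would live in secure
# configuration, never in code.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # AES-GCM requires a unique nonce per message

# Encrypt the payload carried by the SSO link (hypothetical contents).
token = aesgcm.encrypt(nonce, b"uun=s1234567;expires=1414151000", None)

# The receiving end decrypts (and authenticates) with the same key and nonce.
plaintext = aesgcm.decrypt(nonce, token, None)
assert plaintext == b"uun=s1234567;expires=1414151000"

Unlike an MD5 hash, the AES-encrypted payload cannot be read or forged without the key, and the GCM mode used in this sketch also authenticates the message against tampering.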

SAC033 Tier 4 Engagement Monitoring – The aim of this project was to meet the UKVI requirement to be able to report on engagement for all Tier 4 students by September 2014. As part of this project the following functionality has been delivered:

  • Exposure of engagement points from other sources within EUCLID
  • Bulk creation of engagement points within EUCLID
  • Auto-scheduling of engagement points per student within EUCLID
  • Facility for administrators / academics to record engagements within EUCLID
  • Upload of engagement points from spreadsheets into EUCLID from an external source

From the technological point of view, the following solutions were used (the first two are sketched after the list):

  • Creation of a JSON object from a .csv file
  • Validation of the JSON objects against a JSON Schema (http://json-schema.org/)
  • Creation of .csv files from HTML tables
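
As a hedged sketch of the first two steps in Python (the field names and schema are invented for illustration, and the jsonschema package is assumed):

import csv
import json
from jsonschema import validate  # pip install jsonschema

# Hypothetical schema for a single engagement-point record.
schema = {
    "type": "object",
    "properties": {
        "student_id": {"type": "string", "pattern": "^s[0-9]{7}$"},
        "engagement_date": {"type": "string"},
    },
    "required": ["student_id", "engagement_date"],
}

# Build JSON objects from the .csv file...
with open("engagements.csv", newline="") as f:
    records = [dict(row) for row in csv.DictReader(f)]

# ...and validate each one against the schema before loading.
for record in records:
    validate(instance=record, schema=schema)  # raises ValidationError on bad rows

print(json.dumps(records, indent=2))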

Testing Times – Python unittest and Selenium

In the SSP we have been putting together a small suite of performance tests to use as before/after checks for the SITS upgrade. The team includes a mixture of IS and non-IS staff, so it was important to come up with a procedure which was:

  1. Easily maintainable
  2. Accessible to team members who don’t come from a programming background

Selenium was a natural choice, since it is already being used elsewhere in the department, and it has a handy interface which lets you record tests easily.
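
A timing check in Python's unittest with Selenium WebDriver looks something like the sketch below (the URL and threshold are illustrative, not our actual test suite):

import time
import unittest
from selenium import webdriver

class HomepageTimingTest(unittest.TestCase):
    """A simple before/after performance check."""

    def setUp(self):
        self.driver = webdriver.Firefox()

    def tearDown(self):
        self.driver.quit()

    def test_homepage_loads_quickly(self):
        start = time.perf_counter()
        self.driver.get("https://www.example.ed.ac.uk/")  # hypothetical URL
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 5.0, "page took %.1fs to load" % elapsed)

if __name__ == "__main__":
    unittest.main()

Tests recorded in the Selenium IDE can be exported to this Python WebDriver format, which keeps the suite accessible to team members who prefer recording to hand-coding.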


IS at DrupalCon Amsterdam – Day 3

This is the third in a short series of posts from DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. On Wednesday and Thursday I posted some session summaries from Day 1 and Day 2 of DrupalCon.

Yesterday was the final day of conference sessions. After the main auditorium session of Drupal Lightning Talks, the Drupal Coder vs Themer Smackdown perfectly illustrated one of the best things about DrupalCon: the element of fun that can pervade even the driest, most technical discussion. The Smackdown was neither dry nor immensely technical, but Campbell Vertesi and Adam Juran managed to make some serious points about good Drupal development practices whilst wearing martial arts gear and waving weaponry around. Watching their antics was a great way to wake up for a day of DrupalCon talks, and their battle to create a Drupal site from wireframes in only 15 minutes, using either only code or only the theme, showed how Coders and Themers are inherently dependent on each other and are better off hugging than fighting.

Our experiences of some of Thursday’s sessions are outlined below. Two sessions from Day 2 which were not written up in time to appear in yesterday’s post are included here. Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

How we Quantified the True Business Value of DevOps with Real-life Analysis

This talk did an excellent job of explaining not only the general benefits of DevOps, but also why it is good for the business. It focused on six phases for implementing DevOps, saying that it's not about whether you are using DevOps or not, but more a case of how much.

  • Create your world – Use deployment and configuration management, and standardise across the board.
  • Monitor your world – Use effective automated monitoring with easy access to information and a clear notification and reaction process. This does not mean grepping through /var/log/; see the session on logging and monitoring tools.
  • Improve your world – Minimise repetition so that you can maximise the time spent on actual issues rather than environmental problems. Nobody should be logging onto servers.
  • Test your world – Use automated testing (tests shouldn't depend on a developer triggering them) with robust test strategies; this makes customers happy. We can start small, as any test is better than none.
  • Scale your world – Have automated responses to increased needs, with predictability, reliability and graceful degradation.
  • Secure your world – Use proactive and reactive strategies with intrusion detection and alerts.

We would need to build institutional confidence in our process and what we’re doing, but we can start small. This is firstly a culture change and then a process change, but without Business buy-in, the task is complicated and often doomed to failure. The biggest initial wins can be found with configuration management (Puppet), automated deployment (Bamboo, which we’re already using) and easy scaling (OpenStack, or perhaps AWS). By using a quantification framework we can evaluate the benefits in using DevOps processes, though they are not all immediately quantifiable; it’s best to start with easy, universally understood metrics.

Of all the sessions I have seen, this is a true must-watch for anyone who doubts that DevOps is the future; it contained so much useful information that my summary has barely scratched the surface. Watch it during your lunch; you can thank me afterwards.

GitHub Pull Request Builder for Drupal

Note that this session happened on DrupalCon Day 2.

This session described how Lullabot use GitHub pull requests to automatically build a Drupal instance to test changes.

The pull request includes the Jira ticket number and allows you to review a list of all commits, much as Bamboo currently does for us, but you also get a diff of the changes across the whole request.

For their automated deployment Lullabot use Jenkins, which listens for pull requests to the development branch rather than for commits. It then builds a dev environment from the dev branch, including the pull request patch, and when done posts a comment in the pull request with a link to the testing environment. The comment includes instructions for clearing down the test area when finished; this is achieved using a Jenkins plugin listening to an IRC channel, where a message of "jdel 12345" will delete the test environment for pull request 12345.

When building the test environment, a recent copy of the live database is used so they can test against real data.

If there are any changes required and more commits are pushed to the pull request, Jenkins rebuilds the test environment and re-runs tests. Lullabot have found this very useful as it lets clients quickly see new features or enhancements without affecting other environments, especially where multiple features are being developed in parallel; each feature has its own test environment derived from that feature branch’s pull request.

Once the pull request is merged you can automatically deploy to another environment.  Alternatively, this can be left as a manual job and multiple pull requests included to build a release.

As part of this automated deployment process, Lullabot run automated tests using CasperJS and compare screenshots with Resemble.js; the process then sends out a login link for testing using an admin user which exists purely within that test environment.

Lullabot are working on a service which supports this automated building of Drupal environments. Currently in private beta, it can be found at http://tugboat.qa/.

Automated Performance Tracking

We should be considering how to measure performance better, not just in terms of which metrics to gather, but in making sure that the measurements we take are repeatable and relevant. This talk was mostly about trying to get the "core introspection" methods more widely used and extended, since what is currently available is not very useful. That may not seem immediately relevant to the University, but there were some interesting points.

For instance, performance measuring should be part of the project from the beginning. We need to see how performance changes over time – ideally over every commit. This would allow us to evaluate changes in terms of performance: "Yes, sure you can have that feature, but it will make your site run 10% slower".

There are many different technical challenges with measuring performance.

  • Which metrics to take? Different sets will be useful for the front end, back end, databases and external services.
  • Which tool set to use? XHProf and webprofiler are currently the most useful, and can be used to collect data automatically via XHProf-Kit.
  • How do we automatically set up relevant "scenarios"? This could actually be the easiest task for us: we could import data from LIVE to Staging and then use Behat to run tests for all the user stories. We could even run them in parallel for realistic load testing.
  • Data MUST be collected over time to allow decisions to be made. In general, the smaller the granularity the better.

There are many tools available to help with databases; for example, MySQLTuner.pl was mentioned. These could be used as part of regular support upkeep. The data collected can then be fed back into both the decision-making process and the development process.

We should also keep the slow query log and use tools like pt-query-digest to make sure that things are not getting worse! The sooner we find a problem, the better chance we have of figuring out what caused it and fixing it.

In order to keep the measurement relevant we need to make sure that the different environments are equivalent and that all infrastructure is identical; this is a common theme across many DrupalCon sessions this year.

Another problem with keeping measurements relevant is that performance should NOT be measured on virtual machines. The speaker found that the differences between runs on VMs were too great to make the measurements useful; to be comparable, measurements should be taken on dedicated machines, not virtual ones. This could create problems for keeping infrastructure identical if we rely too heavily on methods that only work with VMs.

At least 6 stats need to be kept for each metric over many runs:

  1. Minimum value
  2. Maximum value
  3. Average
  4. Median
  5. 95th percentile
  6. 5th percentile

This is the only way to even out many of the non-code contributors to performance.
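
For what it's worth, these six stats are only a few lines of Python; a minimal sketch (the sample values are invented):

import statistics

def percentile(ordered, p):
    # Nearest-rank percentile on a pre-sorted list.
    rank = int(round(p / 100.0 * len(ordered)))
    return ordered[max(0, min(len(ordered) - 1, rank - 1))]

def summarise(samples):
    # The six summary stats suggested above, for one metric over many runs.
    ordered = sorted(samples)
    return {
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p95": percentile(ordered, 95),
        "p05": percentile(ordered, 5),
    }

# e.g. page render times in milliseconds
print(summarise([120, 132, 118, 250, 127, 119, 480, 125, 122, 130]))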

The new SensioLabs profiler was mentioned. It is currently in private beta but promises to be very fully featured; we'll probably need to wait and see. It will be free for OSS projects, so it will be easy to evaluate.

Building Modern Web Applications with Ember.js and Headless Drupal

Ember.js is a client-side javascript framework for building single-page applications using the MVC architectural pattern. The presence of this session and similar sessions at this year’s DrupalCon reflects the fact that single-page applications are becoming the norm. For speaker Mikkel Høgh, this development is inevitable as the expectations of web users increase. Constant page reloads are not efficient; it’s not just the request/response overhead that is an issue, but the repeated re-rendering of page content, CSS, etc. Ajax calls can help with this, but building an entire application using javascript, jQuery and ajax without a framework does not make for clean, maintainable code. Ember.js, like Angular and Backbone, is a framework designed to address these issues, with a rich object model and automatically updating templates using Handlebars, a semantic templating tool similar to Twig.

This session outlined the main core of Ember.js, a full-stack MVC framework in the browser and demonstrated some key features such as:

  • adherence to the concept of “convention over configuration”, which means there is less boilerplate code and more auto-loading;
  • “ember-flavoured” Web Components, an intermediary measure designed to alleviate poor browser support for the Web Components standard, which is not yet complete;
  • the class-like Object Model, based on Ember.Object, which supports inheritance;
  • two-way bindings that allow templates to automatically update with data regardless of where the model is updated;
  • automatically updating ‘computed properties’;
  • the importance of Getters and Setters, which must be used to allow the appropriate events to fire and update all uses of the data;
  • Routing, which determines the structure of the web application by specifying the handlers for each URL;
  • naming conventions, the use of which allows the framework to make reasonable assumptions about what an application needs so that it is not necessary to define absolutely everything;
  • the Controller, Model and View in Ember.js;
  • the ability to rollback data changes in the model that are not saved, allowing for less messy handling of persistent state in the browser;
  • the ability to omit an explicit View implementation because Ember.js can make assumptions based on other application configuration to send a default view;
  • Ember-Data, the data-storage abstraction layer designed to simplify data management over a REST API using JSON
  • useful tools for working with Ember.js such as EMBER-CLI.

The primary focus of the session was Ember.js itself, but the session did turn to the question of why to use Drupal as a back-end for an Ember.js application.  The benefits raised were very similar to those mentioned in other DrupalCon talks on headless Drupal, such as:

  • authentication, permissions and user management;
  • an easy-to-use Admin UI;
  • the availability of many modules to provide rich functionality, enabling the Ember.js application developer to focus on the core application.

It was really interesting to hear about an increasingly common approach to addressing the challenges faced by modern web developers. Single-page applications are not an area we have widely explored, but given their prevalence and the increasing richness of the javascript frameworks available, it’s important to have some awareness of this web development technique and this session certainly provided much food for thought.  In the context of the University’s new central Drupal CMS, headless Drupal is not something we intend to explore; however, it seems likely that there will in future be local headless Drupal installations in Schools and Units that receive feeds from the central CMS.


Front End Concerns

Integration of ElasticSearch in Drupal – the “New School” Search Engine

This session included a presentation and demo of ElasticSearch, a full-text search server and analytics engine based on Lucene, with a RESTful web interface and features also available through a JSON API. Several Drupal modules that have been written to make ElasticSearch available in a Drupal site were mentioned.

Some key points:

  • easy to install and configure with an easy-to-use interface;
  • very scalable and distributed in a configurable way;
  • replication is handled automatically;
  • it is all open source, and since the main application is comparable to a database, the hosting needs will be similar;
  • the system contains a method which allows for conflict resolution if multiple users submit the same document to different nodes;
  • the query system is more powerful and flexible than other “URL only” systems for creating queries (see the sketch after this list);
  • it can be used with many other modules including watchdog and views;
  • it can be used with an ElasticSearch views module to allow querying of indexes of documents that are not in Drupal.
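
As a hedged illustration of that JSON query DSL (the index name and fields are hypothetical, and a local ElasticSearch node plus the Python requests package are assumed):

import requests

# Full-text search against a hypothetical "pages" index.
query = {"query": {"match": {"body": "graduation ceremony dates"}}}
resp = requests.get("http://localhost:9200/pages/_search", json=query)

for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])

Because the whole query is a JSON document rather than a string of URL parameters, it can express nested boolean logic, filters and aggregations that would be unwieldy in a purely "URL only" system.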

Sites developed by WIT for other areas of the University currently use Solr where more powerful search features are required. Following this session, they intend to try out a cloud-hosted ElasticSearch service, http://www.found.no, with one of their sites that currently uses Solr. This will allow comparison between ElasticSearch and Solr to determine whether it is a suitable alternative. From the perspective of the University’s central website, it will certainly be interesting to explore further and understand how ElasticSearch could be useful. Watch this space!

Project Management and Best Practices

Drupal Lightning Talks

Thursday began with a series of short talks on various technical and non-technical topics.  Some, like the Coin Tools Lightning Talk, were technically of interest but not necessarily directly related to our own use of Drupal.

The Unforeseen: A Healthy Attitude To Risk on Web Projects

Steve Parks talked about how the management of risk can be a major blocker to project success, highlighting the need to accept that there is risk associated with any project and the fact that trust is of great importance in mitigating the impact of risk.

Druphpet

The talk on the Druphpet project and Puphpet showcased a Puppet-based Vagrant VM suitable for instant and unified configuration of a Drupal environment. The question of how to get a fresh, consistent local development environment running as quickly as possible for the University’s new central Drupal CMS is something we are currently exploring. Puphpet is certainly something we will look into!

Continuous Delivery as the Agile Successor

Michael Godeck’s talk was of particular interest given the adoption within IS of automated deployment tools to support our internal Agile methodology.  The subject is closely related to DrupalCon sessions on DevOps with common underlying principles such as the importance of communication across teams and shared ownership.

Godeck talked about how Agile was effective in changing software development because it has “just the right balance of abstraction and detail to take the software industry to a new plateau”. Improvements in quality and productivity are gained by taking Agile tools seriously. Agile was designed to address difficulties in responding appropriately to changing requirements throughout the project life-cycle. It is successful in that regard, but the key is to be able to *deliver* the software.

Continuous Delivery practice has the goal of dealing with the delivery question in the way that Agile has dealt with management of risk.  The emphasis is on resolving the conflict between the need to deliver quickly and get fast feedback and the need to run complex test suites which can be slow.  Build Pipelines break the build up into stages, with the early stages allowing for quick feedback where it is most important, whilst the later build phases give time to probe and explore issues in more detail. Like Agile, Continuous Delivery only provides the best benefits by changing culture across both technical and non-technical teams.  The key point is that software delivery should not be a “technical silo”; it belongs to the whole team and with Continuous Delivery, the decision to deliver software becomes a business decision, not a technical one.

We are already using many of the techniques and building blocks that are part of Continuous Delivery. However, the principles of Continuous Delivery are worth exploring further to identify where we may streamline and improve our existing practices.

Lightning Talks 2

This session was a follow-up from the main auditorium Lightning Talks earlier in the day.  It comprised two separate short presentations.

Session 1: AbleOrganizer: Fundraising, Outreach and Activism in Drupal

In this talk Dr. Taher Ali (Assistant Professor of Computer Science and IT Director of Gulf University for Science & Technology (GUST)) presented on the challenges of convincing senior management to adopt Open Source applications. One of the major concerns was around the support and maintenance of Open Source solutions. However, after presenting a convincing argument built around the community’s strengths and licence costs, the University now runs the majority of its systems on open source applications.

One of the main advantages that the University has found is the ease of integration of Open Source application with one another.


Finally, it was noted that becoming a gold sponsor of this event was their way of feeding back into the community.

Session 2: eCommerce Usability – The small stuff that combined makes a big difference

Myles Davidson, CEO of iKOS, gave a rapid-fire presentation on how small, subtle changes can collectively make a huge difference to customers and their success.

Some examples are listed below.

  • When using forms, make things simple – don’t make your users think!
  • Know what your users want and develop the front end towards their needs.
  • Make it clear – don’t drive people away through ambiguous messages. Use help text to help, not hinder.
  • Where possible use defaults – reduce double keying, e.g. delivery and invoice addresses.
  • Be careful with buttons – don’t break the user journey.
  • Search – do it properly, do it brilliantly, or leave it alone. People will leave your site if search doesn’t work.
  • Site recommendations need to be realistic.
  • Analytics – the key is that you can’t manage what you don’t measure, and you can measure everything!

12 Best Practices from Wunderkraut

Note that this session happened on DrupalCon Day 2.

At last year’s DrupalCon I saw a presentation from Wunderkraut which saw 45, yes 45, different presenters in 60 minutes. This year they reduced that to a mere 12. Each presenter covered a single best practice compressed into 5 minutes, and not a second was wasted. There were actually only 11, but let’s not be pedantic.

  1. Risk – adopt a healthy attitude to risk. Trust, training and responsible planning are better than bureaucratic rules for managing risk.
  2. Predicting the future – Impact Mapping in four words: why, who, how and what. More info at www.impactmapping.org.
  3. Custom Webpage Layouts – put everything on one page!
  4. How to make complex things simple – your website should mirror your customers’ needs, not your company’s! Keep content and the user experience consistent.
  5. Balance theory and practice – using new tools is not only about technologies; it is also about approaches.
  6. Managing Expectations – 70% of projects fail due to poor communication. Keep communicating the minor decisions and use the project steering group to align expectations with stakeholders. Transparency is king!
  7. If you can’t install it, it’s broken – make sure the workflows work, keep the configuration in code, and remove old code. Old code smells.
  8. Alignment – let customers come to the community. The Drupal community is rich, vibrant and colourful; there’s no danger in encouraging your customer to become a part of it.
  9. Learning an alien language in two years – structure the information and use technology like Anki, which uses spaced repetition. Remember it is a step-by-step process that takes time – read, listen and talk to people.
  10. One size fits all – consider all the possibilities. Start with the smaller screens and prioritise the content. Content prioritisation requires good customer knowledge. After prioritisation the content can be re-engineered for the specific user journey. Lastly, this knowledge can be used to create a road map for content development.
  11. A different kind of bonus system – hugs equal money.

Hardcore Drupal 8

Field API is Dead, Long Live Entity Field API!

With the beta release of Drupal 8 there are major changes to the API and Field API is no exception. This session outlined key aspects of Entity Field API in Drupal 8, some of which are summarised below.

The Entity Field API unifies the following APIs/features:

  • Field translation
  • Field access
  • Constraints/validation
  • REST
  • Widgets/formatters
  • In-place editing
  • EntityQuery
  • Field cache/Entity cache

Many field types are now included in core, removing the need to enable separate modules: email, link, phone, date and datetime, and, best of all, entity reference. Having entity reference in core allows for some very neat chaining of entities:

$node->field_ref->entity->field_foo;

And you can get a taxonomy term with:

$node->tags->entity;

All text fields now support in-place editing out of the box too, without the need for additional modules. Even in-place editing of the title is now possible.

Since fields can be attached to block entities in Drupal 8, fieldable blocks are now provided out of the box.

We also get “Form modes” in Drupal 8, which are similar to view modes in that you can change the order and visibility of an entity type’s fields for multiple forms. In Drupal 7 you only have one add/edit form available, which leads to nasty workarounds, such as those required to provide different user edit and user registration forms for the user entity. “Form modes” also make it much easier to have alternate create and edit forms and to hide fields in forms, especially using the Field Overview UI, which works along the same lines as the existing view modes UI.

Comment is now a field, which means you can have comments on any entity type.

In Drupal 8, everything is now an entity. There are two types of entity: configuration entities and content entities. Content entities are revisionable, translatable and fieldable. Configuration entities are stored to configuration management, cannot have fields attached and include things like node types, views, image styles and fields themselves. Yes, fields are entities!

Entities now have the full CRUD capability in core. They are classed objects making full use of interfaces and methods rather than having wrapper functions as in Drupal 7.

The following code example shows how nodes are now handled:
$node = Node::create(array(
  'type' => 'page',
  'title' => 'Example',
));
$node->save();
$id = $node->id();
$node = Node::load($id);
$node->delete();

A newly created node has to be saved before it exists in the database.

Interfaces are now used to extend a base entity interface when creating custom entities:
$node implements EntityInterface
$node implements NodeInterface
NodeInterface extends EntityInterface

This means you have common methods across all entities:
$entity->label();
$entity->id();
$entity->bundle();
$entity->url();
$entity->toArray();
$entity->validate();
if (!$entity->access('view')) {
  // ...
}

Having validation as a method in Drupal 8 separates it from form submission and also allows easier validation through REST APIs.

You can have specialised methods for specific entity types:
$node = Node::load($id);
if (!$node->isPublished()) {
  $node->setTitle('published');
  $node->setPublished(TRUE);
  $node->save();
}

There is built in translation support in Drupal 8, which allows the translated output of all fields on an entity to be handled much more easily than is currently possible:
$translation = $node->getTranslation('de');
$translation instanceof NodeInterface;
$translation->getTitle();
$translation->language()->id == 'de';
$entity = $translation->getUntranslated();

In Drupal 8, $node->body[LANGUAGE_NONE][0]['value']; becomes $node->body->value;. Much neater!

For multiple instances of a field, you can specify the delta with $node->body->get(0)->value or $node->body[0]->value.

There is a cheat sheet for the new Entity Field API, available at http://wizzlern.nl/drupal/drupal-8-entity-cheat-sheet.

All in all, these examples demonstrate how the changes to the Entity Field API in Drupal 8 will make for much cleaner, more readable and more maintainable code.

IS at DrupalCon Amsterdam – Day 2

Yesterday I posted some session summaries from the first full day of DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. DrupalCon Day 2 began on Wednesday with a Keynote from Cory Doctorow, a thought-provoking talk on freedom and the internet, a subject on which some of us had previously heard him speak at the IT Futures Conference in 2013, and one which has significance well beyond the context of Drupal for anyone who uses the web. The broader relevance of Cory’s speech is reflected in many of the sessions here at DrupalCon; topics such as automated testing or developments in HTML and CSS are of interest to any web developer, not just those of us who work with Drupal.  In particular, the very strong DevOps strand at this conference contains much that we can learn from and apply to all areas of our work, not just Drupal, whether we are developing new tools or managing services.

Our experiences of some of Wednesday’s DrupalCon sessions are outlined below.  Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

How Cultivating a DevOps Culture will Raise your Team to the Next Level

The main idea explored in this session was how to create a single DevOps team rather than have separate teams. DevOps is a Movement, a better way to work and collaborate. Rather than make a new team, the current teams should work together with fewer walls between them. The responsibility for adding new features and keeping the site up can then be shared, but this does mean that information needs to be shared between the teams to enable meaningful discussion.

The session was very dense and covered many aspects of implementing a DevOps culture, including:

  • common access to monitoring tools and logging in all environments;
  • the importance of consistency between environments and how automation can help with this;
  • the need for version control of anything that matters – if something is worth changing, it is worth versioning;
  • communication of process and results throughout the project life-cycle;
  • infrastructure as code, which is a big change but opens up many opportunities to improve the repeatability of tasks and general stability;
  • automated testing, including synchronisation of data between environments.

The framework changes discussed here are an extension of the road we are already on in IS Apps, but the session raised many suggestions and ideas that could usefully influence the direction we take.

Using Open Source Logging and Monitoring Tools

Our current Drupal infrastructure is configured for logging in the same way as the rest of our infrastructure – in a very simple, default manner: Apache access and error logs and MySQL slow query logs in the default locations, but not much else. Varnish currently doesn’t log to disk at all, as its output is too vast to search. If we are having an issue with Apache in an environment, this can mean manually searching through log files on four different servers.

Monitoring isn’t set up by default by DevTech for our Linux hosts – we use Dell Spotlight to diagnose issues, but it isn’t something which runs all the time. IS Apps is often unaware that there is an issue until it is reported.

We could solve these issues by using some form of logging host running a suite of tools such as the ‘ELK stack’, which comprises Elasticsearch, Logstash and Kibana.

By using log shipping, we can copy syslog files and other log files from our servers to our log host. Logstash can then filter these logs from their various formats to a standard type, which Elasticsearch, a Java tool based on the Lucene search engine, can then search through. This resulting aggregated data can then be displayed using the Kibana dashboard.

We can also use these log “monitors” to create metrics. Logstash can write out to Graphite, which stores and counts this data as time series, and Grafana acts as a dashboard for Graphite. As well as data from the logs, collectd can populate Graphite with system data such as CPU and memory usage. A combination of these three tools could potentially replace Spotlight for some tasks.
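
As an aside, part of what makes Graphite so easy to feed is its plaintext protocol; a minimal Python sketch (the host and metric name are hypothetical, and the carbon listener’s default port of 2003 is assumed):

import socket
import time

def send_metric(path, value, host="graphite.example.org", port=2003):
    # Graphite's plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
    message = "%s %f %d\n" % (path, value, int(time.time()))
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode("ascii"))

send_metric("drupal.live.apache.error_count", 3)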

We need this. Now. I strongly believe that our current logging and monitoring is insufficient, and while all of this is applicable to any service that we run, our vast new Drupal infrastructure particularly shows the weaknesses in our current practices. One of the 5 core DevOps “CLAMS” values is Measurement, and I think that an enhanced logging and monitoring system will greatly improve the support and diagnosis of services for both Development and Production Services.

Drupal in the HipHop Virtual Machine

When it comes to improving Drupal performance, there are three different areas to focus on. The front end is important as render times will always affect how quickly content is displayed. Data and IO at the back end is also a fundamental part; poor SQL queries for example are a major cause of non-linear performance degradation.

While caching will greatly increase page load times, for dynamic content which can’t be cached the runtime is a part of the system which can be tuned. The original version of HipHop compiled PHP sites in their entirety to a C++ binary. The performance was very good, but it took about an hour to compile a Drupal 7 site and it resulted in a 1 GB binary file. To rectify this, Java Virtual Machine-like Just In Time compilation techniques were introduced for the HipHop Virtual Machine (HHVM), which runs as a FastCGI module.

Performance testing has shown that PHP 5.5 with OPcache is about 15% faster than PHP 5.3 with APC (which is what we are currently using), and HHVM 3.1 shows roughly the same improvement again over PHP 5.5. However, despite the faster page load times, HHVM might not be perfect for our use: it works best with Hack, which uses strong typing, and it doesn’t support all elements of the PHP language. It is still very new and the documentation isn’t great, but this session demonstrated that it is worth thinking about alternatives to the default PHP that is packaged for our Linux distribution. There are also other PHP execution engines: PHPng (on which PHP 7 will be based), HippyVM and Recki-CT.

In IS Apps, we may want to start thinking about using the Red Hat Software Collections repository to get access to a supported, but newer, and therefore potentially more performant, version of PHP.

Content Staging in Drupal 8

This technical session provided a very nice overview of content staging models and how these can be implemented in Drupal 8. There was a presentation of core and contrib modules used, as well as example code. The process runs by comparing revisions and their changes using hashcodes and then choosing whether to push to the target websites.

What I would take from this session is that it will be feasible to build content staging in Drupal 8 using several workflows, from simple Staging-to-Production, through multiple editorial sandboxes feeding production, up to a central editorial hub serving multiple production sites. One understandable caveat is that the source and target nodes must share the same fields, otherwise only the source fields will be updated; this can be addressed with proper content strategy management.

Whilst this session focused on Drupal 8, the concepts and approach discussed are of interest to us as we explore how to replicate content in different environments, for example between Live and Training, in the University’s new central Drupal CMS.

Testing

Automated Frontend Testing

This session explored three aspects of automated testing: functional testing, performance testing and CSS regression testing.

From the perspective of developing the University’s new central Drupal CMS, there were a number of things to take away from this session.

In the area of functional testing, we are using Selenium WebDriver test suites written in Java to carry out integration tests via Bamboo as part of the automated deployment process.  Whilst Selenium tests have served us well to a point, we have encountered some issues when dealing with Javascript heavy functionality.  CasperJS, which uses the PhantomJS headless WebKit and allows scripted actions to be tested using an accessible syntax very similar to jQuery, could be a good alternative tool for us.  In addition to providing very similar test suite functionality to what is available to us with Selenium, there are two features of CasperJS that are not available to us with our current Selenium WebDriver approach:

  • the ability to specify browser widths when testing in order to test responsive design elements, which was demonstrated using picturefill.js, and which could prove invaluable when testing our Drupal theme;
  • the ability to easily capture page status to detect, for example, 404 errors, without writing custom code as with Selenium.

For these reasons, we should explore CasperJS when writing the automated tests for our Drupal theme, and ultimately we may be able to refactor some of our existing tests in CasperJS to simplify the tests and reduce the time spent on resolving intermittent Selenium WebDriver issues.

On the performance testing front, we do not currently use any automated testing tools to compare such aspects of performance as page load time before and after making code changes.  This is certainly something we should explore, and the tools used during the demo, PageSpeed and Phantomas, seem like good candidates for investigation. A tool such as PageSpeed can provide both performance metrics and recommendations for how to resolve bottlenecks. Phantomas could be even more useful as it provides an extremely granular variation on the kind of metrics available using PageSpeed and even allows assertions to be made to check for specific expected results in the metrics retrieved. On performance, see also the blog post from DrupalCon day 1 for the session summary on optimising page delivery to mobile devices.

Finally, CSS regression testing with Wraith, an open source tool developed by the BBC, was demonstrated.  This tool produces a visual diff of output from two different environments to detect unexpected variation in the visual layout following CSS or code changes.  Again, we do not do any CSS regression testing as part of our deployment process for the University’s new central Drupal CMS, but the demo during this talk showed how easy it could be to set up this type of testing. The primary benefit gained is the ability to quickly verify for multiple device sizes that you have not made an unexpected change to the visual layout of a page. CSS regression testing could be particularly useful in the context of ensuring consistency in Drupal theme output following deployment.

I can highly recommend watching the session recording for this session.  It’s my favourite talk from this year’s DrupalCon and worth a look for any web developer.  The excellent session content is in no way specific to Drupal.  Also, the code samples used in the session are freely available and there are links to additional resources, so you can explore further after watching the recording.

Doing Behaviour-Driven Development with Behat

Having attended a similar but much simpler and more technically focused presentation at DrupalCamp Scotland 2014, my expectation for this session was to better understand Behaviour Driven Development (BDD) and how Behat can be used to automate testing using purpose-written scripts. The session showcased how easily BDD can be integrated in Agile projects, because its main driver of information is discussion of business objectives. In addition to user stories, examples were provided to better explain the business benefit.

I strongly believe that this testing process is something to look deeper into as it would enable quicker, more comprehensive and better documented user acceptance testing to take place following functionality updates, saving time in writing long documents and hours of manual work. Another clear benefit is that the examples being tested reflect real business needs and requests, ensuring that deliverables actually follow discussed user stories and satisfy their conditions. Finally, this highlights the importance of good planning and how it can help later project stages, like testing, to run more smoothly and quickly.
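
Behat itself is a PHP tool, but the Given/When/Then style translates directly; as a hedged sketch using Python’s behave package (the scenario and steps are invented for illustration):

# features/steps/certificate_steps.py -- step definitions for a scenario like:
#
#   Scenario: Student prints a Certificate of Matriculation
#     Given I am logged in as a matriculated student
#     When I request a Certificate of Matriculation
#     Then I receive a PDF document
#
from behave import given, when, then

@given("I am logged in as a matriculated student")
def step_login(context):
    context.student = {"matriculated": True}

@when("I request a Certificate of Matriculation")
def step_request(context):
    context.result = "PDF" if context.student["matriculated"] else "error"

@then("I receive a PDF document")
def step_check(context):
    assert context.result == "PDF"

The scenario text is readable by non-technical stakeholders, which is exactly what makes BDD fit so naturally into discussions of business objectives.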

UX Concerns

Building a Tasty Backend

This session was held in one of the smaller venues and was hugely popular; there was standing room only by the start, or even “sitting on the floor room” only. Obvious health and safety issues there!

The focus of this session was to explore Drupal modules that can help improve the UX for CMS users who may be intimidated by or frankly terrified of using Drupal, demonstrating how it is possible to simplify content creation, content management and getting around the Admin interface without re-inventing the wheel.

The general recommended principle is “If it’s on the page and it really doesn’t need to be, get rid of it!”.  Specific topics covered included:

  • using the Field Group module to arrange related content fields into vertical tabs, simplifying the user experience by showing only what the user needs to see;
  • disabling options that are not really required or don’t work as expected (e.g. the Preview button when editing a node) to remove clutter from the interface;
  • using Views Bulk Operations to tailor and simplify how users interact with lists of content;
  • customising and controlling how each CMS user interacts with the Admin menu system using modules such as Contextual Administration, Admin Menu Source and Admin Menu Per Menu.

The most interesting thing about this talk in light of our experience developing the University’s new central Drupal CMS is how closely many of the recommendations outlined in this session match our own module selection and the way in which we are handling the CMS user experience.  It is reassuring to see our approach reflected in suggested best practices, which we have come to through our knowledge and experience of the Drupal modules concerned, combined with prototyping and user testing sessions that have at times both validated our assumptions and exposed flaws in our understanding of the user experience.  As was noted in this session, “Drupal isn’t a CMS, it’s a toolkit for building a CMS”; it’s important that we use that toolkit to build not only a robust, responsive website but also a clear, usable and consistent CMS user experience.

Project Management and Engagement

Getting the Technical Win: How to Position Drupal to a Sceptical Audience

This presentation started with the bold statement that no one cares about the technology, be it Drupal, Adobe, Sitecore or WordPress. Businesses care about solutions, and Drupal can offer the solution. Convincing people is hard; removing identified blockers is the easier bit.

In order to understand the drivers for change we must ask the correct questions. These can include:

  1. What are the pain points?
  2. What is the competition doing?
  3. Most importantly, take a step back and don’t dive into a solution immediately.

Asking these kinds of questions will help build a trusted relationship. To this end, it is sometimes necessary to be realistic, and sometimes to say no. Understanding what success will look like, and what happens if change is not implemented, are two further key factors.

The presentation then moved on to technical themes. It is important to acknowledge that some people have favoured technologies. While Drupal is not the strongest technology, it has the biggest community, and with that huge technical resources, ensuring longevity and support. Another common misconception is around scalability; Drupal’s scalability has, however, been proven.

In the last part of the presentation, attention turned to the sales process, focussing on the stages and technicalities involved in closing a deal. The presentation ended with a promising motto: “Don’t just sell, promise solutions instead.”

Although this was a sales presentation it offered valuable arguments to call upon when encouraging new areas to come aboard the Drupal train.


Looking to the Future

Future-Proof your Drupal 7 Site

This session primarily explored how best to future-proof a Drupal site by selecting modules from the subset that have either been moved into core in Drupal 8 or been backported to Drupal 7.  We are already using most of the long list of modules discussed here for the University’s new Drupal CMS.  For example, we recently implemented the picture and breakpoints modules, both of which have been backported to Drupal 7, to meet responsive design requirements.  This gives us a degree of confirmation that our module selection process will be effective in future-proofing the University’s new central Drupal CMS.

In addition to the recommended modules, migrate was mentioned as the new upgrade path from Drupal 7 to Drupal 8, so we should be able to use the knowledge gained in migrating content from our existing central CMS to Drupal when we eventually upgrade from Drupal 7 to Drupal 8.

Symfony2 Best Practices from the Trenches

The framework underpinning Drupal 8 is Symfony2, and whilst we are not yet using Drupal 8, we are exploring web development languages and frameworks in other areas, one of which is Symfony2. As Symfony2 uses OO, it’s also useful to see how design patterns such as Dependency Injection are applied outside the more familiar Java context.

The best practices covered in this session seem to have been discovered through the bitter experience of the engaging presenter, and many of them are applicable to other development frameworks.  Topics covered included:

  • proper use of dependency injection in Symfony2 and how this can allow better automated testing using mock DB classes (see the sketch after this list);
  • the importance of separation of concerns and emphasis on good use of the service layer, keeping Controllers ‘thin’;
  • appropriate use of bundles to manage code;
  • selection of a standard method of configuration to ensure clarity, readability and maintainability (XML, YAML and annotations can all be used to configure Symfony2);
  • the importance of naming conventions;
  • recommended use of Composer for development using any PHP framework, not just Symfony2.
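
Dependency injection is not specific to Symfony2; as a hedged, language-agnostic sketch in Python (the class and query are invented), constructor injection is what lets a test swap in a mock database class:

import unittest
from unittest import mock

class UserRepository:
    """Depends on an injected database connection rather than creating one."""
    def __init__(self, db):
        self.db = db

    def count_active(self):
        return self.db.query("SELECT COUNT(*) FROM users WHERE active = 1")

class UserRepositoryTest(unittest.TestCase):
    def test_count_active_uses_injected_db(self):
        fake_db = mock.Mock()
        fake_db.query.return_value = 42
        repo = UserRepository(fake_db)  # no real database needed
        self.assertEqual(repo.count_active(), 42)
        fake_db.query.assert_called_once_with(
            "SELECT COUNT(*) FROM users WHERE active = 1")

if __name__ == "__main__":
    unittest.main()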

I have attended two or three sessions which talk about Symfony2 at this conference as well as a talk on using Ember.js with headless Drupal.  It’s interesting to note that whilst there are an increasing number of web development languages and tools to choose from, there are many conceptual aspects and best practices which converge across those languages and frameworks.  In particular, the frequent reference to the MVC architecture pattern, especially in the context of frameworks using OO, demonstrates the universality of this particular approach across current web development languages and frameworks. What is also clear from this session is that standardisation of approach and separation of concerns are important in all web development, regardless of your flavour of framework.

The Future of HTML and CSS

This tech-heavy session looked at the past, present and future of the relationship between HTML and CSS, exploring where we are now and how we got here, and how things might change or develop in future. Beginning with a short history lesson in how CSS developed out of the need to separate structure from presentation to resolve cross-browser compatibility issues, the session continued with an exploration of advancements in CSS such as CSS selectors, pseudo classes, CSS Flexbox, etc. and finally moved on to briefly talk about whether the apparent move in a more programmatic direction means that CSS may soon no longer be truly and purely a presentational language.

There was way too much technical detail in this presentation to absorb in the allotted time, but it was an interesting overview of what is now possible with CSS and what may be possible in future.  In terms of the philosophical discussion around whether programmatic elements in CSS are appropriate, I’m not sure I agree that this is necessarily a bad thing.  It seems to me that as long as the ‘logic’ aspects of CSS are directed at presentation concerns and not business logic, there is no philosophical problem.  The difficulty may then lie in identifying the line between presentation concerns and business concerns.  At any rate, this is perhaps of less concern than the potential page load overhead produced by increasingly complex CSS!

IS at DrupalCon Amsterdam – Day 1

This week, a few members of IS have decamped to Amsterdam to attend DrupalCon 2014, which brings together people involved with all aspects of Drupal for a week of talks, labs, Birds-of-a-Feather sessions and many hours of coding on Drupal 8. The focus of Tuesday’s opening Prenote, a DrupalCon fixture beautifully compered by JAM and Robert Douglass, was life-changing Drupal experiences. Whether or not DrupalCon changes your life, the breadth and depth of sessions and associated discussions to be found here this week is undeniably absorbing. For those of us who are currently working on the University’s new central Drupal CMS, DrupalCon provides a unique opportunity to both validate the approach we are taking with our development processes, coding standards and infrastructure, and to discover new modules, best practices and techniques which will benefit our new Drupal CMS.

After Dries’ Keynote, the conference kicked off in earnest. We have crowdsourced some of the highlights of our first day of sessions below. Many thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos, for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. The overriding impression from discussing the sessions we have all attended is that Drupal is often at the bleeding edge of development tools and technologies by virtue of the commercial and community pressures in the Open Source environment. Drupal’s presence as a tool in our development portfolio both challenges our own best practices and introduces new, innovative means of developing quality applications which anticipate the needs of an increasingly diversified technological world.

Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

State of Drupal DevOps

Whilst the focus of this session was Drupal DevOps, Kris Buytaert’s talk applies more generally, covering the reasons why DevOps is not just a team or C.I. or Puppet. It is a cultural attitude that requires long-term thinking and a degree of co-operation from all teams. It’s not just about the lifetime of a particular project, but the lifetime of an application or service. The impact of putting off changes to deployment strategy is an increase in the “technical debt”; it only defers the issues, which then become support problems.

The proposed approach is mainly to expand best practices developed for writing code, such as version control and testing, down into the infrastructure and up into the monitoring tools. For example, the importance of repeatability is heavily emphasised: everything needs to be versioned, and this includes infrastructure as well as artifacts. Using such DevOps techniques, we can better map and evaluate the impact of changes. The payoff should be safer, quicker sites or applications that do what people want, and developers who get more feedback about why things went wrong (“It works on my machine.” is never an excuse).

Deploying your Sites with Drush

This session covered the Drush Deploy plugin, which allows you to create drush configuration files to describe each of your environments so you can deploy consistently to all servers with one command, reducing human error.

It was interesting to contrast the approach of this plugin with Capistrano or services like Bamboo or Jenkins. We use drush heavily in the automated deployment process for the University’s new central Drupal CMS, but Bamboo still handles the code deployment to ensure consistency across environments. We use Ant scripts to describe the pre- and post-deployment tasks for Bamboo to carry out, whereas for the Deploy plugin these are specified in a drush configuration file and are limited to drush functions. It was also interesting to compare their approach to handling code rollbacks with our plans in this area, even though they do not explicitly include reinstating the database as part of that process. However, we are in a different position, as normally we would roll back immediately upon a failed deployment rather than hours or days later, when content could have changed significantly.

The importance of adopting an appropriate Git workflow to support deployment where there is a branch for live deployment which is always deployable was also discussed. Having separate live and dev branches is very important, and making use of separate branches for hotfixes and features is recommended: http://nvie.com/posts/a-successful-git-branching-model/.

WF Tools – Continuous Delivery Framework for Drupal

WF Tools is a Continuous Delivery framework for Drupal sites, which was also shown off at DrupalCamp Scotland earlier this year. It is used to deploy code and configuration from git into a freshly spun-up Development virtual host environment. These code changes are all separate “jobs”, working from git branches; WF Tools allows you to tag these jobs with JIRA issues and trigger runs from a build tool such as Jenkins or Bamboo. After a successful run, the Development environment can be assigned to another user for peer review, and their GUI view shows a change log and git diffs for comments and approval or rejection. Any approved jobs continue along the Dev/Test/Staging/Live deployment pipeline.

WF Tools is an interesting solution which could wrap around our existing processes quite well, and the Pfizer implementation of the GUI looks good. However, as we’re already using a lot of Bamboo functionality, and we’re only developing one Drupal site centrally at the moment, it might not be perfect for our current requirements.

Understanding the Building Blocks of Performance

This talk by Josh Waihi covered ways in which a system can be built to fulfil a client’s needs using vertical and horizontal optimisation techniques, supplemented by profiling tools to help find and fix bottlenecks. ‘The building blocks of performance’ were broken down into three categories – understand, build and optimise:

  • understanding the resource requirements before an application is built;
  • building the infrastructure in a suitable manner which balances complexity with performance;
  • using logging and load testing to optimise the performance of the Drupal system.

Vertical optimisation involves hosting the components of the system – load balancer, HTTP cache, web server with PHP and Drupal, file storage, database and database cache – on separate servers. Assigning more resources to a site in this way also makes bottlenecks easier to locate. Once vertical optimisation is done, horizontal optimisation can begin: in general the web server is duplicated many times, with every instance referring to the same database and shared files, and the main limiting factor is cost. Finally, load testing and profiling tools help to ensure your system is using the right amount of resources.
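
To make the vertical split concrete, here is a minimal, hypothetical Drupal 7 settings.php fragment showing where a separate database server and the HTTP cache tier in front of the web servers are declared (hostnames, credentials and addresses are invented):

    <?php
    // sites/default/settings.php – a minimal sketch, not our real config.

    // Point Drupal at a dedicated database server rather than localhost.
    $databases['default']['default'] = array(
      'driver' => 'mysql',
      'database' => 'drupal',
      'username' => 'drupal',
      'password' => 'secret',
      'host' => 'db.example.ac.uk',
    );

    // Tell Drupal it sits behind an HTTP cache / load balancer, so that
    // client IP addresses are read from the X-Forwarded-For header.
    $conf['reverse_proxy'] = TRUE;
    $conf['reverse_proxy_addresses'] = array('10.0.0.10', '10.0.0.11');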

Unlike other sessions, where new, cutting-edge techniques were discussed for future use with Drupal, this session was beneficial because it vindicated techniques we are already using – all aspects of our current practice validated by an expert from Acquia:

  • scaling our infrastructure vertically before we scale it horizontally;
  • using business rather than technical metrics for our performance and capacity testing;
  • employing different caching tiers to boost performance.

However, using PHP-FPM instead of mod_php, and diagnostic tools such as XHProf, are new ideas I’ll bring back to DevTech.

Front End Concerns

Panels, Display Suite, and Context – oh my! What to use when, why and how

Yes, it did open with a pic of Dorothy et al! This was a very well attended session covering when and why you should (or could) use the different layout options available for Drupal. With clever use of a ‘Garfield’ rating system it was made clear that all of them have pros and cons depending on the use and complexity of the site.

Here’s the “Janet and John” bit…

  • Context provides flexibility by extending core blocks to provide reusable blocks and regions, but Blocks are still hard to maintain and there is only one set of regions for layouts.
  • Panels is powerful with a high level of granularity and a default variant which provides a failover structure. However, the codebase for Panels is heavy and the functionality provided may be overkill for easy layouts.
  • Display Suite has a simpler UI and similar flexible layouts to Panels, but only Entity layouts are supported and because there is no structure across different layouts, things can easily become complicated.

The consensus seems to be that Display Suite ticks most boxes but each method has its merits. You may want to do some research to find the best for your particular project.

Drupal 8 breakpoints and responsive images

Although this session was titled for Drupal 8, both modules covered – picture and breakpoints – have been backported to Drupal 7, and we have recently implemented them to support the theme for the University’s new central Drupal CMS. At this point we serve only four variants of the group banner image, one per breakpoint defined in the theme.

The session also covered the picture module’s use of the sizes attribute to serve optimised versions of images according to the viewport width, in steps between the breakpoints where required. This is not something we are currently implementing, but we will be in the coming weeks as we approach the initial distribution release of our new Drupal theme.

The State of the Front End

The Front End is moving forward faster than anything else in Drupal. Display targets used to be 640 × 480 or 1024 × 768 for IE and Netscape, using tables; easy! Now HTML5, CSS, JS, responsive design and more add significant complexity to Front End development, and these are not Drupal skills!

Frameworks come and go (e.g. 960 grid, Blueprint, Bootstrap), and we may catch up or even get ahead – but not for long, when the Front End changes so fast and techniques fall in and out of fashion. However, the Front End is A Thing, and it is pushing Drupal forward in the post-responsive world. There are multiple frameworks for everything and too much scope to play at design for design’s sake; the goal is truly device-independent design. You have to accept that you may be ahead, but only ever for a short time; fortunately, there are many tools out there to support the drive to keep up with the rapidly changing world of Front End development.

Performance

Turbocharging Drupal syndication with Node.JS

Where you have to generate feeds from Drupal for a high volume of requests, caching is sometimes not an option: requests from downstream clients include per-second timestamps (to retrieve things that have changed since the last request), or are user-filtered requests which are unlikely to be repeated.

The approach taken here was to use an indexer module to maintain de-normalised copies of the Drupal data in a MongoDB database, optimised for delivery, and to put a fast Node.JS REST API in front. In their case study, Drupal could be as slow as around 1 request per second where many records were being returned in one request, whereas Node.JS could handle 800–3,000 requests per second. Response times dropped from up to a minute to 80–150ms.
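
The indexer side of such an approach might look roughly like the Drupal 7 sketch below, which writes a flattened copy of each node into MongoDB whenever it is saved. This is my own illustration, not the presenters’ code: the module name, collection name and field list are all hypothetical, and it uses the legacy PHP Mongo driver that was current at the time of writing.

    <?php
    // feed_indexer.module – hypothetical sketch of a de-normalising indexer.

    /**
     * Implements hook_node_update().
     * (A real module would implement hook_node_insert() in the same way.)
     */
    function feed_indexer_node_update($node) {
      $mongo = new MongoClient('mongodb://mongo.example.ac.uk');
      $collection = $mongo->selectDB('feeds')->selectCollection('nodes');

      // Flatten the node into a single document so the Node.JS API can
      // serve it without joins or further Drupal bootstrapping.
      $doc = array(
        '_id' => (int) $node->nid,
        'title' => $node->title,
        'type' => $node->type,
        'changed' => (int) $node->changed,
      );
      $collection->update(array('_id' => $doc['_id']), $doc, array('upsert' => TRUE));
    }

The Node.JS REST API then queries this collection directly, never touching Drupal on the request path.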

To support developing for many parallel, asynchronous requests in Node.JS, there are npm packages, such as promise libraries, to help.

Getting content to a phone in less than 1000ms

For a site to respond quickly enough that a user doesn’t get bored waiting and go elsewhere, it’s generally accepted that pages should be served in under a second. This can be challenging enough for complex Drupal pages, but there are added constraints to consider with mobile devices on slow networks. The DNS lookup, TCP connection, TLS handshake and HTTP request over 3G can come to 800–1,000ms alone, before you add the time Drupal takes to serve the content and the time taken by the client to paint the page. Given that in many countries mobile devices are now the primary means of accessing the internet, this is becoming ever more important.

When painting the page there are blockers that delay rendering of the content. In particular, moving JS to the footer wherever possible and using async and defer was recommended; the magic module can help achieve this, and any critical JS can be inlined.
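
In Drupal 7 terms, moving a non-critical script to the footer and deferring its execution is a small change at the point where the file is added (the module and file names below are hypothetical):

    <?php
    // Load a non-critical script at the bottom of the page, and ask the
    // browser to defer execution until the document has been parsed.
    drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/analytics.js', array(
      'scope' => 'footer',
      'defer' => TRUE,
    ));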

CSS can also be split out into what’s required for mobile devices and then LoadCSS can be used to load the remainder without blocking the initial rendering of the page. However, it is not always practical to achieve this.

With TCP, most of the time taken is due not to limitations in bandwidth but to the latency of the round trips involved in initiating each request (handshakes, etc.). CDNs can help by putting content closer to the client and by allowing more parallel connections (because assets are split across different domains), but this is expensive and will likely be blocked in China. Aggregating files, spriting so there are fewer files to load, even inlining very small assets – all of these help, as does removing things you don’t need. The target is to get the first response within 10 packets, about 14.6k (RFC 6928 raises TCP’s initial congestion window to 10 segments of roughly 1,460 bytes each), although this is extremely difficult and very few sites achieve it.

Preparing for SPDY/HTTP 2.0 can conflict with some of the HTTP 1.1 optimisations above, such as domain sharding and concatenation. Some work has already started on supporting SPDY features, such as server push, in Drupal.

Project Management and Engagement

Selling Agile

In this experimental session, Vesa Palmu, CEO of Wunderroot, shared some of the lessons he has learned over the past 10 years of using Agile on IT projects.

A core difficulty in getting agreement to use Agile can be the lack of trust between parties, often rooted in a lack of understanding of what Agile actually means. How does using Agile translate for the business, in terms of how it changes their activity during projects? Customers are unsure about the cost versus what will actually be delivered, or are unwilling to engage by providing a product owner.

One of the key messages is that in order to sell Agile successfully, one needs to focus on selling its benefits. Some of those benefits:

  1. Collaborative development approach
  2. Testing development as it progresses
  3. Creating value faster through multiple deliveries
  4. Delivering better quality
  5. Making better decisions along the way

Mixing Agile and Waterfall should be avoided: the benefits of Agile cannot be realised when project teams are in two mindsets.

Equally, not all project sizes are suitable for Agile, as illustrated by the following slide:

[Slide: Agile project sizes]

The Myth of the Meerkat: Organising Self-Organising Teams

In this session, speaker Jason Coghlan examined whether the self-organising team is a reality, especially in a commercial or public sector Drupal services environment, focusing on tips for research rather than specific examples of how to coach and build self-organising teams.

The session covered the need to differentiate between control and accountability: the former must be relinquished; the latter, especially individual accountability, is extremely important. In the context of self-organising teams, “Leaders are required, managers are optional”. George will take care of it! The conclusion is that whilst self-organising teams are not suitable for all projects, they are an ideal approach to technology-driven projects where a clear product or solution is delivered, focusing on value and return.

Engaging UX and Design Contributors

This was a really interesting session from design researcher Dani Nordin, highlighting the challenges of integrating user testing, UX and design guidelines into an already established community such as Drupal’s. It is clearly difficult to get developers and designers cooperating in such an open environment, so one thing I took from this session is that there is a strong need to integrate UX into the process of module contribution. Even though it might sound restrictive (especially for people contributing in their own time), it would pave the way for a more user-friendly and intuitive Drupal UX. Drupal 8 might be a good opportunity to explore this as well.

Looking to the Future

Drupal 8: The Crash Course

Having attended DrupalCon last year as a relative Drupal newbie, and with most of our current internal development focused on Drupal 7, I approached Larry Garfield’s technical Drupal 8 overview session with mild trepidation. I needn’t have worried. This well-structured introduction to Drupal 8 and its use of Symfony2 was very accessible. The code samples were clear and progressively illustrated each concept, giving a good high-level overview of what to expect from Drupal 8. Coming from a Java/C++/OO background, I can say that it truly seems “Drupal 8 is finally not weird”. Lots of familiar code, even in the context of an unfamiliar framework!

Twig and the new Drupal 8 Theme system

The current Drupal 7 theming system involves a mixture of markup generated by modules and by the theme, primarily through theming hooks (functions). This means the front-end developer does not have full control of the markup (and CSS) that is output.
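
For illustration (my own generic example, not the speaker’s): to change even simple markup in Drupal 7, a themer has to re-implement the whole theme function in PHP, for instance in the theme’s template.php:

    <?php
    // template.php in a hypothetical Drupal 7 theme called "mytheme":
    // overriding core's theme_breadcrumb() just to change the wrapper markup.
    function mytheme_breadcrumb($variables) {
      $breadcrumb = $variables['breadcrumb'];
      if (empty($breadcrumb)) {
        return '';
      }
      // All markup lives in PHP, out of the front-end developer's reach
      // unless they are comfortable writing theme functions.
      return '<nav class="breadcrumb">' . implode(' » ', $breadcrumb) . '</nav>';
    }

In Drupal 8 the equivalent markup moves into a Twig template file, so no PHP is needed.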

By converting all the hook functions to Twig templates in the new Classy theme for Drupal 8, themers gain full control of all the output. The hope is that this will encourage front-end developers to engage with Drupal in the future, and make it easier for web design companies to do so too.

The session demonstrated changing menus and pagers, two of the most complex components to theme since they are currently buried in module functions. In Classy these elements are themed in single Twig template files, which can easily be rewritten to change CSS dependencies and markup without needing to know the inner workings of the core pager and menu functions.