Accelerating End User Testing

Do you develop software?

I suspect if you are looking at blogs in this space then that’s probably true, or at least it might be an area of interest for you.

Do you carry out End User Testing?

Well again, if you develop software then in most cases you are likely to be doing so with users in mind.

So End User Testing can be a pretty difficult thing to get to grips with. In Applications Division we have adopted a tool called TestRail to help us and, frankly, it’s doing an impressive job.

http://www.gurock.com/testrail/

TestRail allows us to do things that have previously been terrifically hard and complex to coordinate. It allows us to understand how User Testing is going and to rapidly get defects straight back into development as they surface in our testing processes. Tracking progress is done through a very intuitive GUI, and when a tester finds a defect a JIRA issue is created immediately. No messing about, straight back to development. This is having a really positive impact on our productivity.

Often with big systems there can be quite literally hundreds of workflows and user test cases that need to be validated by teams of End User Testers. Keeping track of progress, or actually more importantly lack of progress, is a project manager’s nightmare. Traditionally, teams have used things like spreadsheets, email and word of mouth to know how far testers are getting on with their test scenarios and plans. Often, people unfortunately become distracted or have their plans interrupted or perhaps they might be unexpectedly absent from work. Knowing someone has not managed to complete a set of tests is crucial in making sure that things are not disappearing down rabbit holes and so that projects can complete in time.

Getting issues straight back to development allows us to start working on the problem or defect straight away; we don’t need to wait until the test run has completed. Having the tester create the JIRA issue at the moment the defect is found really speeds things up.

Surfacing this information has often been a really difficult thing to do, but TestRail helps to address it. It is easy to interpret and allows the project to adapt to the current situation in a way that would not have been possible previously.
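
To give a flavour of the integration, here is a minimal sketch in Java using TestRail’s published API binding. The instance URL, credentials, run and case IDs and the JIRA key are all hypothetical, and in practice the tester drives this through TestRail’s own JIRA integration rather than hand-written code:

    import com.gurock.testrail.APIClient;
    import java.util.HashMap;
    import java.util.Map;

    public class RecordFailedTest {
        public static void main(String[] args) throws Exception {
            // Hypothetical TestRail instance and credentials
            APIClient client = new APIClient("https://example.testrail.io/");
            client.setUser("tester@example.ac.uk");
            client.setPassword("api-key");

            // Mark case 42 in test run 7 as Failed (status_id 5) and reference
            // the JIRA issue the tester has just raised in the defects field
            Map<String, Object> result = new HashMap<>();
            result.put("status_id", 5);
            result.put("comment", "Payslip total incorrect at step 3");
            result.put("defects", "HRFIN-123");
            client.sendPost("add_result_for_case/7/42", result);
        }
    }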

We have introduced TestRail as part of our Digital Transformation programme.

You can find out more here;

http://www.projects.ed.ac.uk/project/dti002

Currently we are using TestRail in Human Resources and Finance. We are extending its use to link into both agile and waterfall projects and expect to adopt this across the entire range of projects we undertake in Apps.

If you fancy finding out more about how we are getting on with this, please do get in touch.

Iain

Combining pdfs and images in ColdFusion using Java and iText

pdfs, images and text combined in real time

We were writing a ColdFusion application that allows users to claim expenses and add PDFs or images of receipts electronically. The client requested that a single PDF for each claim be created which included both the claim details and ALL electronic attachments.

Using <cfpdf> chewed memory and created huge files

ColdFusion has built-in PDF functionality, but we found that adding BOTH images and merging other pdfs on the fly had performance issues and created documents with bloated file sizes.

Using iText provided a solution
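
As a flavour of the approach, below is a minimal sketch of the merging logic using iText 5’s Java API (class names and path handling are illustrative, not the actual application code; from ColdFusion the same classes can be driven via createObject()). Importing PDF pages one reader at a time and freeing each reader keeps memory use flat, and scaling images down avoids the bloated output we saw with <cfpdf>:

    import com.itextpdf.text.Document;
    import com.itextpdf.text.Image;
    import com.itextpdf.text.pdf.PdfContentByte;
    import com.itextpdf.text.pdf.PdfImportedPage;
    import com.itextpdf.text.pdf.PdfReader;
    import com.itextpdf.text.pdf.PdfWriter;
    import java.io.FileOutputStream;
    import java.util.List;

    public class ClaimPdfBuilder {
        // Appends each attachment (image or PDF) to a single output PDF
        public static void build(List<String> attachments, String outFile) throws Exception {
            Document document = new Document();
            PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream(outFile));
            document.open();
            PdfContentByte cb = writer.getDirectContent();
            for (String path : attachments) {
                if (path.toLowerCase().endsWith(".pdf")) {
                    // Copy pages across one reader at a time, freeing each
                    // reader afterwards so memory use stays flat
                    PdfReader reader = new PdfReader(path);
                    for (int i = 1; i <= reader.getNumberOfPages(); i++) {
                        document.setPageSize(reader.getPageSizeWithRotation(i));
                        document.newPage();
                        PdfImportedPage page = writer.getImportedPage(reader, i);
                        cb.addTemplate(page, 0, 0);
                    }
                    writer.freeReader(reader);
                    reader.close();
                } else {
                    // Scale receipt images to fit the page instead of
                    // embedding them at full resolution
                    Image img = Image.getInstance(path);
                    img.scaleToFit(document.getPageSize().getWidth() - 72,
                                   document.getPageSize().getHeight() - 72);
                    document.newPage();
                    document.add(img);
                }
            }
            document.close();
        }
    }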

Continue reading “Combining pdfs and images in ColdFusion using Java and iText”

Positive side effects of automated testing

We’ve recently been doing more automated testing in the SSP, and with that has come a lot of the benefits you might expect: an ability to spot faults as we make changes, and a guarantee of functionality working as prescribed among them. But I’ve come across a whole bunch of bonus benefits we get, some related to managing stories and projects, and some as personal gains for me.

Continue reading “Positive side effects of automated testing”

Render Conference 2016 sessions

In April this year I attended the Render Conference, a rebrand and reorganisation of 2015’s jQuery UK Conference. The name change signifies better the broader content of the conference, covering all sorts of front-end topics from CSS and JavaScript to form content and development philosophy.

In this post I’m going to go through each of the talks over the two days and summarise what the speakers talked about. I’ll also add links to the slides and videos as they become available, for those who want to look a little bit deeper. In a separate post, I’ll talk about the lessons we can learn at the University of Edinburgh, and what we can start doing today.

Continue reading “Render Conference 2016 sessions”

IS at DrupalCon – Sessions Day 2

DrupalCon Barcelona 2015, Keynote Day 2 with Nathalie Nahai

On Tuesday and Wednesday I posted some session summaries and comments from DrupalCon 2015 in Barcelona, where a few colleagues and I are spending this week.

Day 2 of DrupalCon began with a short session celebrating those involved with Drupal, i.e. partners and contributors, highlighting the importance of Drupal community members contributing through sprints, followed by the morning’s Keynote with Nathalie Nahai. Nathalie spoke about web psychology, providing a scientific perspective on how people see and react to different aspects of web content presentation. Admittedly the theme was more applicable to those Drupal users who deal with marketing aspects of websites, since it was concerned with how to get, and keep, the attention that you desire from your online audience. However, the principles apply equally well to any organisation or institution interested in engaging in the most effective way with visitors to their website.

The day continued with many more sessions across a broad range of topics.  We also took part in a couple of Birds of a Feather sessions, one of the great features of DrupalCon, allowing members of the Drupal community with a common background or interest an opportunity to discuss face-to-face the issues they deal with, sharing their knowledge and experience and exploring potential strategies to resolve those issues.

Our experiences of some of Wednesday’s DrupalCon sessions are outlined below. Thanks to Riky, Tim, Andrew, Adrian and Chris for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited notes are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon YouTube channel.

How Changing our Estimation Process Took our Project Endgame from WTF to FTW

This session covered estimation and how to turn this critical project component from something that often leads to a project being perceived as a failure, into an accurate and more reliable part of the project process. A common issue within some organisations is that the person deciding the budget does not have the in-depth knowledge of the project deliverables required to make sensible decisions. A lot of work goes into creating an initial estimate without knowing the details of the deliverables; the “bid”, or in UoE terms the proposal estimate, needs to be made on the objectives, and it should be accepted that this is what the estimate reflects.

During the estimation process, it’s crucial to ask the right questions, to review and to explain the process outlined below (creating transparency), to define scope (get the business to say what they want), and to discuss and agree milestones.

The next part in the process is the discovery, which is best done with UX sketches, but this needs a designer! Rapid iterative design should be the approach, with sketch approval, continuing early tech planning with sketches in preference to wireframes; these sketches are not a full set of requirements but enable rough estimates to be produced with a goal of +/- 40% accuracy. This provides an early indication of feature complexity and expedites prioritisation before moving on to wireframes, and long before anything is actually built. Wireframes must be fully approved before beginning the next stage.

The next stage is full Tech Planning, which involves a larger group of people with a goal of achieving estimates with 10% accuracy, adding implementation notes. This stage comprises several 1.5–2 hour meetings over a couple of weeks where the deliverables are broken down into tasks and these are estimated in hours. These sessions involve lots of discussion using a kind of low-level poker estimation, but do not involve the business. The project manager can then create a budget breakdown based on the estimates; the results, which are now deliverables, are then shared with the business and recommendations discussed with them.

If the estimation is over the available budget at this stage, the options are clear: descope, share work or find more budget.

During the build pay close attention to:
1.    Large overspend on individual tasks;
2.    New requirements (these need prioritisation!);
3.    Weekly budget reviews and status check ins;
4.    Demo as often as possible as this gets customers excited when they can see a concept come to life!

Finally, at project wrap-up the project should be within 10% of the estimate. It is worth noting that this process doesn’t really work when the client provides UX, or for time and materials projects – in that case just go Agile!

However, this approach does present two main challenges: it requires partner buy in, and essential meetings are difficult to schedule.

My takeaways from this session are the need to review and constantly update estimates as the project moves forward, the importance of prioritisation and defining what is actually needed, and creating transparency throughout this process. On suitable projects, this could involve additional soft milestones for estimation. Another takeaway relates to the estimation process itself, to make it more accurate. To do this we need to have more detail before committing to delivery. As project managers, we need to be strong and not press ahead when there is insufficient detail; without this, there is a tendency to estimate at an abstract optimistic level.

Drupal Extreme Scaling

The old way to boost Drupal performance is to use the following technologies: memcache, APC/OPcache, Varnish, and server redundancy. We currently use all of these. We can now utilise elastic computing and containerisation to boost performance – as the presenter put it:

“this is not future technology, but present technology”

The speaker’s team was tasked to provide a minimum-cost, automated, no-downtime hosting platform for 30 to 100 thousand Drupal sites. To do this, Amazon Web Services infrastructure was used to run a stack consisting of Docker containers, Nginx, MySQL and MongoDB, with Ansible for configuration management, a Node.js administrative application, Apache Mesos as an abstraction layer, and Marathon and Chronos running on Mesos to allow it to control Docker containers and scheduled tasks. The end result was a platform which could perform EC2 auto scaling and spin up Amazon Machine Images containing the three Docker containers used (one for the admin application, one for Varnish, and a Drupal container used for every site), while databases were shared, one per 500 sites, to minimise their overheads using a clever method of table prefixing.

This was a fascinating, very technical talk which I’d recommend watching to anyone with an interest in successfully solving a huge, complex infrastructural project with modern technologies. Although we only have one Drupal site to run and not tens of thousands, some very useful advice was given based on the presenter’s experiences: Nginx is very flexible and PHP-FPM increases performance significantly (as we found with our own testing); centralised logging is vital; always use authentication on REST APIs; and the combination of cloud hosting and containerisation was excellent. If the task were to be repeated today though, one would likely use the AWS EC2 Container Service instead of Mesos and Marathon. Possibly the most important thing to remember:

“Lazy DevOps is the best DevOps!”

Headless D8 in HHVM plus Angular.js and some other things you can do with Platform.sh

Platform.sh is a deployment platform originally developed for PHP applications which now also supports Python and Node.js. It integrates with whatever Git repository you want, as well as HipChat, Jira and other tools, allowing multiple applications to be pushed into one build, e.g. front and back-end applications, and to appear under separate hostnames. Platform.sh handles all the DNS and Varnish config to create these pop-up environments and replicates the live database and configuration into your development area. It can also sanitise the database as it’s moved, stripping out user passwords, email addresses, etc.

One really useful feature is the ability for developers to specify the version of PHP and control the php.ini file in YAML files. You can also specify which database should be set up for the environment.

The ability to control this non-code configuration and replicate a complex build process for all developers without them all needing the level of expertise to set up their own environment comes in very useful when working with multiple teams. This is especially true if external developers who don’t know your environment are involved.

The session also covered some of the performance gains that can be achieved running Drupal on HHVM (HipHop Virtual Machine) rather than PHP 7 or PHP 5.

Defense in Depth: Lessons learned securing 100,000 Drupal Sites

Data breaches can be very expensive, so it is incredibly important to ensure that security consciousness is part of our mind-set in IS Applications. Breaches typically are not due to cracking encryption and hashes or exploiting unknown vulnerabilities, but rather human error. Thought should be given to the “CIA Security Triad” of confidentiality, integrity and availability. Security lists can be used to find out about known vulnerabilities which need to be patched; these include US-CERT and CERT-EU, LWN, Drupal, and security releases by Red Hat.

The main thing we took away from this session was not the quality of the advice, which was all very sensible (do backups, patch your servers and applications, use a Version Control Repository), but the practicalities of implementing that advice. Recommendations like using 2 Factor Authentication for our SSH keys are great, but we aren’t even using SSH keys for connecting to servers. Using enterprise login services so password hashes aren’t stored locally is also sound, but only if we were to use a technology like OAuth to allow it. We need to be putting more good security practice into action; a greater security consciousness within IS Applications would be a great step in the right direction.

Local vs. Remote Development: Do Both by Syncing Your Site From Kitchen to Cloud With Jenkins

There are pros and cons to developing both locally (using VMs on a developer’s PC) and remotely (using Development servers provisioned by Development Technology). CASCADE is a new tool to streamline development workflows and add CI to local development. Effectively, it is extra code which uses Ansible to spin up and configure local Jenkins and GitLab Vagrant boxes and provide an interface to them.

While everyone in the audience was using Vagrant, very few had edited a Vagrantfile or run more than one Vagrant box at a time; a tool like CASCADE could provide developers with a simpler way to have a more advanced local CI environment. I don’t think its use would be appropriate to Development Services, but some of the ideas raised were interesting, especially as we are not generally using Vagrant for local development yet.

Behat+Mink+PhantomJS = Test ALL THE THINGS!

Integration testing is a topic that pops up regularly at DrupalCon, and in retrospect it was interesting to hear this talk on the same day as another talk on unit testing. This particular session focused on the use of the BDD framework Behat for testing, coupled with Mink to simplify interaction with the browser emulator, provided in this case by PhantomJS (other tools such as Selenium WebDriver can also be used).

Whilst the speaker was very engaging and did give a decent high level outline of the different components, including the Gherkin language used to define Behat tests, the outline didn’t have a clear structure, which made it difficult to get a grasp on how each component fits into the bigger picture.  That in turn makes it difficult to judge whether any/all of what was demonstrated would be useful in our context.  It was also disappointing to see that whilst the talk description mentioned screenshot comparison, the only real mention of this during the talk was to say that PhantomJS was not the best tool for UI comparison (one attendee suggested Wraith as an alternative).  UI testing is something that we definitely need to explore further in the context of our Drupal CMS, and our current set of tools for automated testing (primarily Selenium WebDriver with test suites built in Java) may not be the best starting point.  Unfortunately, although it was interesting to see Mink, which I hadn’t come across before, there was nothing in this particular talk to help us find the best approach to UI testing where there are gaps in our own test suites.

Principles of Solitary Unit Testing

Whereas the earlier session I attended on Behat with Mink and PhantomJS was concerned more with integration testing, this session explored the principles of solitary unit testing, as contrasted with sociable unit testing, where the idea is to limit what is being tested as much as possible, essentially to test one thing without “crossing boundaries” such as writing to disk or reading from a database.

The speaker provided a very clear and interesting summary of the principles of unit testing, exploring aspects such as:

  • the importance of testing “one concrete class” (not counting value objects as these don’t have behaviour), using “doubles” to represent dependencies and objects returned by collaborators, thereby eliminating crossing of boundaries;
  • the stages of solitary unit testing, namely Arrange (setting up the context, e.g. any data required, before carrying out the test), Act (which ideally should call only one method) and Assert (checking the outcome) – these stages are illustrated in the sketch after this list;
  • the principle of always asserting last, and limiting each test to one assertion, which is really a general principle rather than a hard and fast rule – it was pointed out by one attendee and acknowledged by the speaker that sometimes it is necessary to break this principle;
  • ways of handling some of the complexity issues around unit testing, for example using ‘object mothers’ or ‘data builders’ to encapsulate setup, writing custom assertions to avoid multiple asserts in one method, and eliminating dependencies, all of which reduce the lines of code in the actual test and help to avoid “fragile tests” which break easily when something changes that isn’t directly related to the specific test;
  • the ability of solitary unit testing to highlight bad OOP code – if the test is difficult to write, the problem could be the code.
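
As a concrete illustration of the Arrange/Act/Assert stages and the use of doubles, here is a minimal JUnit and Mockito sketch; the classes are invented for illustration and are not taken from the talk:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class InvoiceServiceTest {
        // A collaborator that would normally cross a boundary (the database)
        interface RateRepository {
            double vatRateFor(String country);
        }

        // The one concrete class under test
        static class InvoiceService {
            private final RateRepository rates;
            InvoiceService(RateRepository rates) { this.rates = rates; }
            double totalWithVat(String country, double net) {
                return net * (1 + rates.vatRateFor(country));
            }
        }

        @Test
        public void addsVatUsingCountryRate() {
            // Arrange: stub the collaborator so no database is ever touched
            RateRepository rates = mock(RateRepository.class);
            when(rates.vatRateFor("GB")).thenReturn(0.20);
            InvoiceService service = new InvoiceService(rates);

            // Act: call exactly one method
            double total = service.totalWithVat("GB", 100.0);

            // Assert: a single assertion, asserted last
            assertEquals(120.0, total, 0.001);
        }
    }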

The speaker noted that solitary unit testing is not a catch-all, and will not always provide the most appropriate benefit.  In our particular context, end-to-end integration testing is of greater importance than unit testing as we need to ensure that the complex set of contrib and custom modules and configuration settings which comprise our central Drupal CMS function correctly when deployed together in one of our deployment environments; integration testing using Selenium Webdriver is therefore incorporated in our automated deployment process.

Notwithstanding the focus of our own test suites however, the principles explored in this session such as clear code structure, isolating specific functionality, ensuring readability and clarity of tests, minimising what is covered by one test, and limiting the assertions performed, are equally applicable.  It seems to me that many of these principles are a starting point for best practice regardless of the particular type of testing being performed. This was an excellent talk which provoked an equally interesting conversation between attendees and the speaker on when it is appropriate to bend or break the principles.  I highly recommend watching the session recording to anyone with an interest in automated code testing.

SmarTest: Proposal for accelerating the detection of faults in Drupal

The SmarTest module has been developed at the University of Seville, extending SimpleTest to improve automated testing for widely varying system configurations. Drupal has a high degree of configuration variability, generating multiple test cases for different configurations which can be quite difficult to cover in testing.

As part of the studies performed by the speaker, Ana, and her team at the University of Seville, a diagram was drawn up showing the relationships between a set of modules (48 in their example). After querying how this was produced, I was told that it was quite an involved manual process and, without tools to assist, would be fairly time-consuming if we wanted to have the same thing.

With the tests that they ran across various modules, they found some direct (and possibly obvious) relationships between certain aspects. They found that module size (lines of code) as well as the number of commits on a module directly related to the number of faults found in the modules which they tested, i.e. more code and/or more commits produced more faults. However, more contributors on a module had the opposite effect and reduced the number of faults. It was also found that migrating the same modules to a newer version of Drupal introduced more faults again.

The SmarTest module is something that could be interesting for us to run against the configuration of EdWeb, our central Drupal CMS. It aims, among other things, to highlight the most potentially problematic modules.  However, the problem of Drupal being a “variability-intensive system” is not so much of an issue for us as we don’t really expect to vary our configuration drastically or often.

Next generation graphics: SVG

SVG is making a comeback now that Flash is dying off, and high resolution mobile and touch-screen tablet devices require vector graphics to keep logos and icons sharp while keeping file size low.

While there are no SVGs in Drupal 7, Drupal 8 core is now making use of SVG assets to replace PNGs. This session covered SVG as a markup language, even how to write it by hand (if you are that way inclined), as well as the features available for animating and interacting with SVGs using JavaScript, and how well (or not so well) these features are supported by the different browsers.

There are also some quite significant security risks if you allow users to upload their own SVG files, but that aside, you should be looking to SVGs rather than icon fonts for those vector icons now.

Configuration management in Drupal 8

Configuration Management is a new feature in Drupal 8; in Drupal 7 the closest you have is the Features module. This session was a quick tour of how configuration in Drupal 8 can be exported and imported using drush, how it is stored in YAML files, and where it is defined in custom modules.

Dependencies are fully managed in Drupal 8 through these YAML configuration files, and when dependencies are removed, the configuration is deleted. Demonstrations during the session showed how dependencies build up and apply as soon as you use them, for example, a role access filter being applied to a view.

Tips for best practice in changing these files and moving these files between environments were covered, as well as in-depth details of the Configuration Entity and third party settings.

Drupal architectures for flexible content

This session explored the need to understand the requirements of editors and how to deal with the demands of content editors in Drupal.

Current limitations in Drupal were also highlighted, such as the disconnect between content and layout (which is not always a problem), and the fact that Drupal does not currently have revision history in content editing.

Making Drupal fly – The fastest Drupal ever is here!

Quite simply, this “fastest Drupal ever” is Drupal 8.

Different caching options were discussed, and it does look like, due to the issues seen in Drupal 7 and earlier, a lot of attention has been given to performance and customisation of caching options in Drupal 8.

As mentioned in previous session notes, full page caching via reverse proxy such as Varnish is also possible in Drupal 8.

Birds of a Feather Sessions

Design and Usability Critiques

This was a great BoF discussion, not so much for new information, but confirmation that the UX approach taken during the EdWeb project was fundamentally correct, although the process of incorporating UX into an Agile project needs to be lighter, take place earlier and be more frequent. We had 3-hour sessions with a large group of people, who at that point in the iteration were all under pressure to get the iteration completed.

Some recommendations to make Agile UX sessions more effective:
1.    Run combined sessions with developers, business analyst, etc. but make them shorter;
2.    Present results at these sessions;
3.    Be clear about how UX sessions relate to prioritised stories;
4.    Be clear up front about what questions the UX session should answer.

Tips:
http://www.jasonondesign.com/2014/05/05/ladder-of-control/#scrollto
https://www.usertesting.com/b/

Drupal in Higher Education

This session was set up by developers from the University of Adelaide in Australia and was well attended by representatives from Higher Education institutions in the UK and across Europe. The initial introductions showed that a wide range of Drupal experiences at various stages of maturity were represented, from the management of a small number of Drupal sites, through the distribution of a profile across more than 100 devolved Drupal sites, to the wholesale replacement of existing CMSs with a central Drupal service.  As is so often the case during meetings between Drupal users having a common background, the main topics under discussion were pain points; what was apparent in the conversation that ensued was the commonality of these among the experiences of those present.

One area of particular concern was hosting, with almost everyone present agreeing that Universities often suffer from a peculiar fetish for internal hosting which can make it controversial to explore external hosting options.  The main reason for this seems to be the desire to avoid exposing sensitive data, and that is clearly an important issue for many websites managed within the HE sector.  The desire to keep things internal can make Drupal hosting especially difficult where there is a dependency for stability reasons on old versions of infrastructure elements such as PHP.

The approach which is being taken in Adelaide is of particular interest and something that we should explore further, especially given our own desire to look into the possibilities of configuration deployment and tools such as Puppet, Vagrant and Docker for automated deployment of the required server environments. The developers who set up the BoF have created an evolving platform for automated deployment of Drupal 8 to get around their internal hosting issues; they hope to collaborate on this with other institutions and ultimately make it available for wider use.  We are not yet planning for Drupal 8, but the principles of how to manage deployment of Drupal in a devolved HE context are of interest regardless of the Drupal version being used.  We have a relatively sophisticated means of deploying updates to the Distribution Profile that is associated with our central Drupal CMS, but we can do more with our automated deployment process in the admittedly more complex area of configuration and server deployment for the central CMS itself.

Another topic covered was role management – how to ensure that only the appropriate people can perform tasks such as generating a new Drupal site, or use functionality within a site.  LDAP groups were discussed as one means of achieving this, with users automatically added and removed from roles within Drupal based on their LDAP group membership; this requires that groups be configured with the appropriate members, and that there are clearly defined mappings between those groups and roles in the CMS.

Overall this was a really interesting BoF.  It was great to discuss both the positive aspects and common problems associated with using Drupal in an HE context, and to hear how other institutions are deploying and using Drupal. At the end of the session, contact details were shared and this will hopefully lead to further engagement beyond our meet-up in Barcelona.


Testing Times – Python unittest and Selenium

In the SSP we have been putting together a small suite of performance tests to use as before/after checks in the SITS upgrade. The team has a mixture of IS and non-IS staff, so it was important to come up with a procedure which was:

  1. Easily maintainable
  2. Accessible to team members who don’t come from a programming background

Selenium was a natural choice, since it is already being used elsewhere in the department, and it has a handy interface which lets you record tests easily.

Continue reading “Testing Times – Python unittest and Selenium”

IS at DrupalCon Amsterdam – Day 2

Yesterday I posted some session summaries from the first full day of DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. DrupalCon Day 2 began on Wednesday with a Keynote from Cory Doctorow, a thought-provoking talk on freedom and the internet, a subject some of us had previously heard him speak about at the IT Futures Conference in 2013, and one which has significance well beyond the context of Drupal for anyone who uses the web. The broader relevance of Cory’s speech is reflected in many of the sessions here at DrupalCon; topics such as automated testing or developments in HTML and CSS are of interest to any web developer, not just those of us who work with Drupal. In particular, the very strong DevOps strand at this conference contains much that we can learn from and apply to all areas of our work, not just Drupal, whether we are developing new tools or managing services.

Our experiences of some of Wednesday’s DrupalCon sessions are outlined below.  Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

How Cultivating a DevOps Culture will Raise your Team to the Next Level

The main idea explored in this session was how to create a single DevOps team rather than have separate teams. DevOps is a Movement, a better way to work and collaborate. Rather than make a new team, the current teams should work together with fewer walls between them. The responsibility for adding new features and keeping the site up can then be shared, but this does mean that information needs to be shared between the teams to enable meaningful discussion.

The session was very dense and covered many aspects of implementing a DevOps culture, including:

  • common access to monitoring tools and logging in all environments;
  • the importance of consistency between environments and how automation can help with this;
  • the need for version control of anything that matters – if something is worth changing, it is worth versioning;
  • communication of process and results throughout the project life-cycle;
  • infrastructure as code, which is a big change but opens up many opportunities to improve the repeatability of tasks and general stability;
  • automated testing, including synchronisation of data between environments.

The framework changes discussed here are an extension of the road we are already on in IS Apps, but the session raised many suggestions and ideas that could usefully influence the direction we take.

Using Open Source Logging and Monitoring Tools

Our current Drupal infrastructure is configured for logging in the same way as the rest of our infrastructure – in a very simple, default manner. Apache access and error logs and MySQL slow query logs in the default locations, but not much else. Varnish currently doesn’t log to disk at all as its output is too vast to search. If we are having an issue with Apache on an environment, this could mean manually searching through log files on four different servers.

Monitoring isn’t set up by default by DevTech for our Linux hosts – we would use Dell Spotlight to diagnose issues, but it isn’t something which runs all the time. IS Apps is often unaware that there is an issue until it is reported.

We are able to solve these issues by using some form of logging host. This could be running a suite of tools, such as the ‘ELK stack’, which comprises Elasticsearch, Logstash and Kibana.

By using log shipping, we can copy syslog files and other log files from our servers to our log host. Logstash can then filter these logs from their various formats to a standard type, which Elasticsearch, a Java tool based on the Lucene search engine, can then search through. This resulting aggregated data can then be displayed using the Kibana dashboard.

We can also use these log “monitors” to create metrics. Logstash can write out to Graphite which can act as a counter of this data. Grafana acts as a dashboard for Graphite. As well as data from the logs, collectd can also populate Graphite with system data, such as CPU and memory usage. A combination of these three tools could potentially replace Spotlight for some tasks.

We need this. Now. I strongly believe that our current logging and monitoring is insufficient, and while all of this is applicable to any service that we run, our vast new Drupal infrastructure particularly shows the weaknesses in our current practices. One of the 5 core DevOps “CLAMS” values is Measurement, and I think that an enhanced logging and monitoring system will greatly improve the support and diagnosis of services for both Development and Production Services.

Drupal in the HipHop Virtual Machine

When it comes to improving Drupal performance, there are three different areas to focus on. The front end is important as render times will always affect how quickly content is displayed. Data and IO at the back end is also a fundamental part; poor SQL queries for example are a major cause of non-linear performance degradation.

While caching will greatly increase page load times, for dynamic content which can’t be cached, the runtime is a part of the system which can be tuned. The original version of HipHop compiled PHP sites in their entirety, via C++, into a single native binary. The performance was very good, but it took about an hour to compile a Drupal 7 site and it resulted in a 1 GB binary file. To rectify this, Java Virtual Machine-like Just In Time compilation techniques were introduced for the HipHop Virtual Machine (HHVM), which runs as a FastCGI module.

Performance testing has shown that PHP 5.5 with OPcache is about 15% faster than PHP 5.3 with APC, which is what we are currently using, and HHVM 3.1 has about the same performance improvement again over PHP 5.5. However, despite the faster page load times, HHVM might not be perfect for our use. It favours Hack, which uses strong typing, over PHP, and it doesn’t support all elements of the PHP language. It is still very new and documentation isn’t great, but this session demonstrates that it is worth thinking about alternatives to the default PHP that is packaged for our Linux distribution. There are also other PHP execution engines: PHPng (on which PHP 7 will be based), HippyVM and Recki-CT.

In IS Apps, we may want to start thinking about using the Red Hat Software Collections repository to get access to a supported, but newer, and therefore potentially more performant, version of PHP.

Content Staging in Drupal 8

This technical session provided a very nice overview of content staging models and how these can be implemented in Drupal 8. There was a presentation of core and contrib modules used, as well as example code. The process runs by comparing revisions and their changes using hashcodes and then choosing whether to push to the target websites.

What I would take from this session is that it will be feasible to build content staging in Drupal 8 using several workflows, from simple Staging to Production, up to multiple editorial sandboxes feeding production, or a central editorial hub pushing to multiple production sites. One understandable caveat is that the source and target nodes must share the same fields, otherwise only the source fields will be updated, but this can be addressed with proper content strategy management.
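
In outline, the comparison works along these lines – sketched here in Java for brevity rather than the Drupal 8 PHP modules the session covered, and glossing over how a revision is serialised:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class StagingSync {
        // Hash the serialised revision so source and target can be compared cheaply
        static String contentHash(String serialisedRevision) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(serialisedRevision.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        // Push only when the hashes differ, i.e. the revision changed at source
        static boolean shouldPush(String sourceRevision, String targetHash) throws Exception {
            return !contentHash(sourceRevision).equals(targetHash);
        }
    }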

Whilst this session focused on Drupal 8, the concepts and approach discussed are of interest to us as we explore how to replicate content in different environments, for example between Live and Training, in the University’s new central Drupal CMS.

Testing

Automated Frontend Testing

This session explored three aspects of automated testing: functional testing, performance testing and CSS regression testing.

From the perspective of developing the University’s new central Drupal CMS, there were a number of things to take away from this session.

In the area of functional testing, we are using Selenium WebDriver test suites written in Java to carry out integration tests via Bamboo as part of the automated deployment process. Whilst Selenium tests have served us well to a point, we have encountered some issues when dealing with JavaScript-heavy functionality. CasperJS, which uses the PhantomJS headless WebKit and allows scripted actions to be tested using an accessible syntax very similar to jQuery, could be a good alternative tool for us. In addition to providing very similar test suite functionality to what is available to us with Selenium, there are two features of CasperJS that are not available to us with our current Selenium WebDriver approach:

  • the ability to specify browser widths when testing in order to test responsive design elements, which was demonstrated using picturefill.js, and which could prove invaluable when testing our Drupal theme;
  • the ability to easily capture page status to detect, for example, 404 errors, without writing custom code as with Selenium.

For these reasons, we should explore CasperJS when writing the automated tests for our Drupal theme, and ultimately we may be able to refactor some of our existing tests in CasperJS to simplify the tests and reduce the time spent on resolving intermittent Selenium WebDriver issues.

On the performance testing front, we do not currently use any automated testing tools to compare such aspects of performance as page load time before and after making code changes.  This is certainly something we should explore, and the tools used during the demo, PageSpeed and Phantomas, seem like good candidates for investigation. A tool such as PageSpeed can provide both performance metrics and recommendations for how to resolve bottlenecks. Phantomas could be even more useful as it provides an extremely granular variation on the kind of metrics available using PageSpeed and even allows assertions to be made to check for specific expected results in the metrics retrieved. On performance, see also the blog post from DrupalCon day 1 for the session summary on optimising page delivery to mobile devices.

Finally, CSS regression testing with Wraith, an open source tool developed by the BBC, was demonstrated.  This tool produces a visual diff of output from two different environments to detect unexpected variation in the visual layout following CSS or code changes.  Again, we do not do any CSS regression testing as part of our deployment process for the University’s new central Drupal CMS, but the demo during this talk showed how easy it could be to set up this type of testing. The primary benefit gained is the ability to quickly verify for multiple device sizes that you have not made an unexpected change to the visual layout of a page. CSS regression testing could be particularly useful in the context of ensuring consistency in Drupal theme output following deployment.

I can highly recommend watching the session recording for this session.  It’s my favourite talk from this year’s DrupalCon and worth a look for any web developer.  The excellent session content is in no way specific to Drupal.  Also, the code samples used in the session are freely available and there are links to additional resources, so you can explore further after watching the recording.

Doing Behaviour-Driven Development with Behat

Having attended a similar, but much simpler and more technically focused, presentation at DrupalCamp Scotland 2014, my aim for this session was to better understand Behaviour Driven Development (BDD) and how Behat can be used to automate testing using purpose-written scripts. The session showcased how BDD can be integrated easily into Agile projects because its main driver of information is discussion of business objectives. In addition to user stories, examples were provided to better explain the business benefit.

I strongly believe that this testing process is something to look deeper into as it would enable quicker, more comprehensive and better documented user acceptance testing to take place following functionality updates, saving time in writing long documents and hours of manual work. Another clear benefit is that the examples being tested reflect real business needs and requests, ensuring that deliverables actually follow discussed user stories and satisfy their conditions. Finally, this highlights the importance of good planning and how it can help later project stages, like testing, to run more smoothly and quickly.

UX Concerns

Building a Tasty Backend

This session was held in one of the smaller venues and was hugely popular; there was standing room only by the start, or even “sitting on the floor room” only. Obvious health and safety issues there!

The focus of this session was to explore Drupal modules that can help improve the UX for CMS users who may be intimidated by or frankly terrified of using Drupal, demonstrating how it is possible to simplify content creation, content management and getting around the Admin interface without re-inventing the wheel.

The general recommended principle is “If it’s on the page and it really doesn’t need to be, get rid of it!”.  Specific topics covered included:

  • using the Field Group module to arrange related content fields into vertical tabs, simplifying the user experience by showing only what the user needs to see;
  • disabling options that are not really required or don’t work as expected (e.g. the Preview button when editing a node) to remove clutter from the interface;
  • using Views Bulk Operations to tailor and simplify how users interact with lists of content;
  • customising and controlling how each CMS user interacts with the Admin menu system using modules such as Contextual Administration, Admin Menu Source and Admin Menu Per Menu.

The most interesting thing about this talk in light of our experience developing the University’s new central Drupal CMS is how closely many of the recommendations outlined in this session match our own module selection and the way in which we are handling the CMS user experience.  It is reassuring to see our approach reflected in suggested best practices, which we have come to through our knowledge and experience of the Drupal modules concerned, combined with prototyping and user testing sessions that have at times both validated our assumptions and exposed flaws in our understanding of the user experience.  As was noted in this session, “Drupal isn’t a CMS, it’s a toolkit for building a CMS”; it’s important that we use that toolkit to build not only a robust, responsive website but also a clear, usable and consistent CMS user experience.

Project Management and Engagement

Getting the Technical Win: How to Position Drupal to a Sceptical Audience

This presentation started with the bold statement that no one cares about the technology, be it Drupal, Adobe, Sitecore or WordPress. Businesses care about solutions, and Drupal can offer the solution. Convincing people is hard; removing identified blockers is the easier bit.

In order to understand the drivers for change we must ask the correct questions. These can include:

  1. What are the pain points?
  2. What is the competition doing?
  3. And most importantly: take a step back and don’t dive into a solution immediately.

Asking these kinds of questions will help build a trusted relationship. To this end, it is sometimes necessary to be realistic, and sometimes there is the need to say no. Understanding what success will look like, and what happens if change is not implemented, are two further key factors.

The presentation then moved on to technical themes. It is important to acknowledge that some people have favoured technologies. While Drupal is not the strongest technology, it has the biggest community and with that huge technical resources, ensuring longevity and support. Another common misconception is around scalability. However, Drupal’s scalability has been proven.

In the last part of the presentation attention turned to the sales process, focussing on the stages and technicalities involved towards closing a deal. The presentation ended with a promising motto “Don’t just sell, promise solutions instead.”

Although this was a sales presentation it offered valuable arguments to call upon when encouraging new areas to come aboard the Drupal train.


Looking to the Future

Future-Proof your Drupal 7 Site

This session primarily explored how best to future-proof a Drupal site by selecting modules chosen from the subset that have either been moved into Drupal core in version 8 or have been back ported into Drupal 7.  We are already using most of the long list of modules discussed here for the University’s new Drupal CMS.  For example, we recently implemented the picture and breakpoints modules to meet responsive design requirements, both of which have been back ported to Drupal 7.  This gives us a degree of confirmation that our module selection process will be effective in ensuring that we future-proof the University’s new central Drupal CMS.

In addition to the recommended modules, migrate was mentioned as the new upgrade path from Drupal 7 to Drupal 8, so we should be able to use the knowledge gained in migrating content from our existing central CMS to Drupal when we eventually upgrade from Drupal 7 to Drupal 8.

Symfony2 Best Practices from the Trenches

The framework underpinning Drupal 8 is Symfony2, and whilst we are not yet using Drupal 8, we are exploring web development languages and frameworks in other areas, one of which is Symfony2. As Symfony2 uses OO, it’s also useful to see how design patterns such as Dependency Injection are applied outside the more familiar Java context.

The best practices covered in this session seem to have been discovered through the bitter experience of the engaging presenter, and many of them are applicable to other development frameworks.  Topics covered included:

  • proper use of dependency injection in Symfony2 and how this can allow better automated testing using mock DB classes (see the sketch after this list);
  • the importance of separation of concerns and emphasis on good use of the service layer, keeping Controllers ‘thin’;
  • appropriate use of bundles to manage code;
  • selection of a standard method of configuration to ensure clarity, readability and maintainability (XML, YAML and annotations can all be used to configure Symfony2);
  • the importance of naming conventions;
  • recommended use of Composer for development using any PHP framework, not just Symfony2.
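
The dependency injection point translates well beyond Symfony2; a minimal constructor-injection sketch in Java (names invented for illustration) shows why a mock DB class can be swapped in so easily for testing:

    // A gateway interface; the service depends on this, never on a concrete DB class
    interface UserGateway {
        String emailFor(int userId);
    }

    // Constructor injection: the dependency is handed in, so a test can pass a
    // mock gateway and never touch a real database (Symfony2 wires this up via
    // its service container; here it is done by hand to show the principle)
    class WelcomeMailer {
        private final UserGateway users;
        WelcomeMailer(UserGateway users) { this.users = users; }
        String welcomeAddress(int userId) { return users.emailFor(userId); }
    }

    public class DiDemo {
        public static void main(String[] args) {
            // A lambda stands in for the mock gateway in this demo
            WelcomeMailer mailer = new WelcomeMailer(id -> id + "@example.ac.uk");
            System.out.println(mailer.welcomeAddress(42)); // prints 42@example.ac.uk
        }
    }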

I have attended two or three sessions which talk about Symfony2 at this conference as well as a talk on using Ember.js with headless Drupal.  It’s interesting to note that whilst there are an increasing number of web development languages and tools to choose from, there are many conceptual aspects and best practices which converge across those languages and frameworks.  In particular, the frequent reference to the MVC architecture pattern, especially in the context of frameworks using OO, demonstrates the universality of this particular approach across current web development languages and frameworks. What is also clear from this session is that standardisation of approach and separation of concerns are important in all web development, regardless of your flavour of framework.

The Future of HTML and CSS

This tech-heavy session looked at the past, present and future of the relationship between HTML and CSS, exploring where we are now and how we got here, and how things might change or develop in future. Beginning with a short history lesson in how CSS developed out of the need to separate structure from presentation to resolve cross-browser compatibility issues, the session continued with an exploration of advancements in CSS such as CSS selectors, pseudo classes, CSS Flexbox, etc. and finally moved on to briefly talk about whether the apparent move in a more programmatic direction means that CSS may soon no longer be truly and purely a presentational language.

There was way too much technical detail in this presentation to absorb in the allotted time, but it was an interesting overview of what is now possible with CSS and what may be possible in future.  In terms of the philosophical discussion around whether programmatic elements in CSS are appropriate, I’m not sure I agree that this is necessarily a bad thing.  It seems to me that as long as the ‘logic’ aspects of CSS are directed at presentation concerns and not business logic, there is no philosophical problem.  The difficulty may then lie in identifying the line between presentation concerns and business concerns.  At any rate, this is perhaps of less concern than the potential page load overhead produced by increasingly complex CSS!

Testing Times in the SSP

For a recent project we needed to establish some baseline timings using Selenium, and the process had to be automated. We solved this by combining Firebug and Selenium and producing HAR files.

This article outlines combining Firebug and NetExport, which gives you a very handy export for analysing all of your timings. You can then use the HAR viewer to have a look and see what is going on.

Another approach which we tried was http://assertselenium.com/2012/11/02/performance-data-collection-using-browsermob-proxy-and-selenium/ – using a REST-based proxy tool to record the HAR files. This means you don’t have to use the Firebug part, so it could be used cross-browser.
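
For reference, here is a minimal sketch of that proxy approach using the BrowserMob Proxy Java API alongside the Selenium bindings we already use (the URL and HAR label are hypothetical):

    import java.io.File;
    import net.lightbody.bmp.BrowserMobProxy;
    import net.lightbody.bmp.BrowserMobProxyServer;
    import net.lightbody.bmp.client.ClientUtil;
    import net.lightbody.bmp.core.har.Har;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class HarCapture {
        public static void main(String[] args) throws Exception {
            // Start the proxy and route the browser's traffic through it
            BrowserMobProxy proxy = new BrowserMobProxyServer();
            proxy.start(0);
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability(CapabilityType.PROXY, seleniumProxy);
            WebDriver driver = new FirefoxDriver(caps);

            // Record a HAR for the page load, then write it out for the HAR viewer
            proxy.newHar("portal-login");
            driver.get("https://www.example.ac.uk/portal");
            Har har = proxy.getHar();
            har.writeTo(new File("portal-login.har"));

            driver.quit();
            proxy.stop();
        }
    }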

We are also having a look at a couple of JavaScript-based testing tools which are well worth a look: