Adrian Mouat on ‘Understanding Docker and Containerisation’

Sometimes, what seems like just a useful innovation in IT infrastructure can have a significant effect higher up. Containerisation is one of those things, and one of its experts outlined the how and why in a Software Development Community of Practice industry talk.

One of the advantages of being in Edinburgh is that we have quite the tech scene on our doorstep. Sometimes literally, as when one of the pioneers of the now ubiquitous Docker container technology turns out to work out of the Codebase side of Argyle House. And that’s not the only connection Adrian has with us; he used to work at EPCC, part of the University, which made the idea of inviting him over for a general talk on Docker and containerisation both compelling and doable.

Being in IS, but somewhat removed from actually running server software, I was about as aware of the significance of containers as I was hazy on the details. Fortunately, I was the sort of audience Adrian’s talk was aimed at.

Specifically, he answered the main questions:

What is a container?

A portable, isolated computing environment that’s like a virtual machine, but one that shares its operating system kernel with its host. The point is that it is far more efficient than a virtual machine in image size, start-up time and so on. Docker is a technology for building and running such containers.
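
As a concrete (and purely hypothetical) illustration, a container image is typically described in a short Dockerfile like the sketch below; the application and its dependency are invented, but the shape is representative:

    # A minimal, illustrative Dockerfile; app.py and its Flask dependency are hypothetical
    FROM python:3-slim                 # base image: the userland the container sees
    WORKDIR /app
    COPY app.py .                      # bake the application into the image
    RUN pip install --no-cache-dir flask
    CMD ["python", "app.py"]           # the single process the container runs

Building this with docker build -t hello-app . and starting it with docker run --rm -p 5000:5000 hello-app gives the same isolated environment on a laptop as on a server, which is what makes the image so portable.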

What problem does it solve?

Containers solve the “it works for me” problem, where a developer gets some software to work perfectly on her own machine, only to see it fail elsewhere because of any of a myriad of differences in the computing environment.

Why is it important?

Because it enables two significant trends in software development and architecture. One is the shift to microservices, which encapsulate functionality in small services that do just one thing, but do it well. Those microservices ideally live in their own environment, with no dependencies on anything else outside of their service interface. Container environments such as Docker are ideal for that purpose.

The other trend is DevOps: blurring the distinction between software development and operations, or at least bringing them much closer together. By making the software environment portable and ‘copy-able’, it becomes much easier and quicker to develop, test and deploy new versions of running software.

What’s the catch?

No technology is magic, so it was good to hear Adrian point to the limitations of Docker as well.  One is the danger of merely shifting the complexity of modern software applications from the inside of a monolithic application to a lot of microservices on the network. This can be addressed by good design and making use of container orchestration technology such as Kubernetes.
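
As an aside on what orchestration adds, the sketch below is a minimal and entirely hypothetical Kubernetes Deployment: rather than starting containers by hand, you declare how many copies of an image should be running and Kubernetes keeps it that way.

    # Minimal illustrative Kubernetes Deployment; names and image are hypothetical
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app
    spec:
      replicas: 3                    # the orchestrator keeps three containers running
      selector:
        matchLabels:
          app: hello-app
      template:
        metadata:
          labels:
            app: hello-app
        spec:
          containers:
            - name: hello-app
              image: hello-app:1.0   # a container image like the one sketched above
              ports:
                - containerPort: 5000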

The other drawback is that containers are not great at sharing complex state. Because each small piece of software lives in splendid isolation in its own container, with its own lifecycle, making sure that every one of them is on the same page when they work together in a process is a challenge.

Overall, though, Docker’s ability to make software manageable is very attractive indeed, and, along with the shift to the cloud, could well mean that our Enterprise Architecture will look very different in a few years’ time.

(repost from the Information Services Applications Directorate Blog)

Duncan McDonald and Katie Stockton Roberts: Strangling Monoliths the Bitesize Way

Duncan McDonald and Katie Stockton Roberts on stage

Last week we were very fortunate to welcome Duncan McDonald and Katie Stockton Roberts from the development team for BBC Bitesize. They spoke to us about how the technical architecture of Bitesize has changed over the years, from a hulking PHP monolith to a selection of independent microservices that combine to build the web and mobile applications.

Unfortunately we weren’t able to record the presentation, but I’ve attached a copy of the slides below and, after the jump, provide an overview of what was talked about and how it might fit in at the University of Edinburgh.

Download slides (requires University login)
NB: This presentation includes videos and is over 400MB. Don’t download on mobile data.

This talk was organised by the Software Development Community and, if you’re not connected to us already, there are a myriad of ways to get in touch to find out about future events. If you’re a member of staff at the University then you can join the mailing list, or the Slack channel, and anyone can find us on Twitter and Facebook.

We’re particularly interested in hearing ideas for new events, talks and workshops. If there’s something you’d like to know more about, then we’d be keen to help you organise that. And if you have a contact that you think would provide an interesting talk to the community then we’d be keen to hear about them. We also have some budget to help with expenses like transport and catering. To get in touch, please email the organising committee.

Continue reading “Duncan McDonald and Katie Stockton Roberts: Strangling Monoliths the Bitesize Way”

A View from the Prater – IS at DrupalCon Vienna, Day 1

View of the Prater park in Vienna

As we embark upon our next big adventure, planning for the migration from Drupal 7 to Drupal 8 of EdWeb, the University’s central CMS, a group of us from Information Services are here in Vienna this week attending DrupalCon 2017.  We are a small but diverse bunch of project managers, developers, sysadmins, and support staff who all play a part in building, running and managing EdWeb.  For the next few days we’ll be sharing our thoughts on the sessions we attend, recommending top sessions, and giving our key takeaways – not the wurst variety – from our DrupalCon experience.

On Tuesday, we started DrupalCon the right way by attending the always entertaining Pre-note, followed by Dries Buytaert’s traditional Driesnote keynote presentation on the state of Drupal.  We then set out on our different tracks, paths crossing at coffee and lunch, for the first intense but interesting day of DrupalCon sessions.

Continue reading “A View from the Prater – IS at DrupalCon Vienna, Day 1”

Automation within Information Services Applications Team Delivers Efficiency Gains

Recent work within Information Services Applications has realised benefits by significantly reducing the amount of time required to release APIs (code used by developers to innovate and build further tools).

By utilising tools and technologies which partly automate the work involved in rolling out APIs, the Development Services teams have reduced the lead time prior to release from around five days to a matter of minutes. The tools make the release process significantly more straightforward and require much less setting up and configuration, to the point where releasing an API is almost a case of pushing a button. The automation also makes the process more reliable and less prone to error.

Furthermore, multiple APIs can be released in parallel meaning that efficiency gains are further increased as more APIs are developed by the University.

The team demonstrated this new capability in a recent deployment of a microservices tool where several existing APIs were redeployed as part of the release.

Further Information on APIs Programme:

There is a need to transform the online experience of students, applicants, alumni, staff and other members of the University community to provide tailored information via personalised digital services.

To deliver these personalised digital services requires a way to plug into information from central systems and use it in new and innovative ways, like new websites, apps for mobile devices and what are called ‘portlets’ for MyEd, which operate like web pages within web pages.

Plugging into the information can be achieved by using Application Programming Interfaces (APIs). Here at the University we use open source tools for APIs – the type of code you can get from online communities where developers collaborate to make even better products and software.

While API technology has been around for a long time, its use beyond the integration of large, complex business systems has grown rapidly in recent years. With the proliferation of devices and mobile technology, API use in getting data from one place to another – from connecting business systems to mobile devices and apps – has expanded exponentially. That’s because APIs allow data to be accessed securely and consistently across multiple devices, a tremendously valuable resource for the University.
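
To make that concrete, consuming such an API usually amounts to a short authenticated HTTP request. The sketch below is purely illustrative: the endpoint, token and field names are invented, not those of any University API.

    # Hypothetical example of calling a REST API; the URL, token and fields are invented
    import requests

    API_BASE = "https://api.example.ac.uk/v1"     # placeholder base URL
    TOKEN = "application-credential-goes-here"    # credential issued to the calling application

    response = requests.get(
        API_BASE + "/courses/EXAMPLE101",
        headers={"Authorization": "Bearer " + TOKEN},
        timeout=10,
    )
    response.raise_for_status()
    course = response.json()   # the same JSON whether the caller is a website, an app or a portlet
    print(course["title"])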

The University’s Enterprise APIs Programme (part of Digital Transformation) has been set up to deliver APIs to support projects like the User Centred Portal Pilot, the Service Excellence Programme and other digitalisation and transformation Programmes across the University.  In addition to enhancing central services such as MyEd, APIs will provide a consistent way for software developers across the University to build flexible systems at a lower cost, securely, and consistently across multiple systems.

Further Links:

Digital Transformation Website

API Technology Roadmap

 

Puppet Training

An increasingly important aspect of our strategy is to use automation as much as possible, when appropriate. Using automation ensures that our application delivery infrastructure is consistent across our Development, Test and Production environments, and allows a service to be rolled out quickly onto a new server if required. It also means we can stop doing the same manual tasks over and over again.

We started using Puppet over a year ago and have since used it to automate many aspects of web server (Apache HTTP Server) and application server (such as Tomcat) configuration. Several of our priority services such as Student records system, MyEd portal and Central Wiki are now built with the help of Puppet.
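
As a rough illustration of what that automation looks like (a sketch with hypothetical file paths, not our actual module code), a few lines of Puppet are enough to declare that Apache should be installed, configured and running, and Puppet then enforces that state on every run:

    # A minimal illustrative Puppet class; the config file and its source are hypothetical
    class profile::webserver {

      package { 'httpd':
        ensure => installed,
      }

      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }

      file { '/etc/httpd/conf.d/app.conf':
        ensure  => file,
        source  => 'puppet:///modules/profile/app.conf',  # hypothetical config source
        require => Package['httpd'],
        notify  => Service['httpd'],                       # restart Apache when this changes
      }
    }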

In order to get the most out of Puppet, several Development Services staff have attended training provided by Puppet. Last week it was the turn of myself and my Development Technology colleague Riky to attend Puppet Practitioner. I was a little apprehensive as I hadn’t been to Puppet Fundamentals, the course that Practitioner builds upon. The apprehension was unfounded, as the experience I had gained using Puppet “in anger” building services was more than enough to see me through.

Our trainer gave an honest view of the different ways (good and bad) of solving real-world problems they had come across whilst using Puppet.

The training added a few things to our to-do list, including investigating the testing of Puppet DSL using rspec. Also something to ponder is syntax validation of Hiera YAML files, after I confused Puppet and myself yesterday by missing out a colon between a key and its value. Riky found this Testing Hiera Data article that might be handy…
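
For the record, the Hiera mistake is as small as it sounds. In a sketch like the one below (hypothetical keys, shown only to illustrate the syntax), dropping the colon after a key means the line is no longer read as a key/value pair, and Puppet then cannot find the data it expects:

    # common.yaml – hypothetical keys, for illustration only
    # Broken: profile::webserver::port 8080      (no colon, so not a key/value pair)
    # Fixed:
    profile::webserver::port: 8080
    profile::webserver::docroot: /var/www/app

A quick check such as ruby -e "require 'yaml'; YAML.load_file('common.yaml')" against the changed file catches outright parse errors before Puppet gets confused.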

IS at DrupalCon Amsterdam – Day 2

Yesterday I posted some session summaries from the first full day of DrupalCon 2014 in Amsterdam, where a few members of IS are spending this week. DrupalCon Day 2 began on Wednesday with a keynote from Cory Doctorow, a thought-provoking talk on freedom and the internet, a subject on which some of us had previously heard him speak at the IT Futures Conference in 2013, and one which has significance well beyond the context of Drupal for anyone who uses the web. The broader relevance of Cory’s speech is reflected in many of the sessions here at DrupalCon; topics such as automated testing or developments in HTML and CSS are of interest to any web developer, not just those of us who work with Drupal.  In particular, the very strong DevOps strand at this conference contains much that we can learn from and apply to all areas of our work, not just Drupal, whether we are developing new tools or managing services.

Our experiences of some of Wednesday’s DrupalCon sessions are outlined below.  Once again, thanks to Aileen, Arthur, Riky, Tim, Andrew, Adrian and Stratos for contributing their thoughts on sessions they attended; any errors, omissions or misinterpretations in their edited summaries are entirely mine. Most of the sessions mentioned below, along with many more interesting talks, are recorded and available on the DrupalCon website, so if the summaries pique your interest, visit the DrupalCon site for more information!

Development Processes, Deployment and Infrastructure

How Cultivating a DevOps Culture will Raise your Team to the Next Level

The main idea explored in this session was how to create a single DevOps culture rather than separate development and operations teams. DevOps is a movement, a better way to work and collaborate. Rather than creating a new team, the existing teams should work together with fewer walls between them. The responsibility for adding new features and keeping the site up can then be shared, but this does mean that information needs to be shared between the teams to enable meaningful discussion.

The session was very dense and covered many aspects of implementing a DevOps culture, including:

  • common access to monitoring tools and logging in all environments;
  • the importance of consistency between environments and how automation can help with this;
  • the need for version control of anything that matters – if something is worth changing, it is worth versioning;
  • communication of process and results throughout the project life-cycle;
  • infrastructure as code, which is a big change but opens up many opportunities to improve the repeatability of tasks and general stability;
  • automated testing, including synchronisation of data between environments.

The framework changes discussed here are an extension of the road we are already on in IS Apps, but the session raised many suggestions and ideas that could usefully influence the direction we take.

Using Open Source Logging and Monitoring Tools

Our current Drupal infrastructure is configured for logging in the same way as the rest of our infrastructure: in a very simple, default manner. Apache access and error logs and MySQL slow query logs sit in the default locations, but there is not much else. Varnish currently doesn’t log to disk at all, as its output is too vast to search. If we are having an issue with Apache on an environment, this can mean manually searching through log files on four different servers.

Monitoring isn’t set up by default by DevTech for our Linux hosts – we would use Dell Spotlight to diagnose issues, but it isn’t something which runs all the time. IS Apps is often unaware that there is an issue until it is reported.

We are able to solve these issues by using some form of logging host. This could run a suite of tools such as the ‘ELK stack’, which comprises Elasticsearch, Logstash and Kibana.

By using log shipping, we can copy syslog files and other log files from our servers to our log host. Logstash can then filter these logs from their various formats into a standard form, which Elasticsearch, a Java tool based on the Lucene search engine, can search through. The resulting aggregated data can then be displayed using the Kibana dashboard.
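
As a sketch of what that pipeline looks like (illustrative only; plugin option names vary a little between Logstash versions), a single Logstash configuration can accept shipped syslog events, parse Apache access lines into named fields and index the result in Elasticsearch for Kibana to display:

    # Illustrative Logstash pipeline; ports, paths and options are examples only
    input {
      syslog {
        port => 5514                    # receive shipped syslog events
      }
      file {
        path => "/var/log/httpd/access_log"
        type => "apache-access"
      }
    }

    filter {
      if [type] == "apache-access" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }    # split each line into named fields
        }
        date {
          match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]  # use the log's own timestamp
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]     # indexed here, then searched and charted in Kibana
      }
    }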

We can also use these log “monitors” to create metrics. Logstash can write out to Graphite which can act as a counter of this data. Grafana acts as a dashboard for Graphite. As well as data from the logs, collectd can also populate Graphite with system data, such as CPU and memory usage. A combination of these three tools could potentially replace Spotlight for some tasks.

We need this. Now. I strongly believe that our current logging and monitoring is insufficient, and while all of this is applicable to any service that we run, our vast new Drupal infrastructure particularly shows the weaknesses in our current practices. One of the five core DevOps “CALMS” values is Measurement, and I think that an enhanced logging and monitoring system will greatly improve the support and diagnosis of services for both Development and Production Services.

Drupal in the HipHop Virtual Machine

When it comes to improving Drupal performance, there are three different areas to focus on. The front end is important, as render times will always affect how quickly content is displayed. Data and IO at the back end are also fundamental; poor SQL queries, for example, are a major cause of non-linear performance degradation.

While caching will greatly increase page load times, for dynamic content which can’t be cached the runtime is a part of the system which can be tuned. The original version of HipHop compiled PHP sites in their entirety, via C++, into a native binary. The performance was very good, but it took about an hour to compile a Drupal 7 site and the result was a binary of around 1 GB. To rectify this, Just In Time compilation techniques similar to those of the Java Virtual Machine were introduced for the HipHop Virtual Machine (HHVM), which runs as a FastCGI module.

Performance testing has shown that PHP 5.5 with OPcache is about 15% faster than PHP 5.3 with APC, which is what we are currently using, and HHVM 3.1 shows about the same improvement again over PHP 5.5. However, despite the faster page load times, HHVM might not be perfect for our use. It is geared towards Hack, which uses strong typing, rather than plain PHP, and it doesn’t support all elements of the PHP language. It is still very new and the documentation isn’t great, but this session demonstrated that it is worth thinking about alternatives to the default PHP that is packaged for our Linux distribution. There are also other PHP execution engines: PHPng (on which PHP 7 will be based), HippyVM and Recki-CT.

In IS Apps, we may want to start thinking about using the Red Hat Software Collections repository to get access to a supported, but newer, and therefore potentially more performant, version of PHP.

Content Staging in Drupal 8

This technical session provided a very nice overview of content staging models and how these can be implemented in Drupal 8. There was a presentation of core and contrib modules used, as well as example code. The process works by comparing revisions and their changes using hash codes and then choosing whether to push them to the target websites.

What I would take from this session is that it will be feasible to build content staging in Drupal 8 using several workflows, from simple Staging to Production, up to multiple editorial sandboxes to production or a central editorial hub to multiple production sites. One understandable caveat is that the source and target nodes must share the same fields otherwise only the source fields will be updated, but this can be addressed with proper content strategy management.

Whilst this session focused on Drupal 8, the concepts and approach discussed are of interest to us as we explore how to replicate content in different environments, for example between Live and Training, in the University’s new central Drupal CMS.

Testing

Automated Frontend Testing

This session explored three aspects of automated testing: functional testing, performance testing and CSS regression testing.

From the perspective of developing the University’s new central Drupal CMS, there were a number of things to take away from this session.

In the area of functional testing, we are using Selenium WebDriver test suites written in Java to carry out integration tests via Bamboo as part of the automated deployment process.  Whilst Selenium tests have served us well to a point, we have encountered some issues when dealing with Javascript heavy functionality.  CasperJS, which uses the PhantomJS headless WebKit and allows scripted actions to be tested using an accessible syntax very similar to jQuery, could be a good alternative tool for us.  In addition to providing very similar test suite functionality to what is available to us with Selenium, there are two features of CasperJS that are not available to us with our current Selenium WebDriver approach:

  • the ability to specify browser widths when testing in order to test responsive design elements, which was demonstrated using picturefill.js, and which could prove invaluable when testing our Drupal theme;
  • the ability to easily capture page status to detect, for example, 404 errors, without writing custom code as with Selenium.

For these reasons, we should explore CasperJS when writing the automated tests for our Drupal theme, and ultimately we may be able to refactor some of our existing tests in CasperJS to simplify the tests and reduce the time spent on resolving intermittent Selenium WebDriver issues.
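
A sketch of what such a CasperJS test might look like is below; the URL and selector are hypothetical, but it shows both of the features mentioned above, setting a viewport width and asserting on the HTTP status without any custom plumbing:

    // Illustrative CasperJS test; the URL and selector are hypothetical
    casper.test.begin('Homepage renders at mobile width', 2, function (test) {
        casper.options.viewportSize = { width: 320, height: 568 };  // exercise the responsive breakpoint

        casper.start('https://www.example.ac.uk/', function () {
            test.assertHttpStatus(200, 'page returns HTTP 200');     // status captured without custom code
            test.assertVisible('.mobile-menu-toggle', 'mobile navigation toggle is shown at narrow widths');
        });

        casper.run(function () {
            test.done();
        });
    });

A test file like this would be run with casperjs test homepage.js, either locally or as part of an automated build.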

On the performance testing front, we do not currently use any automated testing tools to compare such aspects of performance as page load time before and after making code changes.  This is certainly something we should explore, and the tools used during the demo, PageSpeed and Phantomas, seem like good candidates for investigation. A tool such as PageSpeed can provide both performance metrics and recommendations for how to resolve bottlenecks. Phantomas could be even more useful as it provides an extremely granular variation on the kind of metrics available using PageSpeed and even allows assertions to be made to check for specific expected results in the metrics retrieved. On performance, see also the blog post from DrupalCon day 1 for the session summary on optimising page delivery to mobile devices.

Finally, CSS regression testing with Wraith, an open source tool developed by the BBC, was demonstrated.  This tool produces a visual diff of output from two different environments to detect unexpected variation in the visual layout following CSS or code changes.  Again, we do not do any CSS regression testing as part of our deployment process for the University’s new central Drupal CMS, but the demo during this talk showed how easy it could be to set up this type of testing. The primary benefit gained is the ability to quickly verify for multiple device sizes that you have not made an unexpected change to the visual layout of a page. CSS regression testing could be particularly useful in the context of ensuring consistency in Drupal theme output following deployment.
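
Wraith is driven by a small YAML configuration along the lines of the sketch below (the domains and paths are invented, and the exact option names should be checked against the documentation for the version in use), which is most of the setup needed to get visual diffs across several widths:

    # Rough sketch of a Wraith configuration; all values are invented
    domains:
      current: "https://www.example.ac.uk"       # the site as it is now
      new:     "https://staging.example.ac.uk"   # the environment with the CSS changes
    paths:
      home: /
      news: /news
    screen_widths:
      - 320
      - 768
      - 1280
    threshold: 5    # flag any page whose visual difference exceeds 5%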

I can highly recommend watching the session recording for this session.  It’s my favourite talk from this year’s DrupalCon and worth a look for any web developer.  The excellent session content is in no way specific to Drupal.  Also, the code samples used in the session are freely available and there are links to additional resources, so you can explore further after watching the recording.

Doing Behaviour-Driven Development with Behat

Having attended a similar, but much simpler and more technically focused, presentation at DrupalCamp Scotland 2014, my expectation from this session was to better understand Behaviour Driven Development (BDD) and how Behat can be used to automate testing using purpose-written scripts. The session showed how BDD can be integrated easily into Agile projects, because its main source of information is discussion of business objectives. In addition to user stories, examples were provided to better explain the business benefit.

I strongly believe that this testing process is something to look deeper into as it would enable quicker, more comprehensive and better documented user acceptance testing to take place following functionality updates, saving time in writing long documents and hours of manual work. Another clear benefit is that the examples being tested reflect real business needs and requests, ensuring that deliverables actually follow discussed user stories and satisfy their conditions. Finally, this highlights the importance of good planning and how it can help later project stages, like testing, to run more smoothly and quickly.
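
To give a flavour of how those business-readable examples look, here is a small hypothetical Behat feature (the feature, wording and steps are invented, not taken from the session); each scenario doubles as documentation and as an automated acceptance test:

    # Illustrative Gherkin feature for Behat; entirely hypothetical
    Feature: Course list
      In order to plan my studies
      As a student
      I need to see the courses I am enrolled on

      Scenario: An enrolled student views their course list
        Given I am logged in as a student enrolled on "Informatics 1"
        When I visit the "My courses" page
        Then I should see "Informatics 1" in the course list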

UX Concerns

Building a Tasty Backend

This session was held in one of the smaller venues and was hugely popular; there was standing room only by the start, or even “sitting on the floor room” only. Obvious health and safety issues there!

The focus of this session was to explore Drupal modules that can help improve the UX for CMS users who may be intimidated by or frankly terrified of using Drupal, demonstrating how it is possible to simplify content creation, content management and getting around the Admin interface without re-inventing the wheel.

The general recommended principle is “If it’s on the page and it really doesn’t need to be, get rid of it!”.  Specific topics covered included:

  • using the Field Group module to arrange related content fields into vertical tabs, simplifying the user experience by showing only what the user needs to see;
  • disabling options that are not really required or don’t work as expected (e.g. the Preview button when editing a node) to remove clutter from the interface;
  • using Views Bulk Operations to tailor and simplify how users interact with lists of content;
  • customising and controlling how each CMS user interacts with the Admin menu system using modules such as Contextual Administration, Admin Menu Source and Admin Menu Per Menu.

The most interesting thing about this talk in light of our experience developing the University’s new central Drupal CMS is how closely many of the recommendations outlined in this session match our own module selection and the way in which we are handling the CMS user experience.  It is reassuring to see our approach reflected in suggested best practices, which we have come to through our knowledge and experience of the Drupal modules concerned, combined with prototyping and user testing sessions that have at times both validated our assumptions and exposed flaws in our understanding of the user experience.  As was noted in this session, “Drupal isn’t a CMS, it’s a toolkit for building a CMS”; it’s important that we use that toolkit to build not only a robust, responsive website but also a clear, usable and consistent CMS user experience.

Project Management and Engagement

Getting the Technical Win: How to Position Drupal to a Sceptical Audience

This presentation started with the bold statement that no one cares about the technology, be it Drupal, Adobe, Sitecore or WordPress. Businesses care about solutions, and Drupal can offer the solution. Convincing people is hard; removing identified blockers is the easier bit.

In order to understand the drivers for change we must ask the correct questions. These can include:

  1. What are the pain points?
  2. What is the competition doing?
  3. Most importantly, take a step back and don’t dive into a solution immediately.

Asking these kinds of questions will help build a trusted relationship. To that end, it is sometimes necessary to be realistic, and sometimes there is a need to say no. Understanding what success will look like, and what happens if change is not implemented, are two further key factors.

The presentation then moved on to technical themes. It is important to acknowledge that some people have favoured technologies. While Drupal is not the strongest technology, it has the biggest community and, with that, huge technical resources, ensuring longevity and support. Another common misconception is around scalability; however, Drupal’s scalability has been proven.

In the last part of the presentation, attention turned to the sales process, focussing on the stages and technicalities involved in closing a deal. The presentation ended with a promising motto: “Don’t just sell, promise solutions instead.”

Although this was a sales presentation it offered valuable arguments to call upon when encouraging new areas to come aboard the Drupal train.

Selling Drupal

Looking to the Future

Future-Proof your Drupal 7 Site

This session primarily explored how best to future-proof a Drupal site by selecting modules chosen from the subset that have either been moved into Drupal core in version 8 or have been back ported into Drupal 7.  We are already using most of the long list of modules discussed here for the University’s new Drupal CMS.  For example, we recently implemented the picture and breakpoints modules to meet responsive design requirements, both of which have been back ported to Drupal 7.  This gives us a degree of confirmation that our module selection process will be effective in ensuring that we future-proof the University’s new central Drupal CMS.

In addition to the recommended modules, migrate was mentioned as the new upgrade path from Drupal 7 to Drupal 8, so we should be able to use the knowledge gained in migrating content from our existing central CMS to Drupal when we eventually upgrade from Drupal 7 to Drupal 8.

Symfony2 Best Practices from the Trenches

The framework underpinning Drupal 8 is Symfony2, and whilst we are not yet using Drupal 8, we are exploring web development languages and frameworks in other areas, one of which is Symfony2. As Symfony2 uses OO, it’s also useful to see how design patterns such as Dependency Injection are applied outside the more familiar Java context.

The best practices covered in this session seem to have been discovered through the bitter experience of the engaging presenter, and many of them are applicable to other development frameworks.  Topics covered included:

  • proper use of dependency injection in Symfony2 and how this can allow better automated testing using mock DB classes (see the sketch after this list);
  • the importance of separation of concerns and emphasis on good use of the service layer, keeping Controllers ‘thin’;
  • appropriate use of bundles to manage code;
  • selection of a standard method of configuration to ensure clarity, readability and maintainability (XML, YAML and annotations can all be used to configure Symfony2);
  • the importance of naming conventions;
  • recommended use of Composer for development using any PHP framework, not just Symfony2.
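
On the dependency injection and thin controller points, a minimal sketch (hypothetical bundle, class and service names, not code from the session) looks something like this:

    <?php
    // Illustrative only: constructor injection and a thin controller in Symfony2.

    namespace Acme\ReportBundle\Service;

    class ReportGenerator
    {
        private $conn;

        // The DBAL connection is injected rather than created here, so tests can
        // substitute a mock and exercise the logic without a real database.
        public function __construct(\Doctrine\DBAL\Connection $conn)
        {
            $this->conn = $conn;
        }

        public function latest()
        {
            return $this->conn->fetchAll(
                'SELECT id, title, created FROM report ORDER BY created DESC LIMIT 5'
            );
        }
    }

    // Registered in services.yml (YAML, shown here as a comment):
    //   acme_report.generator:
    //     class: Acme\ReportBundle\Service\ReportGenerator
    //     arguments: ["@database_connection"]

    // The controller stays thin: fetch from the service layer and hand off to the view.
    //   public function latestAction()
    //   {
    //       $reports = $this->get('acme_report.generator')->latest();
    //       return $this->render('AcmeReportBundle:Report:latest.html.twig',
    //                            array('reports' => $reports));
    //   }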

I have attended two or three sessions which talk about Symfony2 at this conference as well as a talk on using Ember.js with headless Drupal.  It’s interesting to note that whilst there are an increasing number of web development languages and tools to choose from, there are many conceptual aspects and best practices which converge across those languages and frameworks.  In particular, the frequent reference to the MVC architecture pattern, especially in the context of frameworks using OO, demonstrates the universality of this particular approach across current web development languages and frameworks. What is also clear from this session is that standardisation of approach and separation of concerns are important in all web development, regardless of your flavour of framework.

The Future of HTML and CSS

This tech-heavy session looked at the past, present and future of the relationship between HTML and CSS, exploring where we are now and how we got here, and how things might change or develop in future. Beginning with a short history lesson in how CSS developed out of the need to separate structure from presentation to resolve cross-browser compatibility issues, the session continued with an exploration of advancements in CSS such as CSS selectors, pseudo classes, CSS Flexbox, etc. and finally moved on to briefly talk about whether the apparent move in a more programmatic direction means that CSS may soon no longer be truly and purely a presentational language.
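
For a flavour of the features covered, the small sketch below (with invented class names) combines a Flexbox layout with a structural pseudo-class, the sort of presentational logic that prompts the question of whether CSS is still purely a presentational language:

    /* Illustrative only: a Flexbox layout plus a structural pseudo-class */
    .card-list {
      display: flex;
      flex-wrap: wrap;            /* let cards flow onto new rows at narrow widths */
    }

    .card-list .card {
      flex: 1 1 200px;            /* grow and shrink from a 200px basis */
      margin: 0.5rem;
    }

    .card-list .card:nth-child(odd) {
      background-color: #f4f4f4;  /* zebra striping without touching the markup */
    }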

There was way too much technical detail in this presentation to absorb in the allotted time, but it was an interesting overview of what is now possible with CSS and what may be possible in future.  In terms of the philosophical discussion around whether programmatic elements in CSS are appropriate, I’m not sure I agree that this is necessarily a bad thing.  It seems to me that as long as the ‘logic’ aspects of CSS are directed at presentation concerns and not business logic, there is no philosophical problem.  The difficulty may then lie in identifying the line between presentation concerns and business concerns.  At any rate, this is perhaps of less concern than the potential page load overhead produced by increasingly complex CSS!

Using Oracle Transportable Tablespaces to refresh Schemas/Tablespaces in Databases

If there is a requirement to refresh large schemas/tablespaces within a database regularly, it is worth considering using transportable tablespaces (TTS). This method is ideal for moving large amounts of data quickly, thus minimising downtime. The time taken will depend on the size of the data files being moved and the amount of DDL contained, but generally speaking the operation will not take much longer than the time to move the data files. Interestingly, TTS forms the basis for the new pluggable databases to be delivered in 12c; there is a “plugged_in” column in dba_tablespaces which will be set to “yes” after using TTS.

There are some limitations, which can be found in the links below, but in most cases we are able to use TTS.

http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#ADMIN11394

http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmxplat.htm#BRADV05432

If we are refreshing using point-in-time data, where there is no requirement for real-time data, we would use RMAN backups to create our TTS sets. This means there is no effect on our source system, i.e. no need to put the tablespaces into read-only mode for the duration of the copy.


TTS Example

I recently transported sits_data and sits_indx in STARDUST (target) from STARTEST (source) using an RMAN catalog and a recent backup of the source database. RMAN will handle the creation of an auxiliary database for you to facilitate the point-in-time recovery of the desired tablespaces.

 

Assumptions:

  • You are already using an RMAN catalog to backup the source system.
  • The target system already exists.
  • There is adequate space for another copy of the data files.
  • The tablespaces do not exist on the target system (i.e. they have been dropped or renamed).
  • TTS_IMP directory is pointing to /u22/oradata/STARDUST
  • The tablespaces have already been checked for self-containment using exec dbms_tts.TRANSPORT_SET_CHECK('sits_data,sits_indx');

Execution:

        1. login to source oracle server as oracle.
        2. source the environment file for STARTEST – orastartest
        3. connect to RMAN: rman target=/ catalog=recman/xxx@rmantest
        4. issue:
          RMAN> transport tablespace sits_data,sits_indx
          TABLESPACE DESTINATION '/u22/oradata/STARDUST'
          auxiliary destination '/b01/backup/TTS'
          until time 'sysdate-2/24';
          This will create a dump file and import script which will be used to import the ddl into the target db.
        5. source the environment file for STARDUST – orastardust
        6. copy import line from import script in /u22/oradata/STARDUST
        7. e.g. impdp / directory=TTS_DIR dumpfile=dmpfile.dmp transport_datafiles=/u22/oradata/STARDUST/sits_data01.dbf, /u22/oradata/STARDUST/sits_data02.dbf, /u22/oradata/STARDUST/sits_data03.dbf, /u22/oradata/STARDUST/sits_data04.dbf, /u22/oradata/STARDUST/sits_data05.dbf, /u22/oradata/STARDUST/sits_data06.dbf, /u22/oradata/STARDUST/sits_data07.dbf, /u22/oradata/STARDUST/sits_data08.dbf, /u22/oradata/STARDUST/sits_data09.dbf, /u22/oradata/STARDUST/sits_data10.dbf, /u22/oradata/STARDUST/sits_data11.dbf, /u22/oradata/STARDUST/sits_data12.dbf, /u22/oradata/STARDUST/sits_indx01.dbf, /u22/oradata/STARDUST/sits_indx02.dbf, /u22/oradata/STARDUST/sits_indx04.dbf, /u22/oradata/STARDUST/sits_indx05.dbf, /u22/oradata/STARDUST/sits_indx06.dbf, /u22/oradata/STARDUST/sits_indx03.dbf
        8. The tablespaces are now available in STARDUST.
        9. alter tablespace sits_data read write;
        10. alter tablespace sits_indx read write;

 

Resilient File Infrastructure


In the last two to three years a number of key services have been advanced, upgraded and replaced. With these changes have come some architectural alterations that have strained our ability to guarantee data integrity in the event of a disaster. This has come about largely because of vendors’ design choices about how their applications retain objects. For example, in some of these services the vendor now retains both transactional database information and the real objects referred to in the database, with the latter held in associated external file systems. This might take the form of a Word document or a PDF, where the application holds the metadata in the transactional database and the real file in an external file system.

Databases are now typically synchronised in real time across two data centres at King’s Buildings and Appleton Tower, and it follows that it is now very important that the objects held in the external file systems are replicated in a similar manner, to ensure that in the event of a disaster both the transactional database information and the associated external file system objects can be recovered to the same point in time with no data loss.

Most recently, attempts were made to address this problem: within the tel013 and uwp006 projects a resilient file system that could replicate content from King’s Buildings to Appleton Tower was prepared and evaluated. However, during evaluation a number of technical constraints emerged which showed that this solution would not be viable.

The requirement for the resilient file system still exists, and so we propose to do the following:

• Gather a complete set of the applications and their priority that should make use of this resilient file system service
• Evaluate the technical demands that these applications will impose on a resilient file system and prepare a set of technical requirements
• Catalogue a set of potential solutions that might be used to satisfy these requirements
• Evaluate these potential solutions against the technical requirements
• Identify the preferred solution and prepare a recommendation on which solution to implement

The information gathering and evaluation will be carried out by staff in both ITI and the Applications Division.

Iain Fiddes