Creating a ‘Bootiful’ new Visitor Registration System

The time has come to replace the venerable Visitor Registration System that has served the University now for quite some time. In June the team established for the COM011 project, destined to fulfil this task, ran user workshops and collected some 480 ‘user stories’ from interested parties around the University who use the incumbent system. User Stories are the cornerstone of the Agile methodology which has been chosen for the project.

Using Agile will allow the team to adapt to changing requirements and produce an end product that reflects the will of the users. What better way to complement this than by also introducing a new lightweight, flexible development tool that encourages rapid development and prototyping into the IS Apps technology stack? Spring Boot (or just ‘Boot’), which was released in April, is the culmination of an effort by the huge Java/Spring community to prove the speed and ease with which Java applications can be created. The technology was showcased, to massive excitement, when it was demonstrated that Boot could deliver an entire running web application in a single tweet.

Boot has been used as the basis of the new Visitor Registration project; we now have a framework in our code repository that can be reused by anyone who wants to quickly set up a fully functional web application, with a responsive front end, security enabled for various user roles, REST endpoints, SOAP endpoints and back-end Oracle integration. All of this functionality is fully unit and integration tested, in keeping with the Agile principle that software quality should always be paramount. The new Visitor Registration System, built on cutting-edge technologies, will hopefully stand the test of time as well as its predecessor.
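That tweet-sized demo used the Groovy Spring CLI; the same idea in plain Java looks something like the sketch below. This is illustrative only – the class name and greeting are ours, and it assumes the spring-boot-starter-web dependency is on the classpath:

```java
// Illustrative sketch of a minimal Spring Boot web application
// (Boot 1.0-era style; class name and greeting are hypothetical).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@EnableAutoConfiguration
public class App {

    // A single endpoint; Boot's auto-configuration supplies everything else.
    @RequestMapping("/")
    String home() {
        return "Hello from Boot";
    }

    public static void main(String[] args) {
        // Starts an embedded servlet container on port 8080 by default.
        SpringApplication.run(App.class, args);
    }
}
```

Everything not written here – the web server, the JSON converters, the dispatcher servlet – is supplied by Boot's auto-configuration, which is what makes such small applications possible.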

Oracle SOA vs Spring – SOAP Web Service throughput testing

We are soon going to embark on a major project to introduce enterprise notification handling at the University. Part of that will be the ability to handle a large number of messages in an efficient and robust manner. We already use Oracle SOA Suite here at the University, but wanted to test its throughput versus a lighter approach, that of Java and the Spring framework.

The scenarios

We chose four scenarios to test:

  • Basic assign: the parameter passed in is passed out as the response
  • DB Write: the parameter passed in is written to an Oracle database
  • DB Read: the parameter passed in is used to read a value from an Oracle database
  • DB Read/Write: the parameter passed in is written to an Oracle database, then read back out again
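For illustration, the shape of such a load test can be sketched in plain Java. This is a toy stand-in for the SoapUI run – the class name is ours, and a real test would invoke the SOAP endpoint inside the task:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadHarness {

    /**
     * Fires `calls` invocations of `task` at a fixed-size thread pool
     * (mirroring the max 10 connection constraint) and returns
     * { total wall time in ms, mean per-call latency in ms }.
     */
    public static long[] run(int calls, int poolSize, Runnable task) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<Future<Long>> latencies = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            latencies.add(pool.submit(() -> {
                long t0 = System.nanoTime();
                task.run();                      // a real SOAP call in practice
                return System.nanoTime() - t0;   // per-call latency in ns
            }));
        }
        long sumNs = 0;
        for (Future<Long> f : latencies) {
            sumNs += f.get();                    // also waits for completion
        }
        long totalMs = (System.nanoTime() - start) / 1_000_000;
        pool.shutdown();
        return new long[] { totalMs, (sumNs / calls) / 1_000_000 };
    }

    public static void main(String[] args) throws Exception {
        // "Basic assign" stand-in: the task does no real work.
        long[] r = run(500, 10, () -> { });
        System.out.println(r[0] + " ms total, " + r[1] + " ms avg per call");
    }
}
```

The same harness logic applies to all four scenarios; only the body of the task changes.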

Testing constraints

We then applied the same constraints to both Oracle SOA and Java:

  • A connection pool must be used, with the same settings (min 1, max 10 connections)
  • The same table structure/setup must be used with both technologies
  • The same back-end Oracle database must be used
  • Testing would be done using a SoapUI load test

For Oracle SOA, we set up a simple composite which tested the various features.

For Java Spring, we used Spring Boot, Spring Web Services, and Spring JPA.

The results

The results were as follows (total timings are rounded up to the nearest second):

Oracle SOA

            500 calls            2000 calls            5000 calls
Assign      2 sec |  293 ms avg   6 sec |  504 ms avg  16 sec |  593 ms avg
Write       3 sec | 1284 ms avg  10 sec |  861 ms avg  29 sec | 1094 ms avg
Read        2 sec |  389 ms avg   9 sec |  838 ms avg  21 sec |  803 ms avg
Write/Read  3 sec | 1038 ms avg  18 sec | 1644 ms avg  36 sec | 1403 ms avg

Java (Spring framework)

            500 calls           2000 calls           5000 calls
Assign      1 sec | 101 ms avg  1 sec |  82 ms avg  2 sec |  72 ms avg
Write       1 sec | 112 ms avg  2 sec | 232 ms avg  5 sec | 203 ms avg
Read        1 sec |  73 ms avg  1 sec | 116 ms avg  3 sec | 116 ms avg
Write/Read  1 sec | 271 ms avg  3 sec | 256 ms avg  6 sec | 234 ms avg

Conclusions

It is clear that the Java Spring solution gives better throughput times, and that is especially evident as we increase the load. However, it would be unfair to judge Oracle SOA on throughput times alone. It provides, for example, out-of-the-box message resilience and support for automated message retry that would have to be hand-coded in Java, even with the benefit of the Spring frameworks. Spring can, however, provide a very useful high-throughput entry point into Oracle SOA.

We want to benefit from the strengths of each of the technologies, so we are going to use the following:

  • Java Spring Web Services will be used as the initial entry point for creating/editing/deleting notification messages
  • The Java Spring WS will put a message in a queue for Oracle SOA
  • Oracle SOA will poll the queue for messages, then will apply the necessary business processing and rule logic for pushing notifications out
  • Oracle SOA will handle message retry in the event of processing failures
  • Java Spring Web Services will be used for pulling user notifications out for subscriber systems
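The handoff at the heart of this design can be modelled in a few lines of plain Java. This is a toy sketch only – the real queue would be a durable one such as Oracle AQ, retry would be SOA's own fault handling, and the class and method names here are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Predicate;

public class NotificationHandoff {

    // Stand-in for the queue between the Spring WS entry point and Oracle SOA.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    /** Spring WS side: accept a notification message and enqueue it. */
    public void accept(String message) {
        queue.add(message);
    }

    /**
     * SOA side: poll the queue and apply processing, allowing one first
     * attempt plus up to maxRetries retries per message. Returns the
     * number processed successfully.
     */
    public int drain(Predicate<String> process, int maxRetries) {
        int processed = 0;
        String msg;
        while ((msg = queue.poll()) != null) {
            boolean ok = false;
            for (int attempt = 0; attempt <= maxRetries && !ok; attempt++) {
                ok = process.test(msg);   // business rule processing would go here
            }
            if (ok) {
                processed++;              // failures past maxRetries are dropped in this toy model
            }
        }
        return processed;
    }
}
```

The point of the split is that the cheap, fast enqueue happens on the Spring side, while the expensive, resilient processing happens on the SOA side at its own pace.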

As with most of the modern web, building a solution is about choosing the right set of technologies and not choosing a single technology approach. We’re confident now that we can introduce the necessary scale to handle a modern enterprise notifications system.

Using Oracle Transportable Tablespaces to refresh Schemas/Tablespaces in Databases

If there is a requirement to refresh large schemas/tablespaces within a database regularly, it is worth considering transportable tablespaces (TTS). This method is ideal for moving large volumes of data quickly, thus minimising downtime. The time taken will depend on the size of the data files being moved and the amount of DDL contained, but generally speaking the operation will not take much longer than the time needed to move the data files. Interestingly, TTS forms the basis for the new pluggable databases delivered in 12c; there is a “plugged_in” column in dba_tablespaces which is set to “yes” after using TTS.

There are some limitations, which can be found in the links below, but in most cases we are able to use TTS.

http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#ADMIN11394

http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmxplat.htm#BRADV05432

If we are refreshing using point-in-time data, where there is no requirement for real-time data, we would use RMAN backups to create our TTS sets. This means there is no effect on our source system, i.e. no need to put the tablespaces into read only mode for the duration of the copy.


TTS Example

I recently transported sits_data and sits_indx in STARDUST (target) from STARTEST (source) using an RMAN catalog and a recent backup of the source db. RMAN handles the creation of an auxiliary database for you, to facilitate the point-in-time recovery of the desired tablespaces.

Assumptions:

  • You are already using an RMAN catalog to back up the source system.
  • The target system already exists.
  • There is adequate space for another copy of the data files.
  • The tablespaces do not exist on the target system (i.e. they have been dropped or renamed).
  • The TTS_DIR directory object points to /u22/oradata/STARDUST.
  • The tablespaces have already been checked for self-containment using:
    exec dbms_tts.transport_set_check('sits_data,sits_indx', TRUE);

Execution:

        1. Log in to the source Oracle server as oracle.
        2. Source the environment file for STARTEST – orastartest
        3. Connect to RMAN – rman target=/ catalog=recman/xxx@rmantest
        4. Issue:
           RMAN> transport tablespace sits_data, sits_indx
           tablespace destination '/u22/oradata/STARDUST'
           auxiliary destination '/b01/backup/TTS'
           until time 'sysdate-2/24';
           This will create a dump file and an import script which are used to import the DDL into the target db.
        5. Source the environment file for STARDUST – orastardust
        6. Copy the import line from the import script in /u22/oradata/STARDUST.
        7. e.g.
           impdp / directory=TTS_DIR dumpfile=dmpfile.dmp transport_datafiles=/u22/oradata/STARDUST/sits_data01.dbf,/u22/oradata/STARDUST/sits_data02.dbf,/u22/oradata/STARDUST/sits_data03.dbf,/u22/oradata/STARDUST/sits_data04.dbf,/u22/oradata/STARDUST/sits_data05.dbf,/u22/oradata/STARDUST/sits_data06.dbf,/u22/oradata/STARDUST/sits_data07.dbf,/u22/oradata/STARDUST/sits_data08.dbf,/u22/oradata/STARDUST/sits_data09.dbf,/u22/oradata/STARDUST/sits_data10.dbf,/u22/oradata/STARDUST/sits_data11.dbf,/u22/oradata/STARDUST/sits_data12.dbf,/u22/oradata/STARDUST/sits_indx01.dbf,/u22/oradata/STARDUST/sits_indx02.dbf,/u22/oradata/STARDUST/sits_indx04.dbf,/u22/oradata/STARDUST/sits_indx05.dbf,/u22/oradata/STARDUST/sits_indx06.dbf,/u22/oradata/STARDUST/sits_indx03.dbf
        8. The tablespaces are now available in STARDUST, but in read only mode.
        9. alter tablespace sits_data read write;
        10. alter tablespace sits_indx read write;


Overlapping projects: two sides of the same coin

Over the past few months in IS SSP we have been working on two major SITS projects: UG Paperless and Direct Paperless Admissions. Both projects have come out of the push towards moving away from paper applications and doing all of our admissions processing online. The UG Admissions project was first implemented last year and is now in its second iteration. Direct Admissions will be released for the first time this fall.

An ERASMUS study visit

We have said goodbye to our visitor from the University of Trento, Mauro Ferrari.  Mauro is a web application developer and used the EU’s ERASMUS scheme to fund a two-week study visit to learn how we develop software and in particular how we implement the University’s portal, MyEd.

Mauro’s highlights were:

  • Learning about MyEd, particularly Martin Morrey’s EDUCAUSE talk on analytics
  • Talking about our project methods, especially agile projects
  • Seeing how we use development tools such as JIRA and Bamboo
  • Learning about the Drupal Features module for managing changes in Drupal modules

Mauro also complimented us on our attention to the experience of users and our commitment to migrate data across versions.  He was interested to learn about SAP BI Suite and could see how it would help his University but thought that this would be beyond his team’s current capabilities.

Mauro was more critical of some aspects of the user experience in MyEd.  One example he gave was the way that the whole page redraws when a user changes something in the Event Booking portlet.  He also thought the list of available portlets was hard to scroll.  He gave demos of the Trento portal to several of us; there may be lessons that we can learn from their work.

I was interested to learn of Trento’s approach to managing identities with multiple roles.  Each of their systems prompts you to choose your role when you log in, so you have a single identity and can select which role to use if more than one applies.  Their portal allows you to group all your portlets regardless of role.  This would be a big change for us and I am not suggesting that we change tack, just noting that it was interesting to see the different approach.

Mauro also demonstrated their system for creating and managing applications, which covers everything from Doctoral positions to summer school places to public lectures to internal events and more.  Basically it is a sophisticated form editor with a back-end that lets organisers check applications and so forth.  It clearly works for Trento; for us I think the question it raises is whether a central service of this sort would be useful.  Such a service would combine Events Booking, our use of EventBrite for public events, OLLBook for evening courses, and possibly more.  I don’t see this as a priority but again it was interesting to compare the approaches.

My overall lesson from his visit is that we are a very effective and mature organisation with much to teach other universities.  Which is not to say that we know everything or that we cannot learn from other universities in return.

I would like to thank everyone who gave their time to talk to Mauro for helping to create a successful visit for our guest. I also thank Mauro for choosing us for his study; we were very pleased to be his hosts.

Resilient File Infrastructure


In the last 2-3 years a number of key services have been advanced, upgraded and replaced. With these changes have come some architectural alterations that have strained our ability to guarantee data integrity in the event of a disaster. This has come about due to design choices by vendors, primarily in how they retain objects in their applications. For example, in some services vendors now choose to retain both transactional database information and the real objects referred to in the database in associated external file systems. This might take the form of a Word document or a PDF, where the application holds metadata in the transactional database and the real file in an external file system.

Databases are now typically synchronised in real time across the two datacentres at King’s Buildings and Appleton Tower, so it is now very important that the objects held in the external file systems are replicated in a similar manner. This ensures that in the event of a disaster both the transactional database information and the associated external file system objects can be recovered to the same point in time with no data loss.

Most recently, within the tel013 and uwp006 projects, a resilient file system that could replicate content from King’s Buildings to Appleton Tower was prepared and evaluated. However, during evaluation a number of technical constraints emerged that showed this solution would not be viable.

The requirement for the resilient file system still exists, and so we propose to do the following:

  • Gather a complete set of the applications, and their priority, that should make use of this resilient file system service
  • Evaluate the technical demands that these applications will impose on a resilient file system and prepare a set of technical requirements
  • Catalogue a set of potential solutions that might be used to satisfy these requirements
  • Evaluate these potential solutions against the technical requirements
  • Identify the preferred solution and prepare a recommendation on which solution to implement

The information gathering and evaluation will be carried out by staff in both ITI and Applications Division.

Iain Fiddes

Bursary Management first releases

A new development was implemented in July 2014 to enable processing of Access Bursary applications within EUCLID.

Access Bursaries are one type of a number of centrally and locally administered bursaries and scholarships available from the University of Edinburgh.  Access Bursaries are centrally administered by the Student Administration department.

The internal user base is very small, and the main business user worked closely with the project team to produce a tailored solution.

The project was implemented mid-cycle and therefore started in the middle of the process.  All applications for 2014/5 had already been received, reviewed and scored in the legacy system.  Data was exported from the legacy system to create fund bid records in EUCLID.

Bursary staff have a suite of screens allowing them to:

  • View basic application data and imported references
  • Make an initial decision (award, reject, or place on waiting list)
  • Allocate the applicant to a specific bursary fund
  • Release bursary transactions to Finance for payment (due September 2014)

A further release is scheduled for October 2014, to include an online Application Form, Reference request and upload, and staff application processing with full application data.

Jon Martin and Morag Iannetta

Online Registration Phase 2 implemented

Online Registration was implemented late 2013, enabling all students registering on programmes starting on or after 01 January 2014 to register online.  The process allows students to confirm personal details, submit queries regarding information held, and to register for, or decline places on, programmes.  If the student requires Immigration clearance, they are only partially registered via the online registration process: Immigration Compliance staff at the University will then check appropriate documentation and complete the registration for the student.

In the Summer of 2014 the Online Registration functionality was enhanced ahead of the registration period for the main annual cohort of students:

  • Additional protected characteristics questions have been added to the registration process to satisfy HESA requirements
  • When registered/partially registered, students are able to view, update and upload documentation for passports and visas via student self-service for Immigration Compliance
  • Immigration Compliance functionality has been enhanced to allow users to mark documentation as checked, to request further documentation, and to complete the registration process for international students from ‘Immigration Overview’ screens in addition to ‘Validate International Documentation’ screens

Jon Martin and Morag Iannetta