Reference models, BPEL and stitching web services together

Christina Smart
Last modified 03 Aug, 2005
Published 03 Aug, 2005
A report from the e-Learning Framework (ELF) Developers Forum and Assessment Special Interest Group in Glasgow.

This two-day meeting, combining the 2nd ELF Developers Forum and the 17th Assessment SIG, was held in Glasgow on 20 and 21 July. The key themes explored were how workflow and reference model projects will stitch web services together to deliver pieces of online learning. Unfortunately I missed some of the sessions; below is an outline of the ones that I attended.

The ELF Developers Forum was held at the George Square Travel Inn. There was a lot of buzz about Warwick Bailey’s talk on the use of BPEL (Business Process Execution Language) to orchestrate assessment rendering, sequencing and packaging web services to deliver an adaptive learning sequence. This work is part of the ASSIS project, funded by the Distributed e-Learning strand. The example developed began with a question, which students would answer; the learning material then delivered to the student would vary depending on the way the student answered the question. Other projects might now consider using BPEL. Warwick had used the visual editor ActiveWebFlow (published by ActiveEndpoints) in the ASSIS project and found that it produced clean and usable code.
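To give a flavour of the kind of flow Warwick described, the sketch below shows how the delivered material branches on the student's answer. It is written in Python rather than BPEL, and the function names are made-up stand-ins for the real rendering, scoring and sequencing services, so treat it as an illustration of the orchestration idea only.

    # Rough sketch of the branching flow described above. The ASSIS work expresses
    # this orchestration in BPEL; the stand-in functions here are hypothetical.

    def render_question(item_id):
        # Stand-in for an assessment rendering service: deliver a question to the student.
        return {"item": item_id, "prompt": "What is 2 + 2?"}

    def score_response(item_id, response):
        # Stand-in for an assessment scoring service: mark the student's answer.
        return response.strip() == "4"

    def next_content(answered_correctly):
        # Stand-in for a sequencing/packaging service: choose the next learning material.
        return "extension-material" if answered_correctly else "remedial-material"

    def run_sequence(item_id, student_response):
        question = render_question(item_id)
        correct = score_response(item_id, student_response)
        # The material delivered next depends on how the student answered the question.
        return question["item"], next_content(correct)

    print(run_sequence("q1", "4"))   # ('q1', 'extension-material')
    print(run_sequence("q1", "5"))   # ('q1', 'remedial-material')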

Wilbert Kraan from CETIS outlined the IMS General Web Services (GWS) proposal for generating new specifications. Essentially, the GWS describes a standard way of constructing a web service, specifying things like messaging models and error handling.

The meeting also heard about developments in two of the reference model projects, COVARM and XCRI. Balbir Barn from Thames Valley University outlined the work of the COVARM project in the area of course validation. The project has been gathering requirements from staff in four universities on course validation processes. The COVARM project has produced models of the validation process, which staff have found very useful for visualising and managing the process. The next challenge will be to reach consensus on a process that can be mapped onto a specification for a web service.

XCRI aims to standardise course information for the UK sector. Mark Stubbs from Manchester Metropolitan University set out the scope of the project. The first phase had been about gathering resources and systems from organisations such as UCAS. Phase two will involve developing a draft schema and a curriculum service. The final phase will include developing a toolkit and demonstration for a curriculum service. Project developments and collected resources can be tracked on the project blog on the ELF site.

In another room of the Travel Inn, assessment developers were exchanging question items in a code bash. The consensus at the end of the day was that it had been a very useful event both in terms of system interoperability and networking with other developers.

The Assessment SIG was held at Strathclyde University in the plush Court rooms. The day began with Steve Lay of Cambridge Assessment outlining developments with QTI v2.0. Version 2.1 of the specification will include an XML description of the entire test.

The group then heard about developments in the FREMA project, which is developing a Reference Model for the assessment domain. David Millard from Southampton University outlined the project’s approach: starting by defining the assessment domain, looking at common usage, and then developing use cases and service profiles. Yvonne Howard presented the concept maps of the assessment domain that the project has been developing. These concept maps help stakeholders to identify which part of the domain they are working in. In addition, the project is developing a detailed knowledge base of the assessment area, with details of projects, people and resources. Steve Jeyes explored what the priority areas were for the assessment domain: could the SIG say which web services should be developed next? Areas discussed included item banks; processes to support summative assessment and the marking load of lecturers; grade books; and the links between feedback, learning and the formative process.

Niall Barr from Strathclyde has been involved in several web service projects in the assessment domain. He talked about the progress of RQP (Remote Question Protocol) and the Serving Maths project. He also reported on developments in the APIS toolkit project. Niall has written a tool called WSGI for generating interoperable WSDL from pseudo-code, which could be used by other ELF developers.

After lunch Dick Bacon illustrated the close link between detailed feedback and student learning in the SToMP learning environment. He had used QTI v1 and added extensions to analyse errors in the numeric responses that students give when solving equations. The new SToMP algorithms can identify student errors of order of magnitude and of scale. The system also allows students to gain marks for their working as well as for the final answer.
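As an illustration of the kind of numeric check being described (this is not the SToMP code itself; the function, threshold and categories below are made up for the example), a student's numeric answer could be classified roughly like this:

    # Illustrative classification of a numeric response as correct, out by an
    # order of magnitude, or out by some other scale factor (hypothetical sketch).
    import math

    def classify_numeric_response(given, expected, tolerance=0.01):
        if expected == 0:
            return "correct" if abs(given) <= tolerance else "incorrect"
        ratio = given / expected
        if abs(ratio - 1) <= tolerance:
            return "correct"
        if ratio > 0:
            exponent = math.log10(ratio)
            # e.g. answering 3400 instead of 340 is a power-of-ten slip
            if abs(exponent - round(exponent)) <= tolerance and round(exponent) != 0:
                return "order-of-magnitude error"
        return "scale or other error"

    print(classify_numeric_response(340.0, 340.0))    # correct
    print(classify_numeric_response(3400.0, 340.0))   # order-of-magnitude error
    print(classify_numeric_response(170.0, 340.0))    # scale or other error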

Unfortunately I had to leave the event at this point – so missed the discussion on the future of QTI v1.x and item banking, as well as the talk on item banking from Mhairi McAlpine of the Scottish Qualifications Authority.

My overall impression of the two days was that through the reference model projects and the use of BPEL, the community is now unpacking the detail of each of the ELF bricks and working out how services can be stitched together to support common processes in HE and FE, including learning.

The ELF Developers Forum presentations are now available

The Assessment SIG minutes and presentations are also now available

 
