HP Service Virtualization (SV) 2.30 was released on December 18, 2012. For those not familiar with the concept of service virtualization, it is a way to replace an interfacing system in a development or test environment with an emulated interface, thereby saving time and money by avoiding the need to build additional environments. A great overview of the service virtualization field can be found here.
At this stage the HP SV product is relatively new, but it is maturing rapidly. While only a handful of new features have been listed, for the most part these are significant. In version 2.30, HP has chosen to focus on solid functional improvements – a sensible move given the relatively small feature set of the previous version. HP SV is now more capable of virtualizing multiple layers of a large-scale composite application. In other words, it is now very well suited to enterprise-level applications.
At a high-level some of the notable new features include:
- JDBC protocol support
- Unlimited virtualization agent and protocol support
- Automatic session identification
- More flexible data manipulation options
- Multiple IBM WebSphere MQ operations on a single MQ service
- 10 new supported languages
JDBC is here
Ask any large enterprise about their application architecture, and the majority will tell you it includes a database. Of all the enhancements, the introduction of the JDBC protocol for virtualization is by far the most important. Taking a basic architecture as an example, HP SV can now virtualize not only the application server and middleware layers, but also the database layer. This broadens the options for virtualizing a composite application: all layers of the architecture are now candidates for virtualization.
HP SV supports the JDBC protocol by virtualizing the connection to a database via JDBC. Both J2SE applications using the JDBC API (v3.0, v4.0 or v4.1) and Java applications deployed on a J2EE application server (v1.4+) are supported. Given the wide use of JDBC, this should cover most database solutions.
Another important feature is the ability to configure an unlimited number of virtualization agents. This opens HP SV up to running many virtual services at once. Put simply, multiple applications can be virtualized simultaneously, across multiple environments (e.g. development, UAT, production, load test). Again, this is a great step forward in supporting the complexity and size of enterprise-level applications.
Many applications use sessions to maintain stateful behaviour. HP SV can now be configured to use session identifiers to automatically create stateful behaviours in a virtual data model. HP calls these stateful workflows tracks. This is a welcome enhancement, as creating tracks was a manual process up to this point, and the time saved should flow through to quicker implementation of virtual services.
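To illustrate the idea (this is a conceptual sketch, not HP SV's internals, and all names are hypothetical): each session identifier advances independently along an ordered "track" of responses, which is what makes the virtual service appear stateful to each caller.

```python
class StatefulVirtualService:
    """Toy model of session-keyed stateful 'tracks': each session ID
    advances independently through an ordered list of responses."""

    def __init__(self, track):
        self.track = track      # ordered responses forming the workflow
        self.positions = {}     # session id -> current position in track

    def respond(self, session_id):
        pos = self.positions.get(session_id, 0)
        # Stay on the final response once the track is exhausted
        response = self.track[min(pos, len(self.track) - 1)]
        self.positions[session_id] = pos + 1
        return response

svc = StatefulVirtualService(["CREATED", "PROCESSING", "COMPLETE"])
print(svc.respond("session-A"))  # CREATED
print(svc.respond("session-A"))  # PROCESSING
print(svc.respond("session-B"))  # CREATED (session B is independent)
```

The point of automatic session identification is that HP SV can now derive the equivalent of the `positions` bookkeeping from observed session identifiers, rather than requiring the track structure to be wired up by hand.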
Flexible Data Manipulation
The use of external files is very important to the flexibility of the HP SV product. Because data can be configured offline, data models can be created more quickly and efficiently. More complex data sets can also be built, allowing the data to drive the virtual service behaviours.
The export/import functionality has been improved with the introduction of what HP are calling 'schema first'. Essentially, you take a service definition/model and generate a data file (schema) from it. This schema is created in Microsoft Excel format, with the columns and primary/foreign key relationships automatically bound to the virtual service. At that point, it is a simple matter of populating the Excel file with the required data. This is also a great time saver because:
- The data file structure (schema) can be complex to create from scratch
- The manual binding of data file columns to virtual service request/response fields can be tedious
- The manual pairing of primary and foreign key constraints is avoided
HP SV can be configured to automatically synchronise with external data files. Any additional learned behaviours can be automatically written out to the exported data file; alternatively, any updates to the external data file can be automatically applied to the virtual service.
The user interface remains mostly the same; it has its usability quirks but is generally intuitive and quick to learn. Thankfully most of the quirks only expose themselves when working with the more advanced features.
A nice new feature is the Topology Diagram, which provides a visual representation of the virtual service being configured. Here you can quickly see how the application under test, virtual service and real service interact.
Another useful addition is the endpoint connection test. This simple option runs a quick check and informs you if your configured virtual endpoints conflict with any other endpoint configurations (for example, two services bound to the same port). With the introduction of unlimited agents, this could be a great time saver once many virtual services have been configured.
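Conceptually, this kind of check boils down to detecting overlapping host/port bindings before services are deployed. A rough, product-agnostic sketch (the service names and ports are invented for illustration):

```python
import socket
from collections import Counter

def find_conflicts(endpoints):
    """Return (host, port) pairs claimed by more than one virtual service.
    `endpoints` maps service name -> (host, port)."""
    counts = Counter(endpoints.values())
    return {ep for ep, n in counts.items() if n > 1}

def port_in_use(host, port):
    """Check whether something is already listening on host:port by
    attempting a (quickly timed-out) TCP connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)
        return s.connect_ex((host, port)) == 0

services = {
    "orders_v1":  ("localhost", 7200),
    "billing_v1": ("localhost", 7201),
    "orders_v2":  ("localhost", 7200),   # clashes with orders_v1
}
print(find_conflicts(services))  # {('localhost', 7200)}
```

With dozens of virtual services spread across many agents, catching a clash like this at configuration time is far cheaper than debugging it at runtime.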
The ability to manually import messages (SOAP and XML only) via the HP SV GUI has also been introduced. Given the flexible data manipulation options and learning capability, I don't expect this to be the first choice. To import messages manually, you need to do one of the following via the HP SV GUI:
- Paste each request or response (individually) from the clipboard, or
- Create request/response files and configure them for import
Both options are manual and tedious, with a single file required for each request and each response. These files must also be named in an appropriate format so that request and response files can be paired. The best bet is to explore the learning modes and import/export functionality of HP SV first; manual message import is best treated as a last resort when those options are not available.
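The pairing step can be pictured as follows. This is a generic sketch, not HP SV's actual naming scheme; it assumes a hypothetical convention of matching `<name>_request.xml` with `<name>_response.xml`.

```python
def pair_messages(filenames):
    """Pair request/response files sharing a common prefix, under a
    hypothetical <name>_request.xml / <name>_response.xml convention."""
    requests, responses = {}, {}
    for f in filenames:
        if f.endswith("_request.xml"):
            requests[f[:-len("_request.xml")]] = f
        elif f.endswith("_response.xml"):
            responses[f[:-len("_response.xml")]] = f
    # Only names present on both sides form a usable request/response pair
    return {k: (requests[k], responses[k]) for k in requests if k in responses}

files = ["getOrder_request.xml", "getOrder_response.xml",
         "cancelOrder_request.xml"]          # unpaired: no response file
print(pair_messages(files))
# {'getOrder': ('getOrder_request.xml', 'getOrder_response.xml')}
```

Even in this toy form, the fragility is apparent: one misnamed file silently drops a pair, which is why learning modes and the schema-first export/import are the more robust routes.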
As noted earlier, the enhancements in HP SV 2.30 have transitioned the product into a more suitable solution for the enterprise. Large composite applications are now viable targets for virtualization thanks to JDBC and unlimited agent support. The protocol breadth is expanding, opening HP SV up to more types of application architecture. External data is easier to manage, and several enhancements speed up configuration efforts. Overall, a solid set of features for a solid product.