Author: JDS

Is DevPerfOps a thing?

New technology terms are constantly being coined. One of our lead consultants answers the question: Is DevPerfOps a thing?


Hopefully it’s no surprise that traditional performance testing has struggled to adapt to the new paradigm of DevOps, or even Agile. The problems can be boiled down to a few key factors:

  1. Performance testing takes time—time to script, and time to execute. A typical performance test engagement is 4-6 weeks, during which time the rest of the project team has made significant progress. 
  2. Introducing performance testing to a CI/CD pipeline is really challenging. Tests typically run for an hour or more and can hold up the pipeline, and they are extremely sensitive to application changes as the scripts operate at the protocol level (typically HTTP requests). 
  3. Performance testing often requires a full-sized, production-like infrastructure environment. These aren’t cheap and are normally unavailable during early development, making “early performance testing” more of an aspirational idea, rather than something that is always practical.

All the software vendors will tell you that DevOps with performance testing is easy, but none of them have solved the above problems. For the most part, they have simply added a CI/CD hook without even attempting to challenge the concept of performance testing in DevOps at all.

A new world

So what would redefining the concept of performance testing look like? Allow me to introduce DevPerfOps. We would define the four main activities of a DevPerfOps engineer as:

  1. Tactical Load Tests
  2. Microbenchmarking
  3. APM and Code Reviews
  4. Container Workloads

Let’s start at the top.

Tactical Load Testing

This can be broadly defined as running performance tests within the limitations of a CI/CD pipeline. Is it perfect? No. Is it better than doing nothing, or struggling with a full end-to-end test? Absolutely, yes!

Tactical Load Testing has the following objectives:

  • Leverage CI toolkits like Jenkins for execution and results collection (a sketch of a simple results-based gate follows this list).
  • Use lightweight toolsets for execution (JMeter, Taurus, Gatling, LoadRunner, etc. are all perfectly good choices).
  • Configure performance tests to run for short durations; 15-20 minutes is the ideal balance. This can be achieved by minimising the run logic, reducing think times, and focusing on testing only individual business processes.
  • Expect to run your performance tests on a scaled-down environment, or a single server node. Think about what this will do to your workload profile.
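
To make this concrete, below is a minimal sketch of what a tactical CI gate could look like. It assumes a short JMeter run earlier in the pipeline (e.g. `jmeter -n -t test.jmx -l results.jtl`) has produced a results file in JMeter’s default CSV format; the file name and budgets are illustrative assumptions, not prescribed values.

```python
#!/usr/bin/env python3
"""CI gate sketch: fail the build if the short load test blew its budget."""
import csv
import math
import sys

RESULTS_FILE = "results.jtl"   # assumed output of: jmeter -n -t test.jmx -l results.jtl
P95_BUDGET_MS = 2000           # illustrative response-time budget for this stage
MAX_ERROR_RATE = 0.01          # allow at most 1% failed samples

def percentile(values, pct):
    """Nearest-rank percentile; good enough for a pass/fail gate."""
    ranked = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

with open(RESULTS_FILE, newline="") as f:
    rows = list(csv.DictReader(f))
if not rows:
    sys.exit("no samples found in " + RESULTS_FILE)

elapsed = [int(row["elapsed"]) for row in rows]            # response times in ms
errors = sum(1 for row in rows if row["success"].lower() != "true")

p95 = percentile(elapsed, 95)
error_rate = errors / len(rows)
print(f"samples={len(rows)} p95={p95}ms error_rate={error_rate:.2%}")

# a non-zero exit code fails the Jenkins stage and stops the pipeline
if p95 > P95_BUDGET_MS or error_rate > MAX_ERROR_RATE:
    sys.exit(1)
```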

Microbenchmarking

These are mini unit or integration tests designed to ensure that a component operates within a performance benchmark and that deviations are detected.

Microbenchmarking can target things like core application code, SQL queries, and web service integrations. There is an enormous amount of potential scope here; for example, you can write a microbenchmark test for a login, a report execution, or a data transformation. Anything goes if it makes sense.

Most importantly, microbenchmarking is a great opportunity to test early in the project lifecycle and provide early feedback.
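
To make this concrete, here is a minimal sketch of a microbenchmark written as an ordinary unit test, so it can run in the same CI job as the functional tests. The function under test and its budget are illustrative placeholders rather than a prescribed approach.

```python
"""Microbenchmark sketch: assert a component stays within its benchmark."""
import time
import unittest

def transform_records(records):
    # placeholder for the component under test, e.g. a data transformation
    return [{**r, "name": r["name"].strip().upper()} for r in records]

class TransformBenchmark(unittest.TestCase):
    BUDGET_SECONDS = 0.5   # illustrative benchmark agreed for 10,000 records
    RUNS = 5               # take the best of several runs to reduce noise

    def test_transform_meets_benchmark(self):
        records = [{"id": i, "name": f"  user {i}  "} for i in range(10_000)]
        timings = []
        for _ in range(self.RUNS):
            start = time.perf_counter()
            transform_records(records)
            timings.append(time.perf_counter() - start)
        best = min(timings)
        self.assertLess(best, self.BUDGET_SECONDS,
                        f"transform took {best:.3f}s; budget is {self.BUDGET_SECONDS}s")

if __name__ == "__main__":
    unittest.main()
```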

APM and Code Reviews

In past years, having access to a good APM tool or profiler, or even the source code, has been a luxury. Not anymore: these tools are everywhere, and while a fully featured tool like AppDynamics or New Relic is beneficial for developers and operations, a lot can be achieved with low-cost tools like YourKit Profiler or VisualVM.

APM tools and profilers allow slow execution paths to be identified, and memory usage and CPU utilisation to be measured. Resource-intensive workloads can be easily identified, and performance baked into the application.
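
Even without a commercial APM, the idea is easy to try: Python’s built-in cProfile, for instance, will surface the most expensive calls in an execution path. The workload below is a deliberately slow stand-in for real application code.

```python
"""Profiling sketch: find the slow execution path with the standard library."""
import cProfile
import pstats

def slow_report():
    # deliberately quadratic stand-in for a hot code path
    data = list(range(2_000))
    return sum(1 for x in data for y in data if x == y)

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# print the ten most expensive functions by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```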

Container Workloads

Containers will one day rule the world. If you don’t know much about containers, you need to learn about them fast.

In terms of performance engineering, each container handles a small unit of workload and container orchestration engines like Kubernetes will auto-scale that container horizontally across the entire data centre. It’s not uncommon for applications to run on hundreds or thousands of containers.

What’s important here is that the smallest unit of workload is tiny, and scaling is something that a well-designed application will get for free. This allows your performance tests to scale accordingly, and it allows the whole notion of “what is application infrastructure” to be challenged. In the container world, an application is a service definition… and the rest is handled for you.

So, is DevPerfOps a thing? We think so, and we think it will only continue to grow and expand from here. It’s time for performance testing to meet the needs of 2018 IT teams, and JDS can help. If you have any questions or want to discuss more, please get in touch with our performance engineering team by emailing [email protected]. If you’re looking for a quick, simple way of getting actionable insights into the performance health of your website or application, check out our current One Second Faster promotion.

Monitoring Atlassian Suite with AppDynamics

Millions of IT professionals use JIRA, Confluence, and Bitbucket daily as the backbone of their software lifecycle. These tools are critical to getting anything done in thousands of organisations. If you’re reading this, it’s safe to guess that’s the case for your organisation too!

Application monitoring is crucial when your business relies on good application performance. Just knowing that your application is running isn’t enough; you need assurance that it’s performing optimally. This is what a JDS client that runs in-house instances of JIRA, Confluence, and Bitbucket recently found out.

This client, a major Australian bank, started to notice slowness with JIRA, but the standard infrastructure monitoring they were using was not providing enough insight to allow them to determine the root cause.

JDS was able to instrument their Atlassian products with AppDynamics APM agents to gain insights into the performance of the applications. After deployment of the Java Agents to the applications, AppDynamics automatically populated the topology map below, known as a Flow Map. This Flow Map shows the interactions for each application, accompanied by overall application and Business Transaction health, and metrics like load, response time, and errors.

After some investigation, we found that the root cause of the JIRA slowness lay with some Memcached backends. Once we determined the root cause and resolved the issue, operational dashboards were created to help the Operations team monitor the Atlassian application suite. Below is a screenshot of a subsection of the dashboard showing Database Response Times, Cache Response Times, and Garbage Collection information.

An overview dashboard was also created to assist with monitoring across the suite. The Dashboard has been split out to show Slow, Very Slow, and Error percentages along with Average Response Times and Call Volumes for each application. Drilldowns were also added to take the user directly to the respective application Flow Map. Using these dashboards, they can, at a glance, check the overall application health for the Atlassian products. This has helped them improve the quality of service and user experience.

The bank’s JIRA users now suffer from far fewer slowdowns, particularly during morning peaks when many hurried story updates are taking place in time for stand-ups! The DevOps team is also able to get a heads-up from AppDynamics when slowness starts to occur, rather than when performance has fallen off a cliff.

So if you’re looking for more effective ways to monitor your Atlassian products, give our AppDynamics team a call. We can develop and implement a customised solution for your business to help ensure your applications run smoothly and at peak performance.

5 quick tips for customising your SAP data in Splunk

Understanding how your SAP system is performing can be a time-consuming process. With multiple environments, servers, APIs, interfaces and applications, there are numerous pieces to the puzzle, and stitching together the end-to-end story requires a lot of work.

That’s where SAP PowerConnect can assist. This SAP-certified tool simplifies the process by seamlessly collating your SAP metrics into a single command console: Splunk. PowerConnect compiles and stores data from each component across your SAP landscape and presents the information in familiar, easily accessible, and customisable Splunk dashboards. When coupled with Splunk’s ability to also gather machine data from non-SAP systems, this solution provides a powerful insight mechanism to understand your end user’s experience.

Given the magnitude of information PowerConnect can collate and analyse, you may think that setting it up would take days—if not weeks—of effort for your technical team. But one of PowerConnect’s key features is its incredibly fast time to value. Whatever the size of your environment, PowerConnect can be rapidly deployed and have searchable data available in less than ten minutes. It is also highly customisable: it can collect data from custom SAP modules, display it in meaningful, context-sensitive dashboards, and integrate with Splunk IT Service Intelligence.

Here are some quick tips for customising your SAP data with PowerConnect.

1. Use the out-of-the-box dashboards

SAP software runs on top of the SAP NetWeaver platform, which forms the foundation for the majority of applications developed by SAP. The PowerConnect add-on is compatible with NetWeaver versions 7.0 through to 7.5 including S/4 HANA. It runs inside SAP and extracts machine data, security events, and logs from SAP—and ingests the information into Splunk in real time.

PowerConnect can access all data and objects exposed via the SAP NetWeaver layer, including:

  • API
  • Function Modules
  • IDoc
  • Report Writer Reports
  • Change Documents
  • CCMS
  • Tables

If your SAP system uses S/4 HANA, Fiori, ECC, BW components, or all of the above, you can gain insight into its performance and configuration with PowerConnect.

To help organise and understand the collated data, PowerConnect comes with preconfigured SAP ABAP and Splunk dashboards out of the box based on best practices and customer experiences:

Sample PowerConnect for SAP: SAP ABAP Dashboard

Sample PowerConnect for SAP: Splunk Dashboard

PowerConnect centralises all your operational data in one place, giving you a single view in real time that will help you make decisions, determine strategy, understand the end-user experience, spot trends, and report on SLAs. You can view global trends in your SAP system or drill down to concentrate on metrics from a specific server or user.

2. Set your data retention specifications

PowerConnect also gives you the ability to configure how, and how long, you store and visualise data. Using Splunk, you can generate reports from across your entire SAP landscape or focus on specific segment(s). You may have long data retention requirements or be more interested in day-to-day performance—either way, PowerConnect can take care of the unique reporting needs for your business when it comes to SAP data.

You have complete control over your data, allowing you to manage data coverage, retention, and access:

  • Each data set collected by PowerConnect can be turned off, so you only ingest the data that interests you.
  • Fine-grained control over ingested data is possible by disabling individual fields inside any data set.
  • You can customise the collection interval for each data set to help manage the flow of data across your network.
  • Data sets can be directed to different indexes, allowing you to manage different data retention and archiving rates for different use cases.

3. Make dashboards for the data that matters to you

You have the full power of Splunk at your disposal to customise the default dashboards, or you can use them as a base to create your own. This means you can use custom visualisations, or pick from those available on Splunkbase to interpret and display the data you’re interested in.

Even better, you’re not limited to SAP data in your searches and on your dashboards; Splunk data from outside your SAP system can be correlated and referenced with your SAP data. For example, you may want to view firewall or load balancer metrics against user volumes, or track sales data with BW usage.

4. Compare your SAP data with other organisational data

It is also possible to combine PowerConnect data with another Splunk app, such as IT Service Intelligence (ITSI), to create, configure, and measure service levels and key performance indicators. SAP system metrics can feed into a centralised view of the health and key performance indicators of your IT services. ITSI can then help proactively identify issues and prioritise resolution of those affecting business-critical services. This out-of-the-box monitoring will give you a comprehensive view of how your SAP system is working.

The PowerConnect framework is extensible and can be adapted to collect metrics from custom-developed function modules. Sample custom extractor code templates are provided to allow your developers to quickly extend the framework to capture your custom-developed modules. These are the modules you develop to address specific business needs that SAP doesn’t address natively. As with all custom code, the level of testing will vary, and gaining access to key metrics within these modules can help both analyse usage and expose any issues within the code.

5. Learn more at our PowerConnect event

If you are interested in taking advantage of PowerConnect for Splunk to understand how your SAP system is performing, come along to our PowerConnect information night in Sydney or Melbourne in May. Register below to ensure your place.

How to maintain versatility throughout your SAP lifecycle

There are many use cases for deploying a tool to monitor your SAP system. Releasing your application between test environments, introducing additional users to your production system, or developing new functionality—all of these introduce an element of risk to your application and environment. Whether you are upgrading to SAP HANA, moving data centres, or expanding your use of ECC modules or mobile interfaces (Fiori), you can help mitigate the risk with the insights SAP PowerConnect for Splunk provides.

Upgrading SAP

Before you begin upgrades to your SAP landscape, you need to verify several prerequisites such as hardware and OS requirements, source release of the SAP system, and background process volumes. There are increased memory, session, and process requirements when performing the upgrade, which need to be managed. The SAP PowerConnect solution provides you with all key information about how your system is responding during the transition, with up-to-date process, database, and system usage information.

Triaging and correlating events or incidents is also easier than ever with PowerConnect, thanks to its ability to view historic information as a time series. You can look back to a specific point in time and see the health of the system or a specific server, its configuration settings, and so on. This is a particularly useful feature for regression testing.

Supporting application and infrastructure migration

Migration poses risks. It’s critical to mitigate those risks through diligent preparation, whether it’s ensuring your current code works on the new platform or that the underlying infrastructure will be fit for purpose.

For example, when planning a migration from an ABAP-based system to an on-premise SAP HANA landscape, there are several migration strategies you can take, depending on how quickly you want to move and what data you want to bring across. With a greenfield deployment, you start from a clean setup and bring across only what you need. The other end of the spectrum is a one-step upgrade with a database migration option (DMO), where you migrate in-place.

Each option will have its own advantages and drawbacks; however, both benefit from the enhanced visibility that PowerConnect provides throughout the deployment and migration process. As code is deployed and patched, PowerConnect will highlight infrastructure resource utilisation issues, greedy processes, and errors from the NetWeaver layer. PowerConnect can also analyse custom ABAP code and investigate events through ABAP code dumps by ID or user.

Increasing user volumes

Deployments can be rolled out in stages, be it through end users or application functionality. This is an effective way to ease users onto the system and lessen the load on both end-user training and support desk tickets due to confusion. As user volume increases, you may find that people don’t behave like you thought they would—meaning your performance test results may not match up with real-world usage. In this case, PowerConnect provides the correlation between the end-user behaviour and the underlying SAP infrastructure performance. This gives you the confidence that if the system starts to experience increased load, you will know about it before it becomes an issue in production. You can also use PowerConnect to learn the new trends in user activity, and feed that information back into the testing cycle to make sure you’re testing as close to real-world scenarios as possible.

It may not be all bad news, either. PowerConnect can also highlight unexpected user behaviour in a positive light: you might find that new users introduced to the system don’t find a feature as useful as you expected. You could then turn off the feature to reduce licence usage, or opt to promote it internally. PowerConnect will not only give you visibility into system resource usage, but also show what users are doing on the system to cause that load.

Feedback across the development lifecycle

PowerConnect provides a constant feedback solution with correlation and insights throughout the application delivery lifecycle. Typically migrations, deployments, and upgrades follow a general lifecycle of planning, deploying, then business as usual, before making way for the next patch or version.

During planning and development, you want insights into user activity and the associated infrastructure performance to understand the growth of users over time.

  • With the data retention abilities of Splunk, PowerConnect can identify trends from the last hour right back to the last year and beyond. These usage trends can help define performance testing benchmarks by providing concurrent user volumes, peak periods, and what transactions the users are spending time on.
  • In the absence of response time SLAs, page load time goals can be defined based on current values from the SAP Web Page Response Times dashboard.
  • With the ability to compare parameters, PowerConnect can help you make sure your test and pre-prod environments have the same configuration as production. When the test team doesn’t have access to run RZ10 to view the parameters, a discrepancy can be easy to miss and cause unnecessary delays.

Once in production, PowerConnect also gives you client-centric and client-side insights.

  • You can view the different versions of SAP GUI that users have installed or see a world map showing the global distribution of users.
  • Splunk can even alert from a SecOps perspective, and notify you if someone logs in from a country outside your user base. You can view a list of audited logins and browse the status of user passwords.
  • The power of Splunk gives you the ability to alert or regularly report on trends in the collected data. You can be informed if multiple logins fail, or when the ratio of CPU usage to work processes is too high. Automated scripts can be triggered when searches return results so that, for example, a ServiceNow ticket can be raised along with an email alert.

Even after a feature has completed its lifecycle and is ready to be retired, Splunk remains rich with historical data describing its usage, issues, and configuration settings, even if that raw data has since disappeared from SAP or been aggregated away.

Backed by the power of Splunk, and with the wealth of information being collected, the insights provided by PowerConnect will help you effectively manage your SAP system throughout your SAP lifecycle.

What is AIOps?

Gartner has coined another buzzword to describe the next evolution of ITOM solutions. AIOps uses the power of machine learning and big data to provide pattern discovery and predictive analysis for IT Ops.

What is the need for AIOps?

Organisations undergoing digital transformation are facing a lot of challenges that they wouldn’t have faced even ten years ago. Digital transformation represents a change in organisation structure, processes, and abilities, all driven by technology. As technology changes, organisations need to change with it.

This change comes with challenges. The old ITOps solutions now need to manage microservices, public and private APIs, and Internet-of-Things devices.

As consumers, IT managers are used to personalised movie recommendations from Netflix, or preemptive traffic warnings from Google. However, their IT management systems typically lack this sort of smarts—reverting to traffic light dashboards.

There is an opportunity in the market to combine big data and machine learning with IT operations.

Gartner has coined this concept AIOps: Artificial Intelligence for IT Ops.

“AIOps platforms utilize big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.” – Colin Fletcher, Gartner

Current State – Gartner Report

Gartner coined the term AIOps in 2016, although they originally called it Algorithmic IT Operations. They don’t yet produce a Magic Quadrant for AIOps, but that is likely coming.

Gartner has produced a report which summarises both what AIOps is hoping to solve, and which vendors are providing solutions.

Eleven core capabilities are identified, with only four vendors able to do all 11: HPE, IBM, ITRS, and Moogsoft.

How does Splunk do AIOps?

Splunk is well positioned to be a leader in the AIOps field. Their AIOps solution is outlined on their website. Splunk AIOps relies heavily on the Machine Learning Toolkit, which provides Splunk with about 30 different machine learning algorithms.

Splunk provides an enterprise machine learning and big data platform which will help AIOps managers:

  • Get answers and insights for everyone: Through the Splunk Search Processing Language (SPL), users can analyse past and present patterns of IT system and service performance, and predict future ones.
  • Find and solve problems faster: Detect patterns to identify indicators of incidents and reduce irrelevant alerts.
  • Automate incident response and resolution: Splunk can automate manual tasks, which are triggered based on predictive analytics.
  • Predict future outcomes: Forecast on IT costs and learn from historical analysis. Better predict points of failure to proactively improve the operational environment.
  • Continually learn to take more informed decisions: Detect outliers, adapt thresholds, alert on anomalous patterns, and improve learning over time.

Current offerings like ITSI and Enterprise Security also implement machine learning, taking advantage of anomaly detection and predictive algorithms.
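
To make the idea tangible, here is a toy sketch of the adaptive-threshold concept behind this kind of anomaly detection: rather than a fixed traffic-light threshold, each point is compared against a rolling baseline. Real platforms such as the Splunk Machine Learning Toolkit do far more than this; the series and parameters below are purely illustrative.

```python
"""Adaptive-threshold sketch: flag points that deviate from a rolling baseline."""
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=20, sigmas=3.0):
    """Yield (index, value) for points more than `sigmas` standard
    deviations away from the rolling mean of the previous `window` points."""
    history = deque(maxlen=window)
    for i, value in enumerate(series):
        if len(history) == window:
            mu, sd = mean(history), stdev(history)
            if sd > 0 and abs(value - mu) > sigmas * sd:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    # synthetic response times (ms) with one obvious incident at sample 45
    response_times = [100 + (i % 7) for i in range(60)]
    response_times[45] = 480
    for idx, val in detect_anomalies(response_times):
        print(f"sample {idx}: {val}ms looks anomalous")
```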

As the complexity in IT systems increases, so too will the need to analyse and interpret the large amount of data generated. Humans have been doing a good job to date, but there will come a point where the complexity will be too great. Organisations which can complement their IT Ops with machine learning will have a strategic advantage over those who rely on people alone.

Top 7 benefits of JDS Active Robot Monitoring

JDS has spent a lot of time this month showing how our bespoke synthetic monitoring solution, Active Robot Monitoring with Splunk, is benefitting a wide variety of businesses. ARM has been used to resolve website issues for a major superannuation company and is improving application performance for a large Australian bank. We’re also currently implementing an ARM solution for one of the biggest universities in Australia and a major medical company. Find out more about the benefits of JDS Active Robot Monitoring below.


Summary of ARM

ARM is a capability developed by JDS that enables synthetic performance monitoring for websites, mobile, cloud-based, on-premise, and SaaS apps. It provides IT staff and managers with a global view of what’s happening in your environment, as it’s happening. You can then use the customisable results dashboard to easily consume performance data, and drill down to isolate issues by location or transaction layer.

Top 7 benefits of ARM

1. Get an overall picture of an application’s end-to-end performance

How long does it take for your page to load, or for a user to log in? Can they log in? You may be getting green lights from all of the back-end components individually, but not realise the login process is taking three times longer than normal. ARM gives you the full picture, helping you spot performance issues you may not notice in the back-end.

2. Small increase in data ingested

If you’re already using Splunk, the amount of data you ingest with ARM is minimal, meaning you are getting even more out of your enterprise investment at an extremely low cost.

3. Fast time to value

Many IT projects can take years to show a return on investment, but ARM is not one of them. Once implemented, IT and development teams see value fast as their ability to home in on and resolve issues accelerates and the number of user issues decreases.

4. Performance and availability metrics based on user location

See how your website, system, or application performs in different locations to find out where issues may be occurring and how to fix them.

5. Proactively find and alert on issues before users do

Users discovering glitches or errors is damaging to a business’s reputation. The ARM robots are constantly on the look-out for problems in the system and will alert you when issues arise so you can resolve them before they negatively impact your customers.

6. Monitor performance 24/7, even while users are asleep

Humans sleep; robots don’t. ARM monitors your application 24/7 to ensure even your late-night customers have a stellar user experience.

7. Get unlimited transactions

Unlike other synthetic monitoring tools, which charge on a per-transaction basis (i.e. every user transaction you want to run incurs a new charge), ARM allows you unlimited transactions, so you can measure whatever actions you think your users may take.


How synthetic monitoring will improve application performance for a large bank

JDS is currently working with several businesses across Australia to implement our custom synthetic monitoring solution, Active Robot Monitoring—powered by Splunk. ARM is a simple and effective way of maintaining the highest quality customer experience with minimal cost. While other synthetic monitoring solutions operate on a price-per-transaction model, ARM allows you to conduct as many transactions as you want under the umbrella of your Splunk investment. We recently developed a Splunk ARM solution for one of the largest banks in Australia and are in the process of implementing it. Find out more about the problem presented, our proposed solution, and the expected results below.


The problem

A large Australian bank (‘the Bank’) needs to properly monitor the end-to-end activity of its core systems and applications. This is to ensure that the applications are available and performing as expected at all times. Downtime or poor performance, even for only a few minutes, could result in significant loss of revenue and reputational damage. While unscheduled downtime or performance degradation will inevitably occur at some point, the Bank wants to be notified immediately of any performance issues. They also want to identify the root cause of the problem easily, resolve the issue, and restore expected performance and availability as quickly as possible. To achieve this, the Bank approached JDS for a solution to monitor, help triage, and highlight error conditions and abnormal performance.

The solution

JDS proposed implementing the JDS Active Robot Monitoring (ARM) Splunk application. ARM is a JDS-developed Splunk application that combines scripts written with a variety of tools and languages (e.g. Selenium) with custom-built Splunk dashboards. In this case, Selenium is used to emulate actual users interacting with the web application. These interactions or transactions will be used to determine whether the application is available, whether a critical function of the application is working properly, and how the application is performing. All that information will be recorded in Splunk and used for analysis.

Availability and performance metrics will be displayed in dashboards, which fulfil several purposes—namely providing management with a summary view of the status of applications, and support personnel with more information to help identify the root cause of problems efficiently. In this case, Selenium was chosen as it provides complete customisation not available in similar synthetic monitoring offerings, and when coupled with Splunk’s analytical and presentation capability, provides the best solution to address the Bank’s problem.
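
For illustration, a stripped-down version of this pattern might look like the sketch below: a Selenium script times a login transaction and forwards the measurement to a Splunk HTTP Event Collector. The URLs, element IDs, and token are placeholders, and this is a simplified illustration of the approach rather than the ARM product itself.

```python
"""Synthetic-transaction sketch: time a Selenium login and send it to Splunk."""
import json
import time

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

success, elapsed = False, None
driver = webdriver.Chrome()
try:
    start = time.perf_counter()
    driver.get("https://app.example.com/login")                 # placeholder URL
    driver.find_element(By.ID, "username").send_keys("robot_user")
    driver.find_element(By.ID, "password").send_keys("********")
    driver.find_element(By.ID, "submit").click()
    elapsed = time.perf_counter() - start
    success = "Dashboard" in driver.title                       # crude success check
finally:
    driver.quit()

# record the synthetic transaction in Splunk for dashboarding and alerting
requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps({
        "sourcetype": "arm:transaction",
        "event": {
            "transaction": "login",
            "duration_s": round(elapsed, 3) if elapsed is not None else None,
            "success": success,
            "location": "sydney",
        },
    }),
)
```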

The expected results

With the implementation of the JDS ARM application at the Bank, the availability and performance of their core applications are expected to improve and remain at a higher standard. Downtime, if it occurs, will be quickly rectified, as support personnel will be alerted immediately and will have access to all the vital data required to perform a rapid root cause analysis. Management will have a better understanding of the health of the application and will be able to assign valuable resources more effectively to work on it.

What can ARM do for your business?

Throughout the month of November 2017, JDS is open to registrations for a free on-site workshop at your business. We will discuss Active Robot Monitoring and how it could benefit your organisation specifically. To register for this exclusive opportunity, please enter your information below and one of our account executives will contact you to set up a time to meet at your location.

Implementing an electronic signature in ALM

This is where organisations like the FDA (Food and Drug Administration in the United States) and TGA (Therapeutic Goods Administration in Australia) come in, enforcing regulatory controls around all aspects of the manufacturing process to minimise risk and ensure a high level of quality.

These regulatory bodies understand that effective quality assurance goes much further than just regulating the raw materials and manufacturing process. Any software used to control or support the manufacturing process must also adhere to strict quality standards, because a bug in the software can lead to problems in manufacturing with real-world consequences for patients. Software quality assurance can therefore literally be a matter of life or death.

To ensure that medical manufacturers conduct satisfactory software testing and maintain the required quality assurance standards, the FDA has published the General Principles of Software Validation document, which “outlines general validation principles that the Food and Drug Administration (FDA) considers to be applicable to the validation of medical device software or the validation of software used to design, develop, or manufacture medical devices.”

The JDS solution

JDS Australia recently implemented HPE Application Lifecycle Management (ALM) for one of our clients, a manufacturer of medicine delivering more than 10,000 patient doses per week to hospitals in Australia and overseas. ALM is a software testing tool that assists with the end-to-end management of the software testing lifecycle. This includes defining functional and non-functional requirements for a given application and creating test cases to confirm those requirements are met. It also manages all aspects of test execution, the recording and managing of defects, and reporting across the entire testing effort. ALM enables an organisation to implement and enforce their test strategy in a controlled and structured manner, while providing a complete audit trail of all the testing that was performed.

To comply with the FDA requirements, our client required customisation of ALM to facilitate approvals and sign-off of various testing artefacts (test cases, test executions, and defects). The FDA stipulates that approvals must incorporate an electronic signature that validates the user’s identity to ensure the integrity of the process. As an out-of-the-box implementation, ALM does not provide users with the ability to electronically sign artefacts. JDS developed the eSignature add-in to provide this capability and ensure that our client conforms to the regulatory requirements of the FDA.

The JDS eSignature Add-in was developed in C# and consists of a small COM-aware DLL file that is installed and registered on the client machine together with the standard ALM client. The eSignature component handles basic field-level checks and credential validation, while all other business rules are managed from the ALM Workflow Customisation. This gives JDS the ability to implement the electronic signature requirements as stipulated by the FDA, while giving us the flexibility to develop customer-specific customisations and implement future enhancements without the need to recompile and reinstall the eSignature component.

Let’s look at a simple test manager approval for a test case to see how it works.

To start with, new “approval” custom fields are added to the Test Plan module which may contain data such as the approval status, a reason/comment and the date and time that the approval was made. These fields are read-only with their values set through the eSignature Workflow customisation. A new “Approve Test” button is added to the toolbar. When the user clicks this button, the Test Approvals form is presented to the user who selects the appropriate approval status, provides a comment, and enters their credentials. When the OK button is clicked, the eSignature component authenticates the user in ALM using an API function from the HPE Open Test Architecture (OTA) COM library.
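
For readers who want to experiment with the same authentication step without the C# component, a rough equivalent can be scripted against the OTA COM library from Python on a Windows machine with the ALM client installed. The server URL and credentials below are placeholders, and this sketch only illustrates credential validation, not the full eSignature workflow.

```python
"""Credential-validation sketch against the ALM OTA COM API (Windows only)."""
import win32com.client

def validate_alm_credentials(server_url, username, password):
    """Return True if ALM accepts the supplied credentials."""
    td = win32com.client.Dispatch("TDApiOle80.TDConnection")
    td.InitConnectionEx(server_url)
    try:
        td.Login(username, password)   # raises a COM error on bad credentials
        td.Logout()
        return True
    except Exception:
        return False
    finally:
        td.ReleaseConnection()

if __name__ == "__main__":
    ok = validate_alm_credentials("https://alm.example.com/qcbin", "tester", "secret")
    print("signature accepted" if ok else "authentication failed")
```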

If the user is successfully authenticated, eSignature passes the relevant information to the ALM workflow script which sets the appropriate field values. The approvals functionality can be further customised to do things such as making the test case read-only or sending an email to the next approver in line to review and approve the test case.

As it currently stands, the eSignature has been implemented in three modules of ALM – Test Plan for test cases that require two levels of review and approval, Test Lab for test execution records that require three levels of review and approval, and Defects to manage the assignment and approvals of defects. This can easily be expanded to include other modules, such as the approvals of test requirements or software releases.

The JDS eSignature add-in has a very small footprint, is easy to install and configure, and provides our clients with the ability to effectively implement an electronic signature capability as part of their software testing strategy.

If you have compliance requirements or are seeking ways to automate your test management processes, contact our support team at JDS Australia. Our consultants are highly skilled across a range of software suites and IT platforms, and we will work with your business to develop custom solutions that work for you.

Contact us at:

T: 1300 780 432

E: [email protected]

The Splunk Gardener

The Splunk wizards at JDS are a talented bunch, dedicated to finding solutions—including in unexpected places. So when Sydney-based consultant Michael Clayfield suffered the tragedy of some dead plants in his garden, he did what our team does best: ensure it works (or ‘lives’, in this case). Using Splunk’s flexible yet powerful capabilities, he implemented monitoring, automation, and custom reporting on his herb garden, to ensure that tragedy didn’t strike twice.

My herb garden consists of three roughly 30cm x 40cm pots, each containing a single plant—rosemary, basil, and chilli. The garden is located outside our upstairs window and receives mostly full sunlight. While that’s good for the plants, it makes it harder to keep them properly watered, particularly during the summer months. After losing my basil and chilli bush over Christmas break, I decided to automate the watering of my three pots, to minimise the chance of losing any more plants. So I went away and designed an auto-watering setup, using soil moisture sensors, relays, pumps, and an Arduino—an open-source electronic platform—to tie it all together.

Testing the setup by transferring water from one bottle to another.

I placed soil moisture sensors in the basil and the chilli pots—given how hardy the rosemary was, I figured I could just hook it up to be watered whenever the basil in the pot next to it was watered. I connected the pumps to the relays, and rigged up some hosing to connect the pumps with their water source (a 10L container) and the pots. When the moisture level of a pot got below a certain level, the Arduino would turn the equivalent pump on and water it for a few seconds. This setup worked well—the plants were still alive—except that I had no visibility over what was going on. All I could see was that the water level in the tank was decreasing. It was essential that the tank always had water in it, otherwise I'd ruin my pumps by pumping air.

To address this problem, I added a float switch to the tank, so the pumps would stop running if I forgot to fill it up. Using a WiFi adapter, I connected the Arduino to my home WiFi. Now that the Arduino was connected to the internet, I figured I should send the data into Splunk. That way I'd be able to set up an alert notifying me when the tank’s water level was low. I'd also be able to track each plant’s moisture levels.

The setup deployed: the water tank is on the left; the yellow cables coming from the tank are for the float switch; and the plastic container houses the pumps and the Arduino, with the red/blue/black wires going to the sensors planted in the soil of the middle (basil) and right (chilli) pots. Power is supplied via the two black cables, which venture back inside the house to a phone charger.

Using the Arduino’s WiFi library, it’s easy to send data to a TCP port. This means that all I needed to do to start collecting data in Splunk was to set up a TCP data input. Pretty quickly I had sensor data from both my chilli and basil plants, along with the tank’s water status. Given how simple it was, I decided to add a few other sensors to the Arduino: temperature, humidity, and light level. With all this information nicely ingested into Splunk, I went about creating a dashboard to display the health of my now over-engineered garden.
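
Because the wire format is just lines of text over TCP, the sensor side can even be mimicked in a few lines of Python, which is handy for testing the Splunk TCP data input before any hardware is wired up. The host, port, and field names below are placeholders standing in for my setup.

```python
"""Sketch: send a sensor reading to a Splunk TCP data input, as the Arduino does."""
import socket

SPLUNK_HOST = "splunk.local"   # placeholder: machine running the TCP data input
SPLUNK_PORT = 5514             # placeholder: port the TCP input listens on

reading = {
    "plant": "basil",
    "moisture": 402,
    "temperature_c": 31.5,
    "humidity_pct": 48,
    "light": 780,
    "tank_ok": 1,
}

# key=value pairs are extracted automatically by Splunk at search time
line = " ".join(f"{k}={v}" for k, v in reading.items())

with socket.create_connection((SPLUNK_HOST, SPLUNK_PORT), timeout=5) as sock:
    sock.sendall((line + "\n").encode("utf-8"))
```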

The overview dashboard for my garden. The top left and centre show current temperature and humidity, including trend, while the top right shows the current light reading. The bottom left and centre show current moisture reading and the last time each plant was watered. The final panel in the bottom right gives the status of the tank's water level.

With this data coming in, I was able to easily understand what was going on with my plants:

  1. I can easily see the effect watering has on my plants, via the moisture levels (lower numbers = more moisture). I generally aim to maintain the moisture level between 300 and 410. Over 410 and the soil starts getting quite dry, while putting the moisture probe in a glass of water reads 220—so it’s probably best to keep it well above that.
  2. My basil was much thirstier than my chilli bush, requiring about 50–75% more water.
  3. It can get quite hot in the sun on our windowsill. One fortnight in February recorded nine 37+ degree days, with the temperature hitting 47 degrees twice during that period.
  4. During the height of summer, the tank typically holds 7–10 days’ worth of water.

Having this data in Splunk also alerts me when the system isn't working properly. On one occasion in February, I noticed that my dashboard was consistently displaying that the basil pot had been watered within the last 15 minutes. After a few minutes looking at the data, I was able to figure out what was going on.

Using the above graph from my garden’s Splunk dashboard, I could see that my setup had correctly identified that the basil pot needed to be watered and had watered it—but I wasn't seeing the expected change in the basil’s moisture level. So the next time the system checked the moisture level, it saw that the plant needed to be watered, watered it again, and the cycle continued. When I physically checked the system, I could see that the Arduino was correctly setting the relay and turning the pump on, but no water was flowing. After further investigation, I discovered that the pump had died. Once I had replaced the faulty pump, everything returned to normal.

Since my initial design, I have upgraded the system a few times. It now joins a number of other Arduinos I have around the house, sending data via cheap radio transmitters to a central Arduino that then forwards the data on to Splunk. Aside from the pump dying, the garden system has been functioning well for the past six months, providing me with data that I will use to continue making the system a bit smarter about how and when it waters my plants.

I've also 3D printed a nice case in UV-resistant plastic, so my gardening system no longer has to live in an old lunchbox.

Our team on the case

Using Splunk and Active Robot Monitoring to resolve website issues

Recently, one of JDS’ clients reached out for assistance, as they were experiencing inconsistent website performance. They had just moved to a new platform, and were receiving alerts about unexpectedly slow response times, as well as intermittent logon errors. They were concerned that, were the reports accurate, this would have an adverse impact on customer retention, and potentially reduce their ability to attract new customers. When manual verification couldn’t reproduce the issues, they called in one of JDS’ sleuths to try to locate and fix the problem—if one existed at all.

The Plot Thickens

The client’s existing active robot monitoring solution using the HPE Business Process Monitor (BPM) suite showed that there were sporadic difficulties in loading pages on the new platform and in logging in, but the client was unable to replicate the issue manually. If there was an issue, where exactly did it lie?

Commencing the Investigation

The client had deployed Splunk and it was ingesting logs from the application in question—but its features were not being utilised to investigate the issue.

JDS consultant Danesen Narayanen entered the fray and used Splunk to analyse the data received, which let him immediately understand the issue the client was experiencing. He confirmed that the existing monitoring solution was reporting the problem accurately, and that the issue had not been affecting the client’s website prior to the re-platform.

Using the data collected by HPE BPM as a starting point, Danesen was able to drill down and compare what was happening with the current system on the new platform to what had been happening on the old one. He quickly made several discoveries:

1. There appeared to be some kind of server error.

Since the re-platform, there had been a spike in a particular server error. Our JDS consultant reviewed data from the previous year, to see whether the error had happened before. He noted that there had previously been similar issues, and validated them against BPM to determine that the past errors had not had a pronounced effect on BPM—the spike in server errors seemed to be a symptom, rather than a cause.

Database deadlocks were spiking.
It was apparent that the error had happened before.

2. There seemed to be an issue with user-end response time.

Next, our consultant used Splunk to look at the response time by IP address over time, to see if there was a particular location being affected—was the problem at the server end, or the user end? He identified one particular IP address which had a very high response time. What’s more, this was a public IP address, rather than one internal to the client. It seemed like there was an end-user problem—but what was the IP address that was causing BPM to report an issue?

Daily response time for all IPs (left axis), and for the abnormal IP (right axis). All times are in seconds.

Tracking Down the Mystery IP Address

At this point, our consultant called on another JDS staff member to help track down who owned the problematic IP address. As it turned out, the IP address was owned by the client and was being used by a security tool running vulnerability checks on the website. After the re-platform, the tool had gone rogue: rather than running for half an hour as intended, it continued to open new web sessions throughout the day for several days.

The Resolution

Now that the culprit had been identified, the team were quickly able to log in to the security tool to turn it off, and the problem disappeared. Performance and availability times returned to what they should be, BPM was no longer reporting issues, and the client’s website was running smoothly once more. Thanks to the combination of Splunk’s power, HPE's active monitoring tools, and JDS’ analytical and diagnostic experience, resolution was achieved in under a day.

Case Study: Australian Red Cross Blood Service enhances critical application performance

“We depend on technology to deliver essential services to our people and healthcare professionals around the country,” says Wayne Bolton, manager of Applications and Integration Services for the Blood Service. “If our applications are unavailable, slow or not performing as intended, we’re potentially impacting patient care. In this business, time is critical.”

Historically, the Blood Service tested and monitored its infrastructure and applications in a manual, siloed and time-consuming manner. Given the criticality of its services and the highly regulated industry it operates in, the Blood Service needed more insightful information about the quality, performance, and availability of its applications.

Today, the Blood Service has that insight. Over a period of time, it has adopted best practices to gain visibility into its critical business services and understand what its users are experiencing. This was achieved by taking an end-to-end lifecycle approach to optimising applications from pre-production through to post-production or day-to-day operations management. How? By using Application Lifecycle Management (ALM) with HP Quality Center software and HP Functional Testing software in conjunction with HP Application Performance Management including HP LoadRunner software, HP SiteScope software, HP Business Process Monitor software, HP Business Service Management software and HP Diagnostics software.

Objective

Drive improvements in the quality, performance and availability of business-critical services

Approach

Engaged HP Platinum Partner JDS Australia to secure application delivery and perform validation services

IT improvements

  • Obtained a single point of truth for application validation records
  • Unified functional, performance and quality management
  • Gained operational efficiencies by migrating to a paperless testing environment
  • Enhanced the Blood Service’s reputation
  • Provided evidence of a code issue to the application vendor to ensure a timely fix

About the Red Cross Blood Service

The Australian Red Cross Blood Service (the Blood Service) performs a critical role in Australia’s health system. It provides quality blood products, essential services, and leading-edge research to improve the lives of patients.

A non-profit organisation with more than 4,000 employees, the Blood Service must be ready to respond around the clock to deliver blood and search its extensive records for specialised requirements for particular patients. More than 520,000 Australians are blood donors across 120 collection sites every year.

The organisation’s infrastructure comprises a range of servers across two main sites and approximately 40 enterprise applications, of which the mission-critical National Blood Management System (NBMS) has the largest footprint, with more than 3,000 users. The performance of its systems is, therefore, a top priority.

Industry

Health

Primary applications

  • ePROGESA (Blood management system)
  • Oracle eBusiness Suite (Financials and Procurement)
  • Chris 21 (Human Resources)
  • Genesys Call Centre Enterprise Software
  • Collection Site Reference Application (CSRA)
  • HP Application Lifecycle Management Solution
  • HP Application Performance Management Solution

Primary hardware

  • IBM P570
  • IBM Blades

Primary software

  • AIX
  • Linux
  • Windows® XP
  • HP Application Lifecycle Management including:
    • HP Quality Center Software
    • HP Functional Testing Software
  • HP Application Performance Management including:
    • HP LoadRunner Software
    • HP Diagnostics Software
    • HP Business Service Management Software
    • HP Business Process Monitor Software
    • HP SiteScope Software

Regulatory compliance drives change

The catalyst was the need to demonstrate the validation state of the NBMS to both internal and external auditors.

“In the beginning, we were looking for a solution that would allow us to better manage the validation of the National Blood Management System and meet our compliance obligations,” explains Bolton. “In the past, validations were performed on paper, needed considerable manpower and would often take months to complete. In 2006, we decided to do what we could to automate the process and began looking around for a suitable solution.

“We selected HP based on the solution’s deep functionality, automation capabilities, scalability potential and industry leadership.”

Partnering for success

Understanding that it could reach faster time to value with an implementation partner, the Blood Service engaged JDS Australia (JDS) to assist with the project.

An HP Platinum Partner and winner of coveted HP Software Solutions Partner Excellence Awards for six consecutive years, JDS is regarded as an expert in the field of software testing, application/infrastructure monitoring and service management.

JDS believes that, for most organisations, getting a partner on board takes the risks out of deployment and maximises return on software investment. “For the Blood Service, leveraging specialist services from JDS has really paid off. It allowed the organisation to focus on core competencies and strategic direction, while we managed testing and monitoring execution. It also brought something else – a roadmap for the future.”

“Embarking on this project without JDS would have been a difficult, if not an impossible undertaking,” explains Bolton. “With their assistance, we were up and running on HP Quality Center very quickly and had standardised on a central quality platform. We were managing and controlling software requirements and test cases with relative ease. Not long after this, we implemented HP Functional Testing and began functional and regression testing of more than 70 percent of our core business processes.

“For the first time in our history, we had a single source of truth for our testing assets and could much more easily demonstrate our validation efforts to internal and external audit. Our people could go to a central location to access, manage and reuse test cases.

“We gained operational efficiencies by migrating to a paperless testing environment and achieved real-time visibility into our validation progress. Overall, HP Application Lifecycle Management (ALM) unified functional, performance and quality management. It increased visibility and enabled us to better align business and technology requirements.”

Today, there are numerous examples where the Blood Service is realising benefits. “For instance, we can now run execution reports on the validation scripts on our blood manufacturing application in 30 minutes, rather than perhaps spending days recalling paper records from off-site storage,” adds Bolton.

“In addition, when we encountered an issue with HP Functional Testing not recognising a certain Java class, we asked JDS for help. They collaborated closely with the HP R&D team and within three weeks a global patch was released. This would not have been possible without the high-level relationship JDS has with HP.”

Broadening the HP horizon

Getting results on the board quickly with quality and compliance management paved the way for the next phase of evolution with HP and JDS. The Blood Service decided to upgrade its NBMS to take advantage of significant technical enhancements.

This third-party application, known as ePROGESA, is used by many blood banks around the world. Yet the Blood Service was cautious in its approach towards the upgrade as it was such a major undertaking and others had experienced issues.

“If we were going to execute this upgrade successfully, it became clear that we needed performance testing,” says Bolton. “We were transitioning from a green-screen application that was not scalable to a new n-tier J2EE environment. It was not a trivial matter and we needed to ensure it would perform as intended when launched.”

Once again, the Blood Service engaged JDS. This time, it was to validate system performance prior to going live on ePROGESA and ensure the vendor was meeting its contractual obligations. JDS leveraged Application Performance Lifecycle solutions including HP LoadRunner to emulate predicted loads and HP LoadRunner Diagnostics to deep-dive into the detail. HP SiteScope was also used to correlate infrastructure metrics while the system was under load.

Bolton says the project was unusually complex: “We were working with three different suppliers: one was responsible for the infrastructure, another handled the application, and JDS was looking after performance testing. It made for an interesting working relationship, because we had to marry input from three sources prior to going live.”

Predicting and proving system behaviour

“During this time, we had a situation where ePROGESA was simply not performing as intended,” says Bolton. “After evaluating a range of possibilities, we threw more memory at it. When this didn’t yield any results, we began to suspect there could be a bottleneck in the application’s code.”

“When discussing this with JDS, we again turned to HP for answers. We needed to have a detailed look at the problem. Within hours, JDS had isolated the specific line of code that was causing the problem.”

JDS explains, “We used HP LoadRunner in conjunction with HP LoadRunner Diagnostics to deep-dive into the detail and independently ascertain that the performance issues experienced were indeed code-related. It was the silver bullet the Blood Service needed and a patch for ePROGESA was issued.”

Subsequent performance testing and tuning allowed the Blood Service to meet its objectives and deliver response times that were acceptable to the business.

“This gave us the confidence to go live,” says Bolton. “The beauty of HP LoadRunner is that you can draw a line in the sand to benchmark performance and correlate this to what is happening on the hardware. By using it alongside HP LoadRunner Diagnostics, you can access all the detailed information you need. This was incredibly valuable and the insight obtained helped us make informed decisions about the readiness of ePROGESA and minimise the risks.”

Monitoring end-user behaviour

Next on the Blood Service’s agenda was enterprise-grade production monitoring. JDS recommended HP Business Service Management (BSM) and associated tools including HP SiteScope, HP Business Process Monitor (BPM) and HP Diagnostics.

These tools were complemented with HP BPM transactions to synthetically gauge end-user performance and availability across its distributed locations and learn of potential issues before end-users were impacted.

Within a short period of deploying these solutions, the Blood Service realised significant operational benefits. “We quickly had evidence to show the business that we were meeting the ePROGESA service levels of 99.98 percent availability,” says Tony Oosterbeek, acting ICT manager. “Actual response times on business transactions were being met and, in fact, far exceeded expectations. We had an early warning system to resolve issues before our users were impacted. More importantly, we had complete traceability of the performance and availability our end-users experienced.”
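
For a rough sense of what a 99.98 percent availability target permits (an illustrative calculation, not a figure from the case study):

    allowed downtime = (1 - 0.9998) × 8,760 hours per year
                     = 0.0002 × 8,760
                     ≈ 1.75 hours of unplanned downtime per year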

Since then, the Blood Service has adopted this same proactive approach to address system availability for other applications including its Collection Site Reference Application – an in-house system used by its national call centre. “Recently, we needed to find out if the application could scale up from 100 to 135 users,” explains Brett Renton, acting IS operations manager. “HP LoadRunner was put to work and we quickly determined that the user breakpoint would be 180 people. This gave us the confidence we needed to go ahead.”
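
Read as a quick capacity check (an illustrative interpretation of those figures, not analysis from the case study), the measured breakpoint gave the Blood Service comfortable headroom over its target load:

    target load:         135 concurrent users
    measured breakpoint: 180 concurrent users
    headroom = 180 / 135 ≈ 1.33, i.e. roughly 33 percent above the target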

“Another example of the benefits we are realising with HP Business Service Management (BSM) is with our Oracle™ Financials suite. After we decided to upgrade the software, hardware and database elements to improve performance, we leveraged HP Business Process Monitor (BPM) to better understand the timing issues around business transactions and used this data to justify the cost of the upgrade. It was a really good way to make a clear business case and the results speak for themselves. We now proactively know exactly what our end users are experiencing and can detect any performance or availability issues across all key geographic locations.”

Business Benefits

  • Improved ability to meet regulatory audits through access to validation data in hours rather than days or weeks
  • Achieved availability of 99.8 percent for the National Blood Management System (NBMS)
  • Achieved proactive end-user visibility of business transaction times for the Oracle™ Financials application
  • Mitigated risks of deploying applications in critical functions

Solid future

Adopting a lifecycle approach to quality, performance and availability of key business applications has enhanced the Blood Service’s capability.

There is now a focus on extending the discipline of validation to other systems: “Although we’ve improved processes in areas including requirements, testing and performance, the greatest outcome is that we have brought all these best practices together. This combination provides collaborative processes and analysis capabilities for traceability and consistent reporting across the lifecycle. It has brought the organisation to a common place that allows us to achieve governance, compliance and accountability at a lower risk.”

“Thanks to HP and JDS, we’ve realised the full advantages of adopting a lifecycle management approach to managing our applications – from testing, through pre-production, to go-live. We’ve mitigated risks, ensured quality and delivered more responsive, stable services to support our users and the organisation’s mission to improve the lives of patients,” says Bolton.

Case Study: Flash Group optimises performance of the Global Corporate Challenge Website

In 2009, Flash experienced some performance issues with the Global Corporate Challenge (GCC) website—the world’s first virtual health programme that encourages corporations to help their employees get active and healthy—that resulted in speed degradation, functionality errors and site downtime.

With the number of GCC participants predicted to double in 2010 to 120,000, Flash needed to drive a higher level of application performance and mitigate the risks it had experienced previously. As a result, the company turned to HP Preferred Partner JDS Australia for a solution, adopting a Business Technology Optimization (BTO) approach to application performance with HP LoadRunner software to predict the behaviour and performance of the GCC website under load.

Objective

Flash Group wanted to mitigate the risk of performance issues for the launch of the 2010 Global Corporate Challenge (GCC) website.

Approach

Flash engaged HP Preferred Partner JDS Australia and adopted a Business Technology Optimization (BTO) strategy with HP LoadRunner software to obtain an accurate picture of end-to-end system performance.

IT improvements

  • Ensured the quality and performance of the GCC website for the 2010 programme.
  • Established a standardised procedure for load testing the website.
  • Identified and eliminated performance bottlenecks to tune for better performance.
  • Matured its website development methodology.
  • Raised its profile and credibility as an organisation that produces high-performing, user-friendly websites.
  • Delivered 99.99 percent uptime on its systems, with web servers only reaching 20 percent system capacity and page response times of less than two seconds, which resulted in a high-quality user experience and enhanced the programme’s brand value.

About Flash Group

Flash Group (Flash) is one of Australia’s fastest growing full-service advertising agencies, offering integrated services including above- and below-the-line advertising with in-house digital, strategy and design.

The company’s 30 staff are dedicated to servicing a group of high-profile clients that spans retail, healthcare, travel, fashion, hardware, consumer electronics and entertainment. This includes leading brands such as Pioneer, Stanley, Global Corporate Challenge, Contiki, Origin Energy, Clive Peeters, and more.

Every year, the company assists Global Corporate Challenge (GCC), a world-first virtual health programme that encourages corporations to help their employees get active and healthy. The programme sees people from around the globe form teams and don pedometers for a period of 16 weeks and record their daily step count on the GCC website, which was designed and built by Flash.

Industry

Marketing and Advertising

Primary hardware

  • Multiple Virtual Web and Database Servers hosted externally running Windows Server 2008

Primary software

  • HP LoadRunner software

Predicting system behaviour and application performance

“The stability and performance of the GCC website is critical to the long-term success of the programme,” explains Carla Clark, digital producer, Flash Group.

“While we undertook some basic testing in 2009, we did not have adequate visibility to obtain an accurate end-to-end picture of the website’s performance, particularly at peak loads. This was apparent when we experienced issues during the 2009 programme, and it was the impetus for us to seek a performance validation solution.

“Despite the broad experience of our team, we wanted to leverage specialised expertise in performance validation, so we invited JDS Australia to recommend an appropriate software solution. We settled on HP LoadRunner software, due to its functionality, reliability and versatility.”

Partnership provides expertise and speeds time to benefit

An HP Platinum Partner and winner of the coveted HP Software and Solutions Partner of the Year Award for the past four years, JDS is widely regarded as a leader in the BTO space. The company provides extensive and in-depth knowledge of HP’s suite of testing and monitoring solutions, offering support to clients in a variety of industries.

The account manager at JDS Australia believes this was quite an unusual project, as Flash is one of the first creative agencies he has come across that realised the importance of performance validation for a website it had developed. “Ensuring that mission-critical systems such as the GCC website are available and performing as intended is something that all organisations grapple with. However, we don’t often see creative agencies trying to predict system behaviour and application performance at this level – that’s usually the domain of IT teams or developers.

“For organisations (such as Flash) that don’t have in-house performance testing expertise, getting a partner on board takes the hassle out of deployment. In this instance, JDS provided a roadmap to help Flash mitigate the risk of deploying the GCC website and prevent the costly performance problems it had previously incurred. We helped the team stress test the website to handle the large increase in participants and determine the peak loads and transactional volumes, which in turn enabled us to recommend how best to set up the IT infrastructure. The testing also identified bottlenecks, which the website developers rectified this year.”

Carla Clark believes that having an HP partner involved made all the difference to this project. She says, “Having JDS on board meant that we could focus on our core competencies, while allowing them to do what they do best—provide the services needed to ensure the GCC website would be available and performing as and when required. JDS has assisted Flash in getting the most out of HP LoadRunner in a short space of time.”

Mitigating risk and gaining confidence

The company’s vision in adopting HP LoadRunner was to ensure the GCC website would be scalable in line with the rising number of users. “We wanted to adopt a long-term approach to this project and create a robust website to keep pace with the programme’s planned growth,” explains Tim Bigarelli, senior developer at Flash. “This also entailed the migration to a new IT infrastructure to further enhance our ability to support the website’s evolution.”

Flash began preparations for the launch of this year’s website by having JDS test the previous application on the old infrastructure to establish performance benchmarks. The next round of tests was applied to the new code base using both the old and new infrastructure. “The results uncovered were extremely beneficial, as they enabled us to redevelop the website for maximum performance and functionality. But more importantly, it provided us with complete visibility into the performance of the application from end to end, which enabled us to verify that the new application would sustain loads of 1,000 concurrent users over the first peak hour on launch day, with an average login time of 7-8 minutes per user and average response times for all pages under two seconds to avoid abandonment,” adds Bigarelli.
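
To put those workload figures in rough perspective: if the 7-8 minutes is read as an average logged-in session duration, Little’s Law (L = λW) relates the reported numbers as follows. This is an illustrative back-of-the-envelope calculation, not one from the case study.

    arrival rate λ = concurrency L / session duration W
                   = 1,000 concurrent users / 7.5 minutes
                   ≈ 133 new sessions per minute
                   ≈ 8,000 sessions over the peak hour

That is broadly in line with the peak of 8,403 first-hour visitors reported after launch.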

Better decisions, operational efficiencies and improved client satisfaction

As a result of deploying HP LoadRunner to validate the performance of the GCC website, Flash has realised considerable benefits. The organisation has facilitated better decision-making, particularly on the development side, gained operational efficiencies and improved client satisfaction.

Clark says, “HP LoadRunner software takes the guesswork out of the GCC website’s development. It provides confidence that the application will work as intended and it gives us the data we need to support our decisions. In short, it helps us avoid application performance problems at the deployment stage.

“By giving us a true picture of end-to-end performance, diagnosing application and systems bottlenecks and enabling us to tune for better performance, we mitigated the risk of failure for the GCC website. And with access to facts, figures and baseline measurements, we were able to tune the application for success.”

Putting the website to the test

Following considerable testing, Flash launched the GCC website on May 13, 2010. As expected, traffic was extremely high, with an average of 130,000 visitors over the first two days, and a peak of 8,403 visitors in the first hour.

“The GCC website performed according to our expectations and we are delighted with the business outcomes of HP LoadRunner software,” says Clark.

“Thanks to the preparative measures we put in place, our systems thrived and delivered 99.99 percent uptime, with our web servers only reaching 20 percent system capacity and page response times of under two seconds. This enabled us to provide a high-quality user experience, which is enhancing the programme’s brand value.

“Overall, HP LoadRunner software helped us solve key issues this year and identify areas for performance improvements for next year. We have benefited from knowing that performance testing prevents potential failures, such as the ones we experienced last year. As a result, we have considerably reduced the opportunity cost of defects, while driving productivity and quality in our operational environment to deliver a robust GCC website this year that performs as intended.”

Business Benefits

  • Mitigated the risks of poor performance through a consistent approach to load testing, enabling confident, informed decisions about the performance and scalability of the GCC website.
  • Gained a true picture of end-to-end performance, which enabled better decision-making and functionality changes.
  • Increased client satisfaction through a fast, high-performing website.
  • Resolved issues with the production architecture and configuration before users were impacted.
  • Gained understanding and confidence in the performance characteristics of the website prior to going live.

Looking ahead

HP will continue to play a key role as the performance validation backbone of the GCC website. By leveraging the functionality and flexibility of HP LoadRunner software, Flash will continue to derive value from predicting system behaviour and application performance. The company is also exploring options to extend its HP investment by utilising the HP LoadRunner scripts with HP Business Availability Center software to monitor the performance and availability of the GCC website from an end-user perspective.

In the future, Clark is keen to have someone in the team take the lead on testing. She says: “This project has demonstrated to us just how important testing really is, so we are focused on ensuring it becomes part of our routine development. We are also keen to share the functionality of HP LoadRunner with other clients with similar-sized projects.

“On the whole, HP LoadRunner software has helped Flash mature its website development methodology. We deployed a higher quality GCC website, improved client satisfaction and raised our profile and credibility as an organisation that produces high-performing, user-friendly and scalable websites,” concludes Clark.