Author: JDS

Fast-track ServiceNow upgrades with the Automated Test Framework (ATF)

Why automate?

ServiceNow ships two releases a year, delivering new features to consistently support best-practice processes. ServiceNow has flagged that it will move towards “N-1” upgrades in 2019, meaning every customer will need to accept each release in the future. To keep up, ServiceNow customers should consider automated testing. It provides regression testing that is executed consistently for each release, and reduces the time and effort needed per upgrade. Effective test automation on ServiceNow is provided by the Automated Test Framework (ATF), which is included in the ServiceNow platform for all applications and enables no-code and low-code users to create automated tests with ease. Aided by complementary processes and methodology, ATF reduces upgrade pain by cutting back on manual tests, reducing business impact and accelerating development efficiency.

What is the Automated Test Framework?

Testing frameworks can be viewed as a set of guidelines used for creating and designing test cases. Test automation frameworks take this a step further by using a range of automation tools designed to make the practice of quality assurance more manageable for testing teams.

ServiceNow’s ATF application combines the above into an easy-to-use, easy-to-implement solution built specifically for ServiceNow applications and functionality. Think of ATF as a tool to streamline your upgrade and QA processes: you build tests that check whether software or configuration changes have ‘broken’ any existing functionality. It also spares developers from invasive activities like code refactoring just to generate new test cases.

Overview of test creation in ServiceNow ATF

ATF’s unique features include:

  • Codeless setup of test steps and templates (reusable across multiple tests)
  • End-to-end Service Catalog testing, including submitting Catalog forms and request fulfilment
  • Module testing, e.g. ITSM (Incident, Problem & Change Management)
  • Testing forms in the Service Portal (included in the London release)
  • Batching with test suites, e.g. executing multiple tests in a defined order, triggered on a schedule
  • Custom step configurations & unit testing using the Jasmine test framework

Using the codeless test step configuration in ATF to set a Variable value on a Service Catalog form

Simplify your upgrades

For most software upgrades, testing is a burdensome and complicated process, with poorly defined test practices often leading to compromised software quality and potentially, failed projects. Without automation, teams are forced to manually test upgrades - a costly, time-consuming and often ineffective exercise. In most cases, this can be attributed to a lack of testing resources and missing test processes, leading to inconsistent execution.

To address these issues, ATF introduces structure and enforces consistency across the different tests applied to common ServiceNow use cases. Test consistency is important, as it forms a baseline of instance stability that accounts for existing issues, meaning defects caused by upgrades can be identified more reliably. ATF also allows full automation of test suites, where the run order can be configured for tests that depend on each other.

The following illustrates a simple test dependency created in ATF:

An example ‘hierarchical’ test structure implemented within ATF

A common issue resolved by automated testing is the impact on Agile development cycles. Traditionally, developers would run unit tests by hand, based on steps also written by the developer, potentially leading to missed steps and incomplete scenarios. ATF can be used to run automated unit tests between each build cycle, saving time through early detection and remediation of software issues along with shortened development cycles. The fewer issues present before the upgrade, the fewer issues introduced during the upgrade.

A common post-upgrade requirement is to execute a full regression test against the platform. This means accounting for the countless business rules, client scripts, UI policies and web forms that may have been affected. Rather than factoring all of these scenarios into lengthy UAT/regression cycles, ATF can reduce the load on the business by running the common, repetitive test cases that involve multiple user interactions and data entry.

The below example shows how a common use case for testing, field ‘state’ validation on a Service Portal form, is applied using the test step module:

Validating visible & mandatory states of variables with ATF against a Record Producer in Service Portal

Unfortunately, not everything can be automated with ATF out of the box; there are gaps around complex UI components, portal widgets and custom development work. It is also important to note that custom functionality adds overhead to the framework implementation, requiring specialised scripting knowledge and use of the ‘Step Configurations’ module to create custom test steps.

When configured properly, it’s possible to automate 40-60% of test cases with ATF, depending on environment complexity and timeframes. The benefits are most visible during regression testing after an upgrade, and unit testing during development projects.

In summary, implementing ATF is a great way of delivering value to ServiceNow upgrade projects and enabling development teams to be more agile. Assessing the scope and depth of testing using an agreed methodology is a better way to determine what is required, and to achieve ‘buy-in’ from others, than taking an ‘automate everything’ approach.

JDS is here to help

Recognised as experts in the local test automation space for over 12 years, JDS’ specialist team has adapted this experience to provide a proven framework for ServiceNow ATF implementations.

We have developed a Rapid Automated Testing tool which can use your existing data to take up to 80% of the work out of building your automated tests. Contact us today to find out how we can build test cases based on real data and automate the development of your testing suite in a fraction of the time that manual test creation would require.

We can help you get started with ServiceNow ATF. In just a few days, a ServiceNow ATF “Kick Start” engagement will provide you with the detail you need to scope and plan ATF on your platform.

Conclusion

Want to know more? Email [email protected] or call 1300 780 432 to reach our team.

Our team on the case

Our ServiceNow stories

ServiceNow Catalog Client Scripts: g_form Clear Values

ServiceNow’s Service Portal allows businesses to interact with their users/customers with a catalog of various items, giving users easy access to services that help them conduct their business activities.

Recently, JDS was engaged to assist a customer with a Human Resources (HR) onboarding form that integrated with SAP. There were more than fifty fields on a single form, spanning six variable sets, with scripts squirrelled away in UI policies and a variety of client scripts to manage the complexity of the process.

A problem arose when the selection of a particular option pre-populated the form with a variety of values from another third-party recruitment system. If the user changed their mind and chose another option, fields were hidden that still retained values. This caused erroneous values to be posted to the backend.

As the form was large, scrolling well off the screen, it was difficult to troubleshoot problems, but JDS realised that the ability to reset the form to its default values whenever key choices changed or UI policies were switched would drastically simplify the form. Unfortunately, g_form.clearValue() didn’t work quite as expected.

Behind the scenes, there are a number of special functions that allow these catalog items to respond to a user’s input. In this article, we’re going to examine the behaviour of g_form.clearValue(), which is used to clear values when switching between options.

Depending on the requirements of a particular service catalog item, there can be a need to clear the values on a form and start again (such as when switching between users with different privileges), but this is where g_form.clearValue() can be a little tricky.

Consider this: the code below seems simple enough—clear each value, but look at what happens when we try that…

The first two fields respond as you’d expect, clearing values from a reference field and a plain text field, but look at the colour option: it switches from pink to black, while the option of having 4G switches from no to yes.

The problem lies in how ServiceNow handles fields. A selection control, like the one used for cover colours, switches to ‘none,’ but as this particular list of choices doesn’t have ‘none’ as an option, ServiceNow takes the FIRST option, which in this case is black.

We could simply make sure all our selection controls have ‘none’ as an option, but in a large catalog with complex forms, there’s a good chance we’ll miss some of them. Besides, what we really need is a way to reset to our default values. Having ‘none’ as an option won’t solve that issue.

When it comes to Boolean fields, like our 4G field above, the clear values function switches options, reversing them!
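Both quirks can be reproduced with a small mock that mimics the clearValue() behaviour described above. The field names, types, and choice lists below are illustrative, not taken from the original form:

```javascript
// Minimal mock reproducing the described g_form.clearValue() behaviour.
// Field names and choice lists are illustrative.
function makeMockForm(fields) {
    // fields: { name: { type, value, choices? } }
    return {
        getValue: function (name) { return fields[name].value; },
        clearValue: function (name) {
            var f = fields[name];
            if (f.type === 'choice') {
                // No 'none' option in the list: fall back to the FIRST choice.
                f.value = f.choices.indexOf('') >= 0 ? '' : f.choices[0];
            } else if (f.type === 'boolean') {
                // Boolean fields are flipped rather than cleared.
                f.value = !f.value;
            } else {
                // Reference and plain text fields clear as expected.
                f.value = '';
            }
        }
    };
}

var form = makeMockForm({
    requested_for:     { type: 'reference', value: 'abel.tuter' },
    short_description: { type: 'string',    value: 'New phone' },
    cover_colour:      { type: 'choice',    value: 'pink', choices: ['black', 'pink', 'blue'] },
    has_4g:            { type: 'boolean',   value: false }
});

['requested_for', 'short_description', 'cover_colour', 'has_4g']
    .forEach(function (name) { form.clearValue(name); });

// cover_colour is now 'black' (the first choice) and has_4g has flipped to true.
```

Running this, the reference and text fields clear, but the colour jumps to the first choice and the Boolean inverts, exactly the confusing behaviour users saw on the real form.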

On complex forms with lots of combinations, especially those that don’t fit on a single page, this can cause unpredictable behaviour that confuses users. The solution JDS developed is to introduce the ability to reset to default.

Rather than trying to guess which selection a user should have when there’s no option for ‘none,’ or simply flipping Boolean values, JDS recommends resetting the form to the values it had when it originally loaded. There’s no way to do this out-of-the-box, so we’re going to have to get a little creative. The actual code is quite small, but it has to be placed in a strategic location.

Step One: Build a Catalog Client Script Library

Behind the scenes, ServiceNow retains a large amount of information about the widgets on each portal page, including the value of various fields, so we’re going to tap into this to reset our form to the default values. To do this, we need to add a UI script that runs in the background whenever our catalog loads. We’ll use this as a way of adding a library of common functions that are accessible from any catalog item.


By default, ServiceNow does not allow access to components like the document object or the angular object from a catalog client script, so we’re going to sidestep this limitation by accessing these through our UI Script.

There’s a good reason ServiceNow doesn’t allow access to these components by default: people tend to abuse these libraries and do all kinds of ridiculous things, like inserting values directly into forms instead of using the g_form library built into ServiceNow.

As the adage goes, “With great power comes great responsibility.” It’s fine to use components like angular and the DOM, but they need to be used wisely. Don’t abuse them.

As you’ll see, we’ll use the angular object to get our values but NOT to set our values, as that’s where the danger lies. Instead, we’ll use the tried and tested g_form library to reset the defaults on our catalog item.

Our code is concise—only seven lines. The actions we’re undertaking are:

  • Retrieve the field object from the widget’s angular scope
  • Loop through that object to pick up each of the initial form values
  • Store them in a session object so we can retrieve them later
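The original seven lines aren’t reproduced here, but the logic they describe can be sketched as follows. In ServiceNow the field objects would come from the catalog widget’s angular scope; here they are passed in as a parameter so the flow is easy to follow. The function name and the storage key are assumptions:

```javascript
// UI Script sketch: capture each field's load-time value and park it in
// sessionStorage, so any client script on the same page can read it back.
// In ServiceNow, `fields` would be retrieved from the widget's angular
// scope; the function name and storage key are assumptions.
function captureDefaults(fields) {
    var defaults = {};
    fields.forEach(function (field) {
        defaults[field.name] = field.value; // the initial form value
    });
    sessionStorage.setItem('catalog_defaults', JSON.stringify(defaults));
}
```

Because the values are serialised to session storage rather than kept in a closure, every client script on the page can get at them without any shared-variable plumbing.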

If you haven’t used session storage before, it provides a convenient way of passing information between components/pages. HTML is inherently stateless, so session storage allows you to overcome this limitation and share information freely. Normally, session storage is used while switching between pages, but in this case, we’re using it as a global variable shared by client scripts on the same page.

When it comes to resetting the form, it’s simply a case of looping through our session storage variable.

Notice how we pass the g_form through to our script along with ignore, a comma-separated list of any fields we don’t want reset.
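The reset described above can be sketched like this; the function name and storage key are assumptions, but the shape matches the description: loop over the stored defaults, skip anything in the ignore list, and set everything else through g_form:

```javascript
// Reset sketch: restore each stored default through the supported g_form
// API (never the DOM), skipping fields named in `ignore`, a comma-separated
// list. Function and storage-key names are assumptions.
function resetToDefaults(g_form, ignore) {
    var defaults = JSON.parse(sessionStorage.getItem('catalog_defaults') || '{}');
    var skip = (ignore || '').split(',').map(function (f) { return f.trim(); });
    Object.keys(defaults).forEach(function (name) {
        if (skip.indexOf(name) === -1) {
            g_form.setValue(name, defaults[name]);
        }
    });
}
```

Setting values via g_form, rather than writing into the DOM or angular scope directly, keeps the reset compatible with ServiceNow upgrades.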

Step Two: Allow the catalog widget to access this library

Now that we’ve built a library of common functions, the next step is to ensure it’s available to our catalog items. To do this, we need to add this as a JavaScript library to the service catalog widget. There are a few steps involved, but it’s straightforward, as described below.

You’ll find this within ServiceNow under the SC Catalog Item widget in the Service Portal’s Widgets module.

Add a new Dependency called Catalog Client Script Library at the bottom of this widget record.

This new Dependency needs a JS Includes.

Which then refers to our UI Script.

All of these entries are simply a means of pointing to the UI Script we developed previously.

Step Three: Set and Get Defaults

Our UI Script will automatically get the defaults for each catalog item when it loads, so there’s no need for an onLoad script. All we need to do is identify when and where we want to reset our form to its default values.

Once we’ve identified the trigger for resetting the form to its default state, we can add a Catalog Client Script to that onChange event.

By passing the g_form object through to our UI Script, we can use the default ServiceNow means of manipulating form elements rather than doing something risky that might backfire with an upgrade.

Also, notice we’ve passed a few fields we want to leave untouched.
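Put together, the onChange script can be as small as this. It assumes the UI Script library exposes a resetToDefaults(g_form, ignore) helper; the helper name and the field names in the ignore list are illustrative:

```javascript
// Catalog Client Script on the onChange event of the variable that
// triggers the reset. Assumes the UI Script library exposes a
// resetToDefaults(g_form, ignore) helper; field names are illustrative.
function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === oldValue) {
        return;
    }
    // Restore load-time values, leaving the triggering variable and the
    // requester untouched.
    resetToDefaults(g_form, 'employment_type, requested_for');
}
```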

In this way, we can develop a complex form catering for multiple scenarios, showing and hiding fields and resetting them at will.

Summary

ServiceNow’s g_form API provides a host of useful functions for managing catalog items, but its clear values function is a little overzealous. In light of that, we’ve developed a means of restoring the initial default values instead of blindly clearing values.

Going forward, you can add more and more common functions to this catalog script library and access them from your various catalog items.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].


Is DevPerfOps a thing?

New technology terms are constantly being coined. One of our lead consultants answers the question: Is DevPerfOps a thing?


Hopefully it’s no surprise that traditional performance testing has struggled to adapt to the new paradigm of DevOps, or even Agile. The problems can be boiled down to a few key factors:

  1. Performance testing takes time—time to script, and time to execute. A typical performance test engagement is 4-6 weeks, during which time the rest of the project team has made significant progress. 
  2. Introducing performance testing to a CI/CD pipeline is really challenging. Tests typically run for an hour or more and can hold up the pipeline, and they are extremely sensitive to application changes as the scripts operate at the protocol level (typically HTTP requests). 
  3. Performance testing often requires a full-sized, production-like infrastructure environment. These aren’t cheap and are normally unavailable during early development, making “early performance testing” more of an aspirational idea, rather than something that is always practical.

All the software vendors will tell you that DevOps with performance testing is easy, but none of them have solved the above problems. For the most part, they have simply added a CI/CD hook without even attempting to challenge the concept of performance testing in DevOps at all.

A new world

So what would redefining the concept of performance testing look like? Allow me to introduce DevPerfOps. We would define the four main activities of a DevPerfOps engineer as:

  1. Tactical Load Tests
  2. Microbenchmarking
  3. APM and Code Reviews
  4. Container Workloads

Let’s start at the top.

Tactical Load Testing

This can be broadly defined as running performance tests within the limitations of a CI/CD pipeline. Is it perfect? No. Is it better than doing nothing, or struggling with a full end-to-end test? Absolutely, yes!

Tactical Load Testing has the following objectives:

  • Leverage CI toolkits like Jenkins for execution and results collection.
  • Use lightweight toolsets for execution (JMeter, Taurus, Gatling, LoadRunner, etc. are all perfectly good choices).
  • Configure performance tests to run for short durations; 15-20 minutes is the ideal balance. This can be achieved by minimising the run logic, reducing think times, and testing only individual business processes.
  • Expect to run your performance tests on a scaled-down environment, or a single server node. Think about what this will do to your workload profile.
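One way to make a short tactical test CI-friendly is to boil its result down to a pass/fail gate on response-time percentiles, so the pipeline can fail the build automatically. A minimal sketch follows; the samples would come from your load tool’s results file (a JMeter JTL, Gatling simulation log, etc.), and the thresholds are illustrative:

```javascript
// Compute a response-time percentile from raw samples (milliseconds).
function percentile(samples, p) {
    var sorted = samples.slice().sort(function (a, b) { return a - b; });
    var idx = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, idx)];
}

// Pass/fail gate for a CI pipeline. `thresholds` maps percentile names to
// budgets, e.g. { p50: 500, p95: 2000 }; the values are illustrative and
// would be agreed with the business for each transaction.
function gate(samples, thresholds) {
    return Object.keys(thresholds).every(function (key) {
        var p = Number(key.slice(1)); // 'p95' -> 95
        return percentile(samples, p) <= thresholds[key];
    });
}
```

A Jenkins stage can then call gate() on the parsed results and fail the build on a false return, giving the short tactical test a clear, automatable verdict.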

Microbenchmarking

These are mini unit or integration tests designed to ensure that a component operates within a performance benchmark and that deviations are detected.

Microbenchmarking can target things like core application code, SQL queries, and web service integrations. There is an enormous amount of potential scope here; for example, you can write a microbenchmark test for a login, a report execution, or a data transformation: anything goes if it makes sense.

Most importantly, microbenchmarking is a great opportunity to test early in the project lifecycle and provide early feedback.
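A microbenchmark in this spirit can be as small as timing a function over repeated runs and asserting on the median. The budget below is illustrative; real values come from an agreed baseline for the component under test:

```javascript
// Minimal microbenchmark: time a function over repeated runs and report
// the median, which is more stable than the mean for spotting regressions.
function benchmark(fn, runs) {
    var timings = [];
    for (var i = 0; i < runs; i++) {
        var start = process.hrtime.bigint();
        fn();
        var end = process.hrtime.bigint();
        timings.push(Number(end - start) / 1e6); // nanoseconds -> milliseconds
    }
    timings.sort(function (a, b) { return a - b; });
    return timings[Math.floor(timings.length / 2)]; // median
}

// Example: benchmark a toy data transformation against a 50 ms budget
// (both the workload and the budget are illustrative).
var medianMs = benchmark(function () {
    var out = [];
    for (var i = 0; i < 10000; i++) { out.push(i * i); }
}, 25);

if (medianMs > 50) {
    throw new Error('Benchmark exceeded budget: ' + medianMs.toFixed(2) + ' ms');
}
```

Wired into a unit-test suite, a check like this gives the early, per-build performance feedback that full load tests can’t.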

APM and Code Reviews

In past years, having access to a good APM tool or profiler, or even the source code, was a luxury. Not anymore: these tools are everywhere, and while a fully featured tool like AppDynamics or New Relic is beneficial for developers and operations, a lot can be achieved with low-cost tools like YourKit Profiler or VisualVM.

APM tools and profilers allow slow execution paths to be identified, and memory and CPU utilisation to be measured. Resource-intensive workloads can be easily identified and performance baked into the application.

Container Workloads

Containers will one day rule the world. If you don’t know much about containers, you need to learn about them fast.

In terms of performance engineering, each container handles a small unit of workload and container orchestration engines like Kubernetes will auto-scale that container horizontally across the entire data centre. It’s not uncommon for applications to run on hundreds or thousands of containers.

What’s important here is that the smallest unit of workload is tiny, and scaling is something that a well-designed application will get for free. This allows your performance tests to scale accordingly, and it allows the whole notion of “what is application infrastructure” to be challenged. In the container world, an application is a service definition… and the rest is handled for you.

So, is DevPerfOps a thing? We think so, and we think it will only continue to grow and expand from here. It’s time performance testing met the needs of 2018 IT teams, and JDS can help. If you have any questions or want to discuss more, please get in touch with our performance engineering team by emailing [email protected]. If you’re looking for a quick, simple way of getting actionable insights into the performance health of your website or application, check out our current One Second Faster promotion.


Monitoring Atlassian Suite with AppDynamics

Millions of IT professionals use JIRA, Confluence, and Bitbucket daily as the backbone of their software lifecycle. These tools are critical to getting anything done in thousands of organisations. If you’re reading this, it’s safe to guess that’s the case for your organisation too!

Application monitoring is crucial when your business relies on good application performance. Just knowing that your application is running isn’t enough; you need assurance that it’s performing optimally. This is what a JDS client that runs in-house instances of JIRA, Confluence, and Bitbucket, recently found out.

This client, a major Australian bank, started to notice slowness with JIRA, but the standard infrastructure monitoring they were using was not providing enough insight to allow them to determine the root cause.

JDS was able to instrument their Atlassian products with AppDynamics APM agents to gain insights into the performance of the applications. After deployment of the Java Agents to the applications, AppDynamics automatically populated the topology map below, known as a Flow Map. This Flow Map shows the interactions for each application, accompanied by overall application and Business Transaction health, and metrics like load, response time, and errors.

After some investigation, we found the root cause of the JIRA slowness was some Memcached backends. Once we determined the root cause and resolved the issue, operational dashboards were created to help the Operations team monitor the Atlassian application suite. Below is a screenshot of a subsection of the dashboard showing Database Response Times, Cache Response Times, and Garbage Collection information.

An overview dashboard was also created to assist with monitoring across the suite. The Dashboard has been split out to show Slow, Very Slow, and Error percentages along with Average Response Times and Call Volumes for each application. Drilldowns were also added to take the user directly to the respective application Flow Map. Using these dashboards, they can, at a glance, check the overall application health for the Atlassian products. This has helped them improve the quality of service and user experience.

The bank’s JIRA users now suffer from far fewer slowdowns, particularly during morning peaks when many hurried story updates are taking place in time for stand-ups! The DevOps team is also able to get a heads-up from AppDynamics when slowness starts to occur, rather than when performance has fallen off a cliff.

So if you’re looking for more effective ways to monitor your Atlassian products, give our AppDynamics team a call. We can develop and implement a customised solution for your business to help ensure your applications run smoothly and at peak performance.


5 quick tips for customising your SAP data in Splunk

Understanding how your SAP system is performing can be a time-consuming process. With multiple environments, servers, APIs, interfaces and applications, there are numerous pieces to the puzzle, and stitching together the end-to-end story requires a lot of work.

That’s where SAP PowerConnect can assist. This SAP-certified tool simplifies the process by seamlessly collating your SAP metrics into a single command console: Splunk. PowerConnect compiles and stores data from each component across your SAP landscape and presents the information in familiar, easily accessible, and customisable Splunk dashboards. When coupled with Splunk’s ability to also gather machine data from non-SAP systems, this solution provides a powerful insight mechanism to understand your end user’s experience.

Given the magnitude of information PowerConnect can collate and analyse, you may think that setting it up would take days—if not weeks—of effort for your technical team. But one of PowerConnect’s key features is its incredibly fast time to value. Whatever the size of your environment, PowerConnect can be rapidly deployed and have searchable data available in less than ten minutes. Furthermore, it is highly customisable: it can collect data from custom SAP modules and display it in meaningful, context-sensitive dashboards, or integrate with Splunk IT Service Intelligence.

Here are some quick tips for customising your SAP data with PowerConnect.

1. Use the out-of-the-box dashboards

SAP software runs on top of the SAP NetWeaver platform, which forms the foundation for the majority of applications developed by SAP. The PowerConnect add-on is compatible with NetWeaver versions 7.0 through to 7.5 including S/4 HANA. It runs inside SAP and extracts machine data, security events, and logs from SAP—and ingests the information into Splunk in real time.

PowerConnect can access all data and objects exposed via the SAP NetWeaver layer, including:

  • API
  • Function Modules
  • IDoc
  • Report Writer Reports
  • Change Documents
  • CCMS
  • Tables

PowerConnect has access to all the data and objects exposed via the SAP NetWeaver layer

If your SAP system uses S/4 HANA, Fiori, ECC, BW components or all the above, you can gain insight into performance and configuration with PowerConnect.

To help organise and understand the collated data, PowerConnect comes with preconfigured SAP ABAP and Splunk dashboards out of the box based on best practices and customer experiences:

Sample PowerConnect for SAP: SAP ABAP Dashboard

Sample PowerConnect for SAP: Splunk Dashboard

PowerConnect centralises all your operational data in one place, giving you a single view in real time that will help you make decisions, determine strategy, understand the end-user experience, spot trends, and report on SLAs. You can view global trends in your SAP system or drill down to concentrate on metrics from a specific server or user.

2. Set your data retention specifications

PowerConnect also gives you the ability to configure how, and how long, you store and visualise data. Using Splunk, you can generate reports from across your entire SAP landscape or focus on specific segment(s). You may have long data retention requirements or be more interested in day-to-day performance—either way, PowerConnect can take care of the unique reporting needs for your business when it comes to SAP data.

You have complete control over your data, allowing you to manage data coverage, retention, and access:

  • All data sets that are collected by PowerConnect can be turned off, so you only need to ingest data that interests you.
  • Fine grain control over ingested data is possible by disabling individual fields inside any data sets.
  • You can customise the collection interval for each data set to help manage the flow of data across your network.
  • Data sets can be directed to different indexes, allowing you to manage different data retention and archiving rates for different use cases.

3. Make dashboards for the data that matters to you

You have the full power of Splunk at your disposal to customise the default dashboards, or you can use them as a base to create your own. This means you can use custom visualisations, or pick from those available on Splunkbase to interpret and display the data you’re interested in.

Even better, you’re not limited to SAP data in your searches and on your dashboards; Splunk data from outside your SAP system can be correlated and referenced with your SAP data. For example, you may want to view firewall or load balancer metrics against user volumes, or track sales data with BW usage.

4. Compare your SAP data with other organisational data

It is also possible to feed PowerConnect data into another Splunk app, such as IT Service Intelligence (ITSI), to create, configure, and measure service levels and key performance indicators. SAP system metrics can feed into a centralised view of the health and key performance indicators of your IT services. ITSI can then help proactively identify issues and prioritise resolution of those affecting business-critical services. This out-of-the-box monitoring will give you a comprehensive view of how your SAP system is working.

The PowerConnect framework is extensible and can be adapted to collect metrics from custom developed function modules. Sample custom extractor code templates are provided to allow your developers to quickly extend the framework to capture your custom-developed modules. These are the modules you develop to address specific business needs that SAP doesn’t address natively. As with all custom code, the level of testing will vary, and gaining access to key metrics within these modules can help both analyse usage as well as expose any issues within the code.

5. Learn more at our PowerConnect event

If you are interested in taking advantage of PowerConnect for Splunk to understand how your SAP system is performing, come along to our PowerConnect information night in Sydney or Melbourne in May. Register below to ensure your place.

PowerConnect Explainer Video

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].


How to maintain versatility throughout your SAP lifecycle

There are many use cases for deploying a tool to monitor your SAP system. Releasing your application between test environments, introducing additional users to your production system, or developing new functionality—all of these introduce an element of risk to your application and environment. Whether you are upgrading to SAP HANA, moving data centres, or expanding your use of ECC modules or mobile interfaces (Fiori), you can help mitigate the risk with the insights SAP PowerConnect for Splunk provides.

Upgrading SAP

Before you begin upgrades to your SAP landscape, you need to verify several prerequisites such as hardware and OS requirements, source release of the SAP system, and background process volumes. There are increased memory, session, and process requirements when performing the upgrade, which need to be managed. The SAP PowerConnect solution provides you with all key information about how your system is responding during the transition, with up-to-date process, database, and system usage information.

Triaging and correlating events or incidents is also easier than ever with PowerConnect, thanks to its ability to store historic information as a time series. You can look back to a specific point in time and see the health of the system or a specific server, its configuration settings, and so on. This is a particularly useful feature for regression testing.

Supporting application and infrastructure migration

Migration poses risks. It’s critical to mitigate those risks through diligent preparation, whether it’s ensuring your current code works on the new platform or that the underlying infrastructure will be fit for purpose.

For example, when planning a migration from an ABAP-based system to an on-premise SAP HANA landscape, there are several migration strategies you can take, depending on how quickly you want to move and what data you want to bring across. With a greenfield deployment, you start from a clean setup and bring across only what you need. The other end of the spectrum is a one-step upgrade with a database migration option (DMO), where you migrate in-place.

Each option will have its own advantages and drawbacks; however, both benefit from the enhanced visibility that PowerConnect provides throughout the deployment and migration process. As code is deployed and patched, PowerConnect will highlight infrastructure resource utilisation issues, greedy processes, and errors from the NetWeaver layer. PowerConnect can also analyse custom ABAP code and investigate events through ABAP code dumps by ID or user.

Increasing user volumes

Deployments can be rolled out in stages, be it through end users or application functionality. This is an effective way to ease users onto the system and lessen the load on both end-user training and support desk tickets due to confusion. As user volume increases, you may find that people don’t behave like you thought they would—meaning your performance test results may not match up with real-world usage. In this case, PowerConnect provides the correlation between the end-user behaviour and the underlying SAP infrastructure performance. This gives you the confidence that if the system starts to experience increased load, you will know about it before it becomes an issue in production. You can also use PowerConnect to learn the new trends in user activity, and feed that information back into the testing cycle to make sure you’re testing as close to real-world scenarios as possible.

It may not be all bad news: PowerConnect can also cast unexpected user behaviour in a positive light. For example, once new users are introduced to the system, you might find a feature is less popular than you thought it would be. You could then turn off the feature to reduce licence usage, or opt to promote it internally. PowerConnect gives you visibility not only into system resource usage, but also into what users are doing on the system to cause that load.

Feedback across the development lifecycle

PowerConnect provides a constant feedback solution with correlation and insights throughout the application delivery lifecycle. Typically, migrations, deployments, and upgrades follow a general lifecycle of planning and deployment, then business as usual, before making way for the next patch or version.

During planning and development, you want insights into user activity and the associated infrastructure performance to understand the growth of users over time.

  • With the data retention abilities of Splunk, PowerConnect can identify trends from the last hour right back to the last year and beyond. These usage trends can help define performance testing benchmarks by providing concurrent user volumes, peak periods, and what transactions the users are spending time on.
  • In the absence of response time SLAs, page load time goals can be defined based on current values from the SAP Web Page Response Times dashboard.
  • With the ability to compare parameters, PowerConnect can help you make sure your test and pre-prod environments have the same configuration as production. When the test team doesn’t have access to run RZ10 to view the parameters, a discrepancy can be easy to miss and cause unnecessary delays.

Once in production, PowerConnect also gives you client-side and user-centric insights.

  • You can view the different versions of SAP GUI that users have installed or see a world map showing the global distribution of users.
  • Splunk can even alert from a SecOps perspective, and notify you if someone logs in from a country outside your user base. You can view a list of audited logins and browse the status of user passwords.
  • The power of Splunk gives you the ability to alert or regularly report on trends in the collected data. You can be informed if multiple logins fail, or when the CPU vs Work processes is too high. Automated scripts can be triggered when searches return results so that, for example, a ServiceNow ticket can be raised along with an email alert.
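
As a sketch of the kind of alert described above, a Splunk search along the following lines could drive it. This is an illustration only: the index, sourcetype, and field names are assumptions and will depend on how your PowerConnect data is onboarded.

index=sap sourcetype="sap:user:login" login_status=FAILED
| stats count by user
| where count > 5

Saved as an alert running on a short schedule, a search like this can trigger an alert action, such as sending an email or raising a ServiceNow ticket, whenever any user records more than five failed logins in the search window.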

Even after a feature has completed its lifecycle and is ready to be retired, PowerConnect retains rich historical data in Splunk describing usage, issues, and configuration settings, even after the raw data has been aggregated or purged from SAP.

Backed by the power of Splunk, and with the wealth of information being collected, the insights provided by PowerConnect will help you effectively manage your SAP system throughout your SAP lifecycle.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key tool for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect" in May.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].

Our team on the case

Read more Tech Tips

How to revitalise your performance testing in SAP

When you go live with a new system, feature, or application, you want to be confident in how your system will respond. Performance testing is a critical part of the development cycle for any new or updated application. Performance testing will push your SAP system to the limits, with the aim to find any performance or system bottlenecks before you go live. Enhancing your test with SAP PowerConnect for Splunk will not only help you identify bottlenecks but also get an understanding of how your underlying SAP system behaves under load.

Database metrics can be correlated with performance test data to understand how the system responds to load

There are many benefits to using SAP PowerConnect when testing SAP—starting with rapid deployment.

Rapid deployment

Whether you are using a production or test environment, PowerConnect can be quickly installed, with searchable data available in less than ten minutes. This is good news for performance testing, where deadlines are often tight. It is compatible with NetWeaver versions 7.0 through to 7.5, so if you have older components or are comparing performance after a migration to the latest version, you’ll be covered.

Shortly after installation, you will experience the next benefit—ease of access to data. With an intuitive and highly visual interface, both testing and development teams will have easy and instant access to data from all areas of your SAP system. This will speed up defect investigations, as testers will no longer need to gain special system access or need to submit requests to the Basis team to gather monitoring information.

Highly visual, real-time data

Performance testing may find issues in a variety of places. PowerConnect provides more than 50 dashboards with a visual representation of the data gathered from your SAP system. You can focus on a specific server or look for trends in a system-wide metric. The generated graphs can then be used in the test reports to illustrate resource usage or contention.

Custom code is often more likely to cause issues than out-of-the-box code, which is why it is targeted during performance testing. It can be difficult for testers to know what custom code is doing without working closely with the developers—which isn’t always feasible.

PowerConnect allows you to add monitoring of custom function modules (anything starting with Z_*) by extending its framework. Example extractor code templates are provided to let you collect your own specific metrics very easily. Monitoring your custom code gives your testers increased visibility and frees your developers to work on any bug fixes or changes.

Splunk is a data platform, which means you can mix and match data from other sources to find potential correlations. This can be particularly helpful if you onboard your performance test data. Having both sets of data lets you visualise SAP metrics correlated with test progression. For example, you could compare work process utilisation against user volume, and potentially find system limits in terms of number of concurrent users.
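
For example, the work-process-versus-user-volume comparison might be sketched with a search like the following, where the index names and field names are hypothetical and depend on how both data sets were onboarded:

index=sap sourcetype="sap:work:process"
| timechart span=5m avg(wp_utilisation) as work_process_utilisation
| appendcols [ search index=loadtest | timechart span=5m max(running_vusers) as user_volume ]

Charting the two series together makes it easy to spot the user volume at which work process utilisation plateaus, a likely concurrency limit.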

Splunk’s data retention is also a bonus. Inside SAP, the data is aggregated, so detail is not available after a few days or weeks. Knowing when your test started, you can view that time window in Splunk, and still have the same level of detail as the day of the test. This allows you to easily compare today’s performance with the performance from six months ago, greatly helping regression testing analysis.

Volume test optimisation

SAP performance testing will frequently occur alongside volume test optimisation (VTO). This is the process where an SAP expert analyses the system while under load and makes expert tuning recommendations. A common output of VTO is system-wide or server-specific parameter changes. It’s sometimes hard for testers to know exactly what parameter changes have been applied, as they typically don’t have access to this configuration. PowerConnect has a dashboard in a similar vein to a “gold master” report, which gives testers the ability to compare the current set of parameters across each server, so they can clearly see what has been applied. This dashboard can highlight discrepancies between parameter settings on different servers, saving time diagnosing issues caused by misconfiguration.

Parameter comparison between different SAP servers lets testers verify the right settings have been applied without needing system access.

All these benefits give you impressive insight into your SAP system in the test environment—but once testing has completed, it makes sense to have the same level of monitoring in Production. This will give you confidence that you can identify issues as they develop, or perhaps allow you to focus on known triggers so you can act before they become issues.

PowerConnect is the only SAP-certified SAP-to-Splunk connector, so you can be sure it has met the stringent SAP development guidelines. You’ll see the same benefits in production: rapid deployment, ease of use, fast access to data, a highly visual interface, and the ability to correlate with other Splunk data.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key tool for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect." JDS and Splunk are co-hosting three events across Australia between March and May 2018, starting with our event in Brisbane on 28 March. Register today to ensure your place!

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].


Visualise and consolidate your data with SAP PowerConnect for Splunk

SAP systems can be complex. If you have multiple servers and operating systems across different data centres and the cloud, it’s often difficult to know what’s happening under the hood. SAP PowerConnect for Splunk makes it easier than ever to get valuable insight into your SAP system, using its intuitive visual interface and consolidated system data.

Leverage the power of Splunk’s intuitive interface

Traditionally, visualising data using SAP has been a challenge. Inside SAP the data is aggregated, so detail is not available after a few days or weeks—assuming historical records are even being kept. The data that is available is often displayed in tabular format, which needs to be manipulated to provide any meaningful analysis. Visualisations within SAP are limited, and inconsistent across platforms.

A key benefit of SAP PowerConnect is its highly visual interface, which allows your IT operators and executives to see what’s going on in your SAP instance at a glance. Once the data is ingested, you have the full visualisation power of Splunk at your disposal. You can also combine other data from Splunk to create correlations and see visual trends between different systems.

The PowerConnect solution provides more than 50 pre-built dashboards showing you the status of everything from individual metrics to global system trends. You also have the option to collect and display any custom metrics via a function module that you develop.

Whether you want to visualise HANA DB performance or global work processes, you can visually track your data to clearly see trends, spikes, and issues over time.

To illustrate the improved visualisation PowerConnect has over native SAP, compare the following:

This screen shows standard performance metrics from within SAP:

SAP Data PowerConnect

SAP PowerConnect for Splunk converts this tabular data and presents it on a highly visual dashboard:

PowerConnect for Splunk

Bring all your data into one place

By using PowerConnect to consolidate your SAP system data within Splunk, you will discover a wealth of information. Detailed SAP data relating to specific servers, processes, and users will be presented in a range of targeted dashboards. You can see live performance metrics, or look back at trends with a high level of granularity from the past months or years on demand.

With your SAP data available in Splunk, you can:

  • Configure custom alerting based on known triggers, or a combination of metrics from multiple servers or processes
  • Be alerted when you hit a certain user load, or when a custom module encounters an error
  • Schedule custom reports to provide a summary of performance from high-level trends right down to low level CPU metrics
  • Monitor and regularly report on key performance indicators, visualising historical trends over time, including important events during the year
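
As a sketch of the first two points, a threshold alert on concurrent user load might look like this (the index, sourcetype, and field names here are assumptions for illustration):

index=sap sourcetype="sap:performance:users" earliest=-15m
| stats dc(user_id) as concurrent_users
| where concurrent_users > 5000

Scheduled every few minutes, a search like this would fire whenever the distinct-user count over the last 15 minutes exceeds your chosen threshold.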

Often different teams within an enterprise will each have a tool of choice to monitor their part of the landscape. This may let the team work efficiently, but can hamper knowledge sharing across the organisation. This is especially true with complex tools, where unintended misuse can cause issues so access is often held back. PowerConnect opens up access to your SAP data by making it available to everyone from your basis team to your executives, and removes the silos.

You can further enrich your SAP PowerConnect data by combining it with data from non-SAP or third-party applications within your organisation. For example, if your SAP system makes calls to a web service that you manage, you can use Splunk to consume the available data to help triage issues, or to complete the picture of what your users are doing on the system. This provides an even deeper level of correlation between applications and business services.

Leveraging the power of Splunk and SAP PowerConnect, you will have a consolidated view of your enterprise operational data, with open and accessible information displayed in a highly visual interface.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key enabler for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect." JDS and Splunk are co-hosting three events across Australia throughout the month of March. Each event will open at 5pm, with the presentation to take place from 5.30-6.30pm. Light canapés and beverages will be served.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].


Our Splunk stories

How to effectively manage your CMDB in ServiceNow

Configuration management is a popular topic with our customers, and JDS has been involved in a number of such projects over the years. Specifically, we deal regularly with configuration management databases, or CMDB. The CMDB is a foundational pillar in ITIL-based practices; properly implemented and maintained, it is the glue that holds together all IT operations.

A good CMDB keeps a system of logical records for all IT devices and components, known as Configuration Items (CIs). When all of your organisation’s assets and devices are logically represented, you and your executives gain greater visibility and insight into your IT environment. For example, all of an organisation’s Windows servers can be tracked in the CMDB, along with their version details. This greatly aids incident resolution and analysis.


ServiceNow CMDB

The ServiceNow platform offers a fully integrated CMDB, and JDS consultants are experts at implementing and populating it. The process of discovery can be onerous and time-consuming, as you search your entire organisation for each CI and enter it into your CMDB.

ServiceNow incident record referencing a CI

Our team of ServiceNow engineers not only help with the manual processes at the outset, but we also introduce automation to ensure that all new CIs are discovered and entered as soon as they are brought into the environment.

ServiceNow CMDB Health Dashboards

The ServiceNow CMDB, like any other CMDB, faces one particular challenge: as it grows in size and complexity, it becomes increasingly difficult for administrators to keep its data accurate and relevant.

The complexity and dynamic nature of today’s IT environments means organisations move and upgrade continuously (e.g. server names and IP addresses are commonly repurposed when new projects enter the picture).

A common myth is that the CMDB is hands-free. Administrators often turn on the CMDB and populate it initially without giving enough thought to keeping the data up to date. In reality, without constant vigilance its value to the organisation will quickly diminish. CMDBs require careful maintenance to stay relevant, and JDS can assist with this process.

One of the most important things an IT administrator can do is plan, with a clear scope and purpose, before enabling a CMDB. The CMDB supports other critical platform features (e.g. incident, problem, and change management). JDS can further narrow down the types of CIs that should be retained (e.g. excluding non-managed assets such as desktops, laptops, and non-production servers). Excluding non-managed assets avoids unnecessary licensing costs while keeping your CMDB focused and concise.

CMDB receiving data from many integration points

It’s common for an organisation to use tools from different vendors, and their capabilities often overlap. In the image above, the customer is receiving CI information from integrations with different vendor products, with each tool reporting conflicting information. JDS specialises in integrating multiple data sources and ensuring only the most relevant data is ingested into your CMDB.

A CMDB is fundamentally an operational tool. It needs to be concise and ultimately informational to be useful.

While it is tempting to keep all data in your CMDB, often having too much data introduces confusion and fatigue for users. The strength of ServiceNow is its ability to link to related records, allowing you to have a rich CMDB without it being cluttered.

The benefits of Service Mapping

Service Mapping is a key feature of ServiceNow ITOM, and it’s a different approach to the traditional CMDB population methods. Rather than populating all known assets, Service Mapping adopts a service-oriented approach where only relevant devices and components for a technology or business service are tracked and viewed.

This “lean” approach focuses on the CIs that matter while avoiding many pitfalls of the traditional CMDB.

Service Mapping can be further utilised to drive the event impact dashboard (available as part of ServiceNow ITOM offerings).

Service Mapping in ServiceNow

Having fewer CIs also means that administrators avoid spending time keeping track of things that are of minimal relevance or importance.

In summary, the ServiceNow CMDB offers some great benefits for practitioners, but it’s important to have a clear understanding of the scope your organisation needs to avoid the pitfalls of a bloated CMDB. JDS recommends using ServiceNow ITOM to manage and make your CMDB more efficient. We have the breadth of skills and expertise to configure your CMDB for the needs of your organisation, as well as to work with you on the Service Mapping of your business. Contact one of our representatives today to find out more or set up an in-person discussion.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].


Our ServiceNow stories

Breaking down silos to create an enterprise capability

Traditionally, IT manages services in silos (networks, servers, databases, security, etc), which inevitably leads to an incomplete picture when things go wrong. Each team has a piece of the puzzle, but no one can see the picture on the box. Lack of visibility is a significant risk for companies, particularly those businesses with complex solution stacks and thousands of users. Breaking down silos in your business is an essential part of good IT management.

Correctly implemented, IT Operations Management allows for the rapid triage of outages and service degradation. According to research from Monash University, regardless of industry, workers with six to ten years’ experience in a given field spend most of their time trying to understand the cause of a problem. Few if any IT organisations calculate their Mean Time To Resolution (MTTR), but if they did, the research from Monash and RMIT suggests 45% of that time would be spent simply trying to isolate the cause of the problem.

A variety of solutions exist to help with this issue. Dashboard technology continues to advance each year, providing executives with a single-pane-of-glass view of their business-enabling IT services in real time. Artificial intelligence in IT operations is on the rise, and can be leveraged with both your internal IT users and your customers to anticipate behaviour and provide solutions in the event of an error.

The best IT management solutions focus on leveraging your existing software investments and integrating with your business processes. ServiceNow ITOM does exactly that, seamlessly integrating event management, service mapping, and orchestration with your current systems and environment. ITOM complements your current portfolio of solutions and seeks to orchestrate disparate processes and systems into a single workflow.

The ServiceNow ITOM dashboard gives executives a full view of each service in their environment, with colours associated with incident priority (red = critical; yellow = minor; blue = warning). By integrating event management into your processes, you will have an at-a-glance view of what is working and needs attention across your whole IT environment.

ITOM dashboard

System outages are inevitable, no matter how sophisticated your system or how skilled your team. Implementing a solution such as ITOM helps you protect your brand by ensuring your team can proactively identify the root cause of an issue and resolve it before it negatively impacts your users.

When you implement service mapping and orchestration as part of your ITOM solution, you can identify which services in your environment are impacted when one server fails, and put automation in place to resolve regular issues.

service mapping

If you are interested in seeing how ITOM service mapping and orchestration works, register your interest using the form below and one of our local account executives will be in touch with you to schedule an on-site demo.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].


What is AIOps?

Gartner has coined another buzzword to describe the next evolution of ITOM solutions. AIOps uses the power of machine learning and big data to provide pattern discovery and predictive analysis for IT operations.

What is the need for AIOps?

Organisations undergoing digital transformation are facing a lot of challenges that they wouldn’t have faced even ten years ago. Digital transformation represents a change in organisation structure, processes, and abilities, all driven by technology. As technology changes, organisations need to change with it.

This change comes with challenges. The old IT operations solutions now need to manage microservices, public and private APIs, and Internet of Things devices.

As consumers, IT managers are used to personalised movie recommendations from Netflix, or pre-emptive traffic warnings from Google. However, their IT management systems typically lack this kind of intelligence, reverting instead to traffic-light dashboards.

There is an opportunity in the market to combine big data and machine learning with IT operations.

Gartner has coined this concept AIOps: Artificial Intelligence for IT Ops.

“AIOps platforms utilize big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.” – Colin Fletcher, Gartner


Current State – Gartner Report

Gartner coined the term AIOps in 2016, although it originally stood for Algorithmic IT Operations. They don’t yet produce a Magic Quadrant for AIOps, but one is likely coming.

Gartner has produced a report which summarises both what AIOps is hoping to solve, and which vendors are providing solutions.

Eleven core capabilities are identified, with only four vendors able to deliver all eleven: HPE, IBM, ITRS, and Moogsoft.

How does Splunk do AIOps?

Splunk is well positioned to be a leader in the AIOps field. Their AIOps solution is outlined on their website. Splunk AIOps relies heavily on the Machine Learning Toolkit, which provides Splunk with about 30 different machine learning algorithms.

Splunk provides an enterprise machine learning and big data platform which will help AIOps managers:

  • Get answers and insights for everyone: Through Splunk’s search language, users can analyse past and present patterns of IT system and service performance, and predict future ones.
  • Find and solve problems faster: Detect patterns to identify indicators of incidents and reduce irrelevant alerts.
  • Automate incident response and resolution: Splunk can automate manual tasks, which are triggered based on predictive analytics.
  • Predict future outcomes: Forecast IT costs and learn from historical analysis. Better predict points of failure to proactively improve the operational environment.
  • Continually learn to make more informed decisions: Detect outliers, adapt thresholds, alert on anomalous patterns, and improve learning over time.

Current offerings like ITSI and Enterprise Security also implement machine learning, taking advantage of anomaly detection and predictive algorithms.
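
As a simple illustration of the anomaly detection mentioned above, the Machine Learning Toolkit’s fit command can baseline a metric and flag outliers. The index, sourcetype, and field names here are assumptions for illustration:

index=sap sourcetype="sap:cpu"
| timechart span=15m avg(cpu_used) as cpu
| fit DensityFunction cpu into sap_cpu_baseline

A later scheduled search can then run | apply sap_cpu_baseline over new data; rows flagged as outliers by the model become candidates for alerting.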

As the complexity of IT systems increases, so too will the need to analyse and interpret the large amounts of data generated. Humans have been doing a good job to date, but there will come a point where the complexity is too great. Organisations that can complement their IT operations with machine learning will have a strategic advantage over those that rely on people alone.

Conclusion

To find out more about how JDS can help you implement AIOps, contact our team today on 1300 780 432, or email [email protected].


Using Splunk to look for Spectre and Meltdown security breaches

Meltdown and Spectre are two security vulnerabilities that are currently impacting millions of businesses all over the world. Since the news broke about the flaw in Intel processor chips that opened the door to once-secure information, companies have been bulking up their system security and implementing patches to prevent a breach.

Want to make sure your system is protected from the recent outbreak of Spectre and Meltdown? One of our Splunk Architects, Andy Erskine, explains one of the ways JDS can leverage Splunk Enterprise Security to check that your environment has been successfully patched.

What are Spectre and Meltdown and what do I need to do?

According to the researchers who discovered the vulnerabilities, Spectre “breaks the isolation between different applications”, which allows attackers to expose data that was previously considered secure. Meltdown “breaks the most fundamental isolation between user applications and the operating system”.

Neither type of attack requires a software vulnerability to be carried out. Labelled “side-channel attacks”, they are not tied to a particular operating system, as they use side channels to extract the breached information from memory.

The best way to lower the risk of your business’s sensitive information being hacked is to apply the newly created software patches as soon as possible.

Identifying affected systems

Operating system vendors are forgoing regular patch release cycles and publishing operating system patches to address this issue as soon as they are ready.

Tools such as Nessus/Tenable, Qualys, and Tripwire IP360 regularly scan environments for vulnerabilities such as this, and can identify affected systems by checking for the newly released patches.

Each plugin created for the Spectre and Meltdown vulnerabilities will be marked with at least one of the following CVEs:

Spectre:

  • CVE-2017-5753: bounds check bypass
  • CVE-2017-5715: branch target injection

Meltdown:

  • CVE-2017-5754: rogue data cache load

To analyse whether your environment has been successfully patched, you would want to ingest data from these traditional vulnerability management tools and present the data in Splunk Enterprise Security.

Most of these tools have a Splunk app that brings the data in and maps it to the Common Information Model. From there, you can use searches to identify the specific CVEs associated with Spectre and Meltdown.

Once the data is coming into Splunk, you can create a search that discovers any vulnerable instances in your environment, proactively notifies the right teams, and makes those instances a priority for patching.

Here is an example search that customers using Splunk Enterprise Security can use to identify vulnerable endpoints:

tag=vulnerability (cve="CVE-2017-5753" OR cve="CVE-2017-5715" OR cve="CVE-2017-5754")
| table src cve pluginName first_found last_found last_fixed
| dedup src
| fillnull value=NOT_FIXED last_fixed
| search last_fixed=NOT_FIXED
| stats count as total

Example Dashboard Mock-Up

JDS consultants are experts in IT security and proud partners with Splunk. If you are looking for advice from the experts to implement or enhance Splunk Enterprise Security or any other Splunk solution, get in touch with us today.

Conclusion

To find out more about how JDS can help you with your security needs, contact our team today on 1300 780 432, or email [email protected].
