Category: Tech Tips

5 quick tips for customising your SAP data in Splunk

Understanding how your SAP system is performing can be a time-consuming process. With multiple environments, servers, APIs, interfaces and applications, there are numerous pieces to the puzzle, and stitching together the end-to-end story requires a lot of work.

That’s where SAP PowerConnect can assist. This SAP-certified tool simplifies the process by seamlessly collating your SAP metrics into a single command console: Splunk. PowerConnect compiles and stores data from each component across your SAP landscape and presents the information in familiar, easily accessible, and customisable Splunk dashboards. When coupled with Splunk’s ability to also gather machine data from non-SAP systems, this solution provides a powerful insight mechanism to understand your end user’s experience.

Given the magnitude of information PowerConnect can collate and analyse, you may think that setting it up would take days—if not weeks—of effort for your technical team. But one of PowerConnect’s key features is its incredibly fast time to value. Whatever the size of your environment, PowerConnect can be rapidly deployed, with searchable data available in less than ten minutes. Furthermore, it is highly customisable: it can collect data from custom SAP modules and display the results in meaningful, context-sensitive dashboards, or integrate with Splunk IT Service Intelligence.

Here are some quick tips for customising your SAP data with PowerConnect.

1. Use the out-of-the-box dashboards

SAP software runs on top of the SAP NetWeaver platform, which forms the foundation for the majority of applications developed by SAP. The PowerConnect add-on is compatible with NetWeaver versions 7.0 through to 7.5 including S/4 HANA. It runs inside SAP and extracts machine data, security events, and logs from SAP—and ingests the information into Splunk in real time.

PowerConnect can access all data and objects exposed via the SAP NetWeaver layer, including:

  • API
  • Function Modules
  • IDoc
  • Report Writer Reports
  • Change Documents
  • CCMS
  • Tables

If your SAP system uses S/4 HANA, Fiori, ECC, BW components, or all of the above, you can gain insight into performance and configuration with PowerConnect.

To help organise and understand the collated data, PowerConnect comes with preconfigured SAP ABAP and Splunk dashboards out of the box based on best practices and customer experiences:

Sample PowerConnect for SAP: SAP ABAP Dashboard

Sample PowerConnect for SAP: Splunk Dashboard

PowerConnect centralises all your operational data in one place, giving you a single view in real time that will help you make decisions, determine strategy, understand the end-user experience, spot trends, and report on SLAs. You can view global trends in your SAP system or drill down to concentrate on metrics from a specific server or user.

2. Set your data retention specifications

PowerConnect also gives you the ability to configure how, and for how long, you store and visualise data. Using Splunk, you can generate reports from across your entire SAP landscape or focus on specific segments. You may have long data retention requirements or be more interested in day-to-day performance; either way, PowerConnect can take care of your business’s unique SAP reporting needs.

You have complete control over your data, allowing you to manage data coverage, retention, and access:

  • All data sets that are collected by PowerConnect can be turned off, so you only need to ingest data that interests you.
  • Fine grain control over ingested data is possible by disabling individual fields inside any data sets.
  • You can customise the collection interval for each data set to help manage the flow of data across your network.
  • Data sets can be directed to different indexes, allowing you to manage different data retention and archiving rates for different use cases.
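As a sketch of what different retention rates can look like on the Splunk side (the index names and retention periods below are illustrative assumptions, not PowerConnect defaults), indexes.conf lets you set a separate retention period per index:

```ini
# indexes.conf -- illustrative index names and retention periods
[sap_perf]
# day-to-day performance metrics, kept for 30 days
frozenTimePeriodInSecs = 2592000

[sap_audit]
# audit and security data, kept for 1 year
frozenTimePeriodInSecs = 31536000
```

Directing a PowerConnect data set at the longer-retention index then gives it that archiving behaviour automatically.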

3. Make dashboards for the data that matters to you

You have the full power of Splunk at your disposal to customise the default dashboards, or you can use them as a base to create your own. This means you can use custom visualisations, or pick from those available on Splunkbase to interpret and display the data you’re interested in.

Even better, you’re not limited to SAP data in your searches and on your dashboards; Splunk data from outside your SAP system can be correlated and referenced with your SAP data. For example, you may want to view firewall or load balancer metrics against user volumes, or track sales data with BW usage.
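As a sketch of such a correlation (the index, sourcetype, and field names below are illustrative assumptions, not PowerConnect defaults), an SPL search could overlay load balancer response times against SAP user volumes:

```
index=sap sourcetype="sap:user_activity"
| timechart span=5m dc(user) AS active_users
| appendcols
    [ search index=network sourcetype="loadbalancer"
      | timechart span=5m avg(response_time_ms) AS lb_response_ms ]
```

The resulting table can be charted on a dashboard to show whether spikes in response time line up with spikes in concurrent users.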

4. Compare your SAP data with other organisational data

It is also possible to ingest PowerConnect data into another Splunk app, such as IT Service Intelligence (ITSI), to create, configure, and measure service levels and key performance indicators. SAP system metrics can feed into a centralised view of the health and KPIs of your IT services. ITSI can then help proactively identify issues and prioritise the resolution of those affecting business-critical services, giving you a comprehensive view of how your SAP system is working.

The PowerConnect framework is extensible and can be adapted to collect metrics from custom-developed function modules. Sample custom extractor code templates are provided so your developers can quickly extend the framework to capture your custom-developed modules: the modules you develop to address specific business needs that SAP doesn’t cover natively. As with all custom code, the level of testing will vary, and access to key metrics within these modules helps you both analyse usage and expose any issues in the code.

5. Learn more at our PowerConnect event

If you are interested in taking advantage of PowerConnect for Splunk to understand how your SAP system is performing, come along to our PowerConnect information night in Sydney or Melbourne in May. Register below to ensure your place.

PowerConnect Explainer Video

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].

Our team on the case

Our Splunk stories

How to maintain versatility throughout your SAP lifecycle

There are many use cases for deploying a tool to monitor your SAP system. Releasing your application between test environments, introducing additional users to your production system, or developing new functionality—all of these introduce an element of risk to your application and environment. Whether you are upgrading to SAP HANA, moving data centres, or expanding your use of ECC modules or mobile interfaces (Fiori), you can help mitigate the risk with the insights SAP PowerConnect for Splunk provides.

Upgrading SAP

Before you begin upgrades to your SAP landscape, you need to verify several prerequisites such as hardware and OS requirements, source release of the SAP system, and background process volumes. There are increased memory, session, and process requirements when performing the upgrade, which need to be managed. The SAP PowerConnect solution provides you with all key information about how your system is responding during the transition, with up-to-date process, database, and system usage information.

Triaging and correlating events or incidents is also easier than ever with PowerConnect, thanks to its ability to store historical information as a time series. You can look back to a specific point in time and see the health of the system or a specific server, its configuration settings, and so on. This is a particularly useful feature for regression testing.

Supporting application and infrastructure migration

Migration poses risks. It’s critical to mitigate those risks through diligent preparation, whether it’s ensuring your current code works on the new platform or that the underlying infrastructure will be fit for purpose.

For example, when planning a migration from an ABAP-based system to an on-premise SAP HANA landscape, there are several migration strategies you can take, depending on how quickly you want to move and what data you want to bring across. With a greenfield deployment, you start from a clean setup and bring across only what you need. The other end of the spectrum is a one-step upgrade with a database migration option (DMO), where you migrate in-place.

Each option will have its own advantages and drawbacks; however, both benefit from the enhanced visibility that PowerConnect provides throughout the deployment and migration process. As code is deployed and patched, PowerConnect will highlight infrastructure resource utilisation issues, greedy processes, and errors from the NetWeaver layer. PowerConnect can also analyse custom ABAP code and investigate events through ABAP code dumps by ID or user.

Increasing user volumes

Deployments can be rolled out in stages, be it through end users or application functionality. This is an effective way to ease users onto the system and lessen the load on both end-user training and support desk tickets due to confusion. As user volume increases, you may find that people don’t behave like you thought they would—meaning your performance test results may not match up with real-world usage. In this case, PowerConnect provides the correlation between the end-user behaviour and the underlying SAP infrastructure performance. This gives you the confidence that if the system starts to experience increased load, you will know about it before it becomes an issue in production. You can also use PowerConnect to learn the new trends in user activity, and feed that information back into the testing cycle to make sure you’re testing as close to real-world scenarios as possible.

Unexpected user behaviour isn’t always bad news, and PowerConnect can highlight it in a positive light. For example, when new users are introduced to the system, you might find a feature isn’t as popular as you expected. You could then turn off the feature to reduce licence usage, or opt to promote it internally. PowerConnect gives you visibility not only into system resource usage, but also into what users are doing on the system to cause that load.

Feedback across the development lifecycle

PowerConnect provides a constant feedback solution with correlation and insights throughout the application delivery lifecycle. Typically migrations, deployments, and upgrades follow a general lifecycle of planning, deploying, then business as usual, before making way for the next patch or version.

During planning and development, you want insights into user activity and the associated infrastructure performance to understand the growth of users over time.

  • With the data retention abilities of Splunk, PowerConnect can identify trends from the last hour right back to the last year and beyond. These usage trends can help define performance testing benchmarks by providing concurrent user volumes, peak periods, and what transactions the users are spending time on.
  • In the absence of response time SLAs, page load time goals can be defined based on current values from the SAP Web Page Response Times dashboard.
  • With the ability to compare parameters, PowerConnect can help you make sure your test and pre-prod environments have the same configuration as production. When the test team doesn’t have access to run RZ10 to view the parameters, a discrepancy can be easy to miss and cause unnecessary delays.

Once in production, PowerConnect also gives you client-centric and client-side insights.

  • You can view the different versions of SAP GUI that users have installed or see a world map showing the global distribution of users.
  • Splunk can even alert from a SecOps perspective, and notify you if someone logs in from a country outside your user base. You can view a list of audited logins and browse the status of user passwords.
  • The power of Splunk gives you the ability to alert or regularly report on trends in the collected data. You can be informed if multiple logins fail, or when CPU usage relative to work processes is too high. Automated scripts can be triggered when searches return results so that, for example, a ServiceNow ticket can be raised along with an email alert.
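As an illustration of the failed-login case, a hedged SPL sketch (the index, sourcetype, and field names are placeholders, not PowerConnect defaults) that could back such an alert:

```
index=sap sourcetype="sap:security_audit" action=login outcome=failure
| stats count BY user
| where count > 5
```

Saved as a Splunk alert, this search could trigger the email notification and ServiceNow ticket actions described above.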

Even after a feature has completed its lifecycle and is ready to be retired, PowerConnect retains rich historical data in Splunk describing its usage, issues, and configuration settings, even after the raw data has been aggregated or purged from SAP.

Backed by the power of Splunk, and with the wealth of information being collected, the insights provided by PowerConnect will help you effectively manage your SAP system throughout your SAP lifecycle.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key tool for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect" in May.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.



Read more Tech Tips

How to revitalise your performance testing in SAP

When you go live with a new system, feature, or application, you want to be confident in how your system will respond. Performance testing is a critical part of the development cycle for any new or updated application. Performance testing will push your SAP system to the limits, with the aim to find any performance or system bottlenecks before you go live. Enhancing your test with SAP PowerConnect for Splunk will not only help you identify bottlenecks but also get an understanding of how your underlying SAP system behaves under load.

Database metrics can be correlated with performance test data to understand how the system responds to load

There are many benefits to using SAP PowerConnect when testing SAP—starting with rapid deployment.

Rapid deployment

Whether you are using a production or test environment, PowerConnect can be quickly installed, with searchable data available in less than ten minutes. This is good news for performance testing, where deadlines are often tight. It is compatible with NetWeaver versions 7.0 through to 7.5, so if you have older components or are comparing performance after a migration to the latest version, you’ll be covered.

Shortly after installation, you will experience the next benefit—ease of access to data. With an intuitive and highly visual interface, both testing and development teams will have easy and instant access to data from all areas of your SAP system. This will speed up defect investigations, as testers will no longer need to gain special system access or need to submit requests to the Basis team to gather monitoring information.

Highly visual, real-time data

Performance testing may find issues in a variety of places. PowerConnect provides more than 50 dashboards with a visual representation of the data gathered from your SAP system. You can focus on a specific server or look for trends in a system-wide metric. The generated graphs can then be used in the test reports to illustrate resource usage or contention.

Custom code is often more likely to cause issues than out-of-the-box code, which is why it is targeted during performance testing. It can be difficult for testers to know what custom code is doing without working closely with the developers—which isn’t always feasible.

PowerConnect allows you to add monitoring of custom function modules (anything starting with Z_*) by extending its framework. Example extractor code templates are provided to let you collect your own specific metrics very easily. Monitoring your custom code gives your testers increased visibility and frees your developers to work on any bug fixes or changes.

Splunk is a data platform, which means you can mix and match data from other sources to find potential correlations. This can be particularly helpful if you onboard your performance test data. Having both sets of data lets you visualise SAP metrics correlated with test progression. For example, you could compare work process utilisation against user volume, and potentially find system limits in terms of number of concurrent users.

Splunk’s data retention is also a bonus. Inside SAP, the data is aggregated, so detail is not available after a few days or weeks. Knowing when your test started, you can view that time window in Splunk, and still have the same level of detail as the day of the test. This allows you to easily compare today’s performance with the performance from six months ago, greatly helping regression testing analysis.

Volume test optimisation

SAP performance testing will frequently occur alongside volume test optimisation (VTO), the process where an SAP expert analyses the system while it is under load and makes tuning recommendations. A common output of VTO is system-wide or server-specific parameter changes. It’s sometimes hard for testers to know exactly what parameter changes have been applied, as they typically don’t have access to this configuration. PowerConnect has a dashboard in a similar vein to a “gold master” report, which lets testers compare the current set of parameters across each server, so they can clearly see what has been applied. This dashboard can highlight discrepancies between parameter settings on different servers, saving time diagnosing issues caused by misconfiguration.

Parameter comparison between different SAP servers lets testers verify the right settings have been applied without needing system access.

All these benefits give you impressive insight into your SAP system in the test environment—but once testing has completed, it makes sense to have the same level of monitoring in Production. This will give you confidence that you can identify issues as they develop, or perhaps allow you to focus on known triggers so you can act before they become issues.

PowerConnect is the only SAP-certified SAP-to-Splunk connector, so you can be sure it has met SAP’s stringent development guidelines. You’ll get all the above benefits in production too: rapid deployment, ease of use and fast access to data, a highly visual interface, and the ability to correlate with other Splunk data.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key tool for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect." JDS and Splunk are co-hosting three events across Australia between March and May 2018, starting with our event in Brisbane on 28 March. Register today to ensure your place!



Reserve and import data through Micro Focus ALM

Micro Focus Application Lifecycle Management’s (ALM) workflow is one of its most powerful features, allowing you to extend ALM’s capabilities and integrate with third-party solutions.

JDS recently helped a client leverage this capability by integrating ALM with a Test Data Management (TDM) solution to improve their testing lifecycle. The client wanted the ability to reserve data through ALM and import it into Micro Focus Unified Functional Testing (UFT) scripts to run automated tests; previously, a manual import of test data was required for each test cycle. The new automated workflow prevented testers from using each other’s test data at the same time, which could otherwise invalidate test results. The end result was a 20% time saving in test data orchestration.

Below is a high-level overview of the workflow that was created within ALM.

A tester would first press a Reserve button in ALM to reserve data for a test, preventing other testers from using the same data during that period. When the automated test was subsequently executed via UFT, the reserved test data was retrieved.

This tech tip focuses on implementing the Reservation function.

The first step in this process is creating a Reserve button. This is straightforward.

  1. Click on the cog, and then “Customize”. 
  2. Click on “Workflow” and then “Script Editor”.
  3. Click on “Toolbar Button Editor”, where we can create a new button.
  4. For the command bar, choose “TestLab”.
  5. Click “Add” to add a new button. Then, provide the button with the following details:
  6. Click Apply.
  7. Close all the windows until you are back to the main ALM page. Then, browse to the Test Lab. You will see the new button created.

Next, you need to provide the Reservation code to the button.

  1. Click on the cog, and then “Customize”.
  2. Click on “Workflow”, and then “Script Editor”.
  3. Under Project Scripts > Test Lab Module Script, we put in the ReserveData code. Below is an example:
Sub ReserveData()
    ' URL of the TDM reservation API (x.x.x.x is a placeholder for your TDM server)
    sUrl = "http://x.x.x.x/api.php"

    ' Build the JSON payload; chr(34) produces a double-quote character
    strPostData = "{" &_
        chr(34) & "user_id"          & chr(34) & ":" & chr(34) & User.UserName            & chr(34) & "," &_
        chr(34) & "domain_id"        & chr(34) & ":" & chr(34) & TDConnection.DomainName  & chr(34) & "," &_
        chr(34) & "project_id"       & chr(34) & ":" & chr(34) & TDConnection.ProjectName & chr(34) & "," &_
        chr(34) & "test_instance_id" & chr(34) & ":" & chr(34) & strTestInstanceID        & chr(34) & "," &_
        chr(34) & "action"           & chr(34) & ":" & chr(34) & "reserve"                & chr(34) &_
        "}"

    ' strTestInstanceID must be populated (e.g. from the selected test instance)
    ' before this sub runs. Stream_StringToBinary is a helper (not shown here)
    ' that converts the string payload to a binary stream, should your TDM API
    ' require a binary body; the string itself is sent below.
    objPostData = Stream_StringToBinary(strPostData, "us-ascii")

    ' Post the payload to the TDM API
    Set oHTTP = CreateObject("Microsoft.XMLHTTP")
    oHTTP.open "POST", sUrl, False
    oHTTP.setRequestHeader "Content-Type", "application/json"
    oHTTP.setRequestHeader "Content-Length", Len(strPostData)
    oHTTP.send strPostData
End Sub

The code above passes our TDM solution unique identifiers such as username, domain name, project name, and test instance ID to uniquely assign test data. The TDM API will dictate what parameters you need to supply to reserve data. Similar code would then need to be implemented as part of your automation framework to retrieve the data for use by UFT in the automated solution described.
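For reference, the same request could be sketched outside ALM in Python. The endpoint, field names, and values below mirror the VBScript example; the URL and all values are placeholders, and the TDM API itself is the hypothetical one described above:

```python
import json
from urllib import request

def build_reserve_payload(user_id, domain_id, project_id, test_instance_id):
    """Build the JSON body for the (hypothetical) TDM reservation API."""
    return json.dumps({
        "user_id": user_id,
        "domain_id": domain_id,
        "project_id": project_id,
        "test_instance_id": test_instance_id,
        "action": "reserve",
    })

def reserve_data(url, payload):
    """POST the reservation payload, mirroring the VBScript workflow code."""
    req = request.Request(
        url,
        data=payload.encode("ascii"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)

# Build (but don't send) a sample payload; all values are placeholders.
payload = build_reserve_payload("jsmith", "DEFAULT", "TDM_DEMO", "1042")
print(json.loads(payload)["action"])  # prints "reserve"
```

The same payload-building function could be reused on the retrieval side of the automation framework, with the action changed to whatever your TDM API expects.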

Conclusion

There are so many possibilities with ALM’s workflow for integration and automation! JDS is a Micro Focus Platinum Partner and implementation specialist with nearly 15 years of experience. Find out more on our Micro Focus partner page, and get in touch today to learn how we can help your business.


Our Micro Focus stories

How to effectively manage your CMDB in ServiceNow

Configuration management is a popular topic with our customers, and JDS has been involved in a number of such projects over the years. Specifically, we deal regularly with configuration management databases, or CMDB. The CMDB is a foundational pillar in ITIL-based practices; properly implemented and maintained, it is the glue that holds together all IT operations.

A good CMDB keeps a system of logical records for all IT devices and components, known as Configuration Items (CIs). When all of your organisation’s assets and devices are logically represented, you and your executives gain greater visibility and insight into your IT environment. For example, all of your Windows servers can be tracked in the CMDB, along with their version details, which greatly aids incident resolution and analysis.

ServiceNow CMDB

The ServiceNow platform offers a fully integrated CMDB, and JDS consultants are experts at implementing and populating it. Discovery can be an onerous and time-consuming task, as you search your entire organisation for each CI and enter it into your CMDB.

ServiceNow incident record referencing a CI

Our team of ServiceNow engineers not only help with the manual processes at the outset, but we also introduce automation to ensure that all new CIs are discovered and entered as soon as they are brought into the environment.

ServiceNow CMDB Health Dashboards

The ServiceNow CMDB, like any other CMDB, faces one particular challenge: as it grows and complexity increases, it becomes increasingly difficult for administrators to keep the data accurate and relevant.

The complexity and dynamic nature of today’s IT environments means organisations move and upgrade continuously (e.g. server names and IP addresses are commonly repurposed when new projects enter the picture).

A common myth is that the CMDB is hands-free. It’s common for administrators to turn on the CMDB and populate it initially without giving enough thought to strategies for keeping the data up to date. In reality, it takes constant vigilance; without careful maintenance, its value to the organisation will quickly diminish. JDS can assist with this process.

One of the most important things an IT administrator can do is plan, with a clear scope and purpose, before enabling a CMDB. The CMDB supports other critical platform features, such as incident, problem, and change management. JDS can help further narrow down the types of CIs that should be retained (for example, excluding non-managed assets such as desktops, laptops, and non-production servers). Excluding non-managed assets avoids unnecessary licensing costs while keeping your CMDB focused and concise.

CMDB receiving data from many integration points

It’s common for an organisation to use different vendor tools and often their capabilities overlap each other. In the image above, the customer is receiving CI information from integrations with different vendor products, with each tool reporting conflicting information. JDS specialises in integration across multiple data sources and ensuring the most relevant data is ingested into your CMDB.

A CMDB is fundamentally an operational tool. It needs to be concise and ultimately informational to be useful.

While it is tempting to keep all data in your CMDB, often having too much data introduces confusion and fatigue for users. The strength of ServiceNow is its ability to link to related records, allowing you to have a rich CMDB without it being cluttered.

The benefits of Service Mapping

Service Mapping is a key feature of ServiceNow ITOM, and it’s a different approach to the traditional CMDB population methods. Rather than populating all known assets, Service Mapping adopts a service-oriented approach where only relevant devices and components for a technology or business service are tracked and viewed.

This “lean” approach focuses on the CIs that matter while avoiding many pitfalls of the traditional CMDB.

Service Mapping can be further utilised to drive the event impact dashboard (available as part of ServiceNow ITOM offerings).

Service Mapping in ServiceNow

Having fewer CIs also means that administrators avoid spending time keeping track of things that are of minimal relevance or importance.

In summary, the ServiceNow CMDB offers some great benefits for practitioners, but it’s important to have a clear understanding of the scope your organisation needs to avoid the pitfalls of a bloated CMDB. JDS recommends using ServiceNow ITOM to manage and make your CMDB more efficient. We have the breadth of skills and expertise to configure your CMDB for the needs of your organisation, as well as to work with you on the Service Mapping of your business. Contact one of our representatives today to find out more or set up an in-person discussion.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].


Our ServiceNow stories

Using Splunk to look for Spectre and Meltdown security breaches

Meltdown and Spectre are two security vulnerabilities currently impacting millions of businesses all over the world. Since news broke of the flaws in modern processor chips that opened the door to once-secure information, companies have been bulking up their system security and implementing patches to prevent a breach.

Want to make sure your system is protected from the recently disclosed Spectre and Meltdown vulnerabilities? One of our Splunk Architects, Andy Erskine, explains one of the ways JDS can leverage Splunk Enterprise Security to check that your environment has been successfully patched.

What are Spectre and Meltdown and what do I need to do?

According to the researchers who discovered the vulnerabilities, Spectre “breaks the isolation between different applications”, which allows attackers to expose data that was previously considered secure. Meltdown “breaks the most fundamental isolation between user applications and the operating system”.

Neither type of attack requires software vulnerabilities to be carried out. Labelled “side-channel attacks”, they are not tied to a particular operating system; instead, they use hardware side channels to read information from memory locations that should be inaccessible.

The best way to lower the risk of your business’s sensitive information being hacked is to apply the newly created software patches as soon as possible.

Identifying affected systems

Operating system vendors are forgoing regular patch release cycles and publishing out-of-band patches to address these vulnerabilities.

Vulnerability scanners such as Tenable Nessus, Qualys, and Tripwire IP360 regularly scan your environment for vulnerabilities like these and can identify affected systems by checking whether the newly released patches have been applied.

Each plugin created for the Spectre and Meltdown vulnerabilities will be marked with at least one of the following CVEs:

Spectre:

  • CVE-2017-5753: bounds check bypass
  • CVE-2017-5715: branch target injection

Meltdown:

  • CVE-2017-5754: rogue data cache load

To analyse whether your environment has been successfully patched, you would want to ingest data from these traditional vulnerability management tools and present the data in Splunk Enterprise Security.

Most of these tools have a Splunk app that brings the data in and maps it to the Common Information Model. From there, you can search for the specific CVEs associated with Spectre and Meltdown.

Once the data is coming into Splunk, you can create a search to discover vulnerable instances within your environment, proactively alert on them, and make them a priority for patching.

Here is an example search that customers using Splunk Enterprise Security can use to identify vulnerable endpoints:

tag=vulnerability (cve="CVE-2017-5753" OR cve="CVE-2017-5715" OR cve="CVE-2017-5754")
| table src cve pluginName first_found last_found last_fixed
| dedup src
| fillnull value=NOT_FIXED last_fixed
| search last_fixed=NOT_FIXED
| stats count as total

Example Dashboard Mock-Up

JDS consultants are experts in IT security and proud partners with Splunk. If you are looking for advice from the experts to implement or enhance Splunk Enterprise Security or any other Splunk solution, get in touch with us today.

Conclusion

To find out more about how JDS can help you with your security needs, contact our team today on 1300 780 432, or email [email protected].


ServiceNow and single sign-on

More and more, organisations are opting to use Identity Providers (IdPs) to allow their users to access multiple applications without the need to remember dozens of different user IDs and passwords.

ServiceNow supports single sign-on, but the process itself can be confusing, so this quick guide has been developed to show exactly what happens when a user authenticates against ServiceNow using SSO.


Single sign-on allows users to gain access to multiple systems without the need to set up and maintain multiple accounts, making it ideal for streamlining access to ServiceNow.

To set up single sign-on, customers need to provide detailed information on the configuration of their Identity Provider.

If you have any questions about ServiceNow or how to configure single sign-on, contact us at [email protected].

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].

Our team on the case

Our ServiceNow stories

How to customise the ServiceNow Service Portal

ServiceNow is a robust SaaS (Software-as-a-Service) platform that replaces unstructured work patterns with intelligent workflows. ServiceNow is designed to improve service levels, energise employees, and enable you to streamline repeatable processes. JDS can help you customise the ServiceNow Service Portal according to the needs of your business.


ServiceNow includes a business-facing portal built using AngularJS and Bootstrap, front-end technologies developed by Google and Twitter respectively.

Each Service Portal page is made up of discrete components (called widgets) that are interchangeable and reusable. For example, in the screenshot below, one widget is re-used three times to provide access to knowledge, catalogue, and problems just by changing a few parameters.

ServiceNow ships with over a hundred and fifty of these pre-built widgets covering all aspects of portal design, allowing customers to tailor their Service Portal to suit their specific needs.

Expanding on existing widgets

Sometimes a customer will love the look and feel of a pre-built widget but have a different idea of how it could be used.

Take the example of the Current Status widget:

This is the out-of-the-box widget for the current status of services, and it provides a good visual indication of outages, planned maintenance and service degradation, but some customers need a little more detail.

JDS cloned the original widget, leaving it intact, and simply revised the copy to include more detail. The revised widget groups outages and maintenance by business service, allowing complex information to be read easily. It also includes the ability to list multiple affected locations, and notes the duration of the event.

New widgets

Sometimes customers have ideas which require the development of new widgets. In this example, a customer wanted to use Service Portal to access their internal phonebook.

When creating new widgets, JDS still looks to leverage the existing widgets published by ServiceNow in the out-of-the-box instance to shorten the development cycle.   

Instead of an outdated portal design with complex advanced-search options, JDS re-used an existing “type ahead” widget to provide the same flexibility without any significant development complexity.

The main improvement on the out-of-the-box widget was to implement a search across multiple fields as the user types, streamlining the way users can find contact information.
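The matching logic behind that improvement can be sketched in plain JavaScript. The record shape and field names below are hypothetical, for illustration only; in the real widget the search runs server-side against the contact table:

```javascript
// Return contacts where the typed query matches any of several fields.
// Matching is case-insensitive and fires on partial input, which is what
// gives a type-ahead widget its "search as you type" feel.
function searchContacts(contacts, query) {
    var q = String(query).toLowerCase();
    return contacts.filter(function (c) {
        return ['name', 'email', 'phone', 'department'].some(function (field) {
            return (c[field] || '').toLowerCase().indexOf(q) !== -1;
        });
    });
}

// Hypothetical phonebook data, for illustration only.
var phonebook = [
    { name: 'Alice Nguyen', email: 'alice@example.com', phone: '555-0101', department: 'Finance' },
    { name: 'Bob Smith', email: 'bob@example.com', phone: '555-0102', department: 'IT' }
];

// A partial match on any field returns the contact.
console.log(searchContacts(phonebook, 'fin').length); // 1 (matches Alice's department)
```

Because every keystroke narrows the result set across all fields at once, users can find a contact by whatever fragment they remember: part of a name, a phone number, or a department.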

Entire applications

In addition to expanding on existing widgets and creating new widgets, JDS can develop custom single-page applications (SPAs) on top of the Service Portal.  These may integrate with modules in the base platform and can be designed using completely new widgets or as a mix of existing and new components.

Ordinarily, the creation of a custom application requires considerable architectural design to ensure the proposed solution is secure and scalable. But by providing a robust secure platform, ServiceNow allows organisations to build upon the already proven Service Portal to develop their own applications without investing heavily in cloud infrastructure.

In this example, a government agency needed to manage the recording of courtrooms, as well as the annotation of court transcripts in a single-page application that could be operated on a touch-screen device. JDS worked with the agency to develop a design that leveraged existing ServiceNow widgets where possible, and connected these to the core ServiceNow functionality to provide a flexible solution.

From small projects to large, JDS has the specialist capability to use ServiceNow to enhance your business activities and streamline work processes.

Our team on the case

To find out more about how JDS can help you with your ServiceNow needs, contact one of our consultants today on 1300 780 432, or email [email protected].

Our ServiceNow stories

Integrating a hand-signed signature to an Incident Form in ServiceNow

Sometimes a business needs to save their client's signature on an incident to prove that someone has come out to their place of work and rectified the incident. Kind of like when a postman delivers a package, they need your signature to prove that the package was delivered.

Previously, ServiceNow did not have this ability out of the box. A third-party plugin was needed for this to work.

However, the Jakarta version of ServiceNow has this capability now, and it’s easy to use. We’ll walk you through the process below.

Go to System Definition > Plugins

Ensure the following plugins are active:

If they're not, then these need to be activated.

Once activated, the UI Page that was created can be modified as required.

Navigate to System UI > UI Pages

The Name of the UI Page that needs to be edited is wo_accept_signature_mobile.

Customise the HTML as required (add logos, text, etc.)

In the Processing Script section, customise what should happen when submitted.

By default, the signature is saved into a table named "signature_image". It’s a good idea to create a new reference field in your incident record that references the signature_image table.

That way, once submitted, the incident record can be updated with the signature.

Here's some sample code to start with (within the OK section of the code):

var sig = new GlideRecord('signature_image');
sig.addQuery('document', document_id);
// Order by most recent, in case multiple signatures were made;
// that way, only the latest signature is used.
sig.orderByDesc('signed_on');
sig.query();

var sigID = '';
var sigTable = '';
if (sig.next()) {
    sig.signed_name = signee_name;
    sigID = sig.sys_id;
    sigTable = sig.table;
    sig.update();
}

// Only update the parent record if a signature was actually found.
if (sigTable) {
    var inc = new GlideRecord(sigTable);
    inc.addQuery('sys_id', document_id);
    inc.query();
    if (inc.next()) {
        inc.u_signature_reference = sigID;
        inc.update();
    }
}

response.sendRedirect(nav_url);
var message = gs.getMessage('The signature has been accepted');
gs.addInfoMessage(message);

Once the UI pages are ready, they need to be added as a UI action for the mobile device.

Navigate to System Mobile UI > UI Actions – Mobile

Click on New to create a new mobile UI action for an Incident.

In this example, it's been set so that once the action is clicked on, it saves whatever changes were made, then opens the venue_signature UI page that was defined before.

In this case, all that is required is the current.sys_id. However, to make it more flexible and work with other tables (aside from Incident), the table_name is also sent.
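As a sketch of how those parameters might be passed, the UI action could build its redirect URL with a small helper like the one below. The `buildSignatureUrl` helper and the parameter names are illustrative, not ServiceNow APIs; the `venue_signature` page name comes from the example above:

```javascript
// Build the redirect URL for the signature UI page. Passing the table
// name as well as the sys_id lets the same page serve records from
// tables other than Incident.
function buildSignatureUrl(page, sysId, tableName) {
    return page + '.do?sysparm_document_id=' + encodeURIComponent(sysId) +
        '&sysparm_table_name=' + encodeURIComponent(tableName);
}

// Inside the UI action itself this would be used roughly as:
//   current.update();
//   action.setRedirectURL(buildSignatureUrl('venue_signature', current.sys_id, 'incident'));
console.log(buildSignatureUrl('venue_signature', 'abc123', 'incident'));
```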

Note that this will work with desktop UI Actions too. So if a touchscreen is available, customer signatures can be added through there as well. Otherwise, a mouse can be used to get the customer's signature for the incident.

Need support for your ServiceNow instance? JDS specialises in providing support, both throughout the implementation stage and after implementation is complete. Contact our Support team for more information.

Our team on the case

Tech tips from JDS

Integrating OMi (Operations Manager i) with ServiceNow

We have delivered Micro Focus solutions and migrated "Operations Manager for Windows" (OMW) to the innovative new OMi for a number of customers. In most of these cases, the requirement was also to integrate ServiceNow—a request that has been growing in popularity. In each case where JDS has provided OMi to ServiceNow integration, it has proven successful and satisfying for our customers.

OMi has been tested over time and is built on a firm foundation. Its design is robust enough to handle virtually any event and service model scenario, the GUI is customisable, and the out-of-the-box designs provide a fully featured event and service model that works well from both the operations and support perspectives.

The integration with ServiceNow is relatively straightforward. It requires a little Groovy scripting knowledge; generally speaking, someone with intermediate scripting experience can develop a suitable connector script. The script is configured under the Connected Servers option in the managed scripts.

There are some examples provided; one in particular is the "LogFile Adapter", and with the use of the OMi extensibility guide, these examples can easily translate into useful real-world cases.

You will need to create an account in ServiceNow and grant it the "Web service access" role so it can make web service API calls. Additionally, you will need an account in OMi so that ServiceNow can interact with the OMi web services.

Once the OMi connected server to ServiceNow is enabled, Event Forwarding rules can be tailored to use simple event filtering. These filters are used to select and automate events for forwarding and synchronisation with the Connected Server. As an additional option, the integration allows you to right-click on an event and manually transfer it to ServiceNow for incident creation and synchronisation.

With the sample filter shown here, an event is selected for forwarding when it matches the filter: a critical event, in any lifecycle state, with its priority set to highest, high, or unknown.

In ServiceNow, it's good practice to have an import table where a transformation map is executed, thereby transforming the forwarded event values to matching values in the ServiceNow incident table. A ServiceNow Business Rule can also be applied to further shape the event data before it's inserted into the incident table.
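For example, a transform map field script or Business Rule often needs to translate an OMi event severity into a ServiceNow incident priority. A minimal sketch of such a mapping is below; the value pairings are assumptions to adapt to your own priority model, not an OMi standard:

```javascript
// Map an OMi event severity onto a ServiceNow incident priority value.
// The pairings below are illustrative; adjust them to your own model.
function severityToPriority(severity) {
    var map = {
        critical: 1, // Critical
        major: 2,    // High
        minor: 3,    // Moderate
        warning: 4,  // Low
        normal: 5    // Planning
    };
    var p = map[String(severity).toLowerCase()];
    return p !== undefined ? p : 5; // unknown severities default to lowest
}

console.log(severityToPriority('Critical')); // 1
```

Keeping this translation in one function (rather than scattering conditionals through the transform map) makes it easy to adjust the pairings later without touching the rest of the integration.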

An example of an import table containing the event data fields we want to transform is below:

Here is an example of a transformation map source field for “description”. The target is set for the incident table to match on the target field “description”.

The incident table can be modified to include an OMi event ID field, transformed in the same way as the description example above. This is important so the incident can easily be identified as originating from OMi. A Business Rule can then check whether this field contains a GUID value; if one exists in the incident's omi_id field, the Business Rule's advanced actions can be triggered, based on its conditions, to sync any changes to the incident back to the event in OMi.
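That GUID check can be sketched as a simple condition function. This assumes the common 8-4-4-4-12 hexadecimal GUID form; verify it against the event IDs your OMi version actually emits:

```javascript
// True when the value looks like a GUID in the common 8-4-4-4-12 hex form.
// A non-empty omi_id matching this pattern indicates the incident was
// created by the OMi integration, so changes should sync back to the event.
function isOmiGenerated(omiId) {
    var guid = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
    return guid.test(omiId || '');
}
```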

The change of the incident Status, Priority, Assigned to, Description, Cause, Work Notes (Annotation), etc. can match that from the ServiceNow incident back to the event’s fields in OMi.

Once the incident in ServiceNow is closed, a Business Rule can trigger the closure of the event and provide:

  • The Work Notes to the event annotation
  • Resolution notes to the solutions field
  • Resolution code to the description field back to the event in OMi

In ServiceNow, when we solve and close the incident, the incident created by the OMi integration will show its state set to Closed.

The incident's Work Notes are required each time the incident is updated, and these are added to the event as well.

Upon incident closure, you are required to populate the incident resolution fields.

After the operator submits the update, the Business Rule for OMi-generated incidents is triggered, syncing the incident details back to the event and closing it.

The event is closed by an outgoing web service request: the ServiceNow Business Rule calls a REST Message that sends an XML payload in a REST POST to OMi.
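A sketch of assembling such an XML payload before the POST is below. The element names and namespace are illustrative and should be checked against the OMi extensibility guide for your version:

```javascript
// Build a minimal XML body asking OMi to close an event, carrying the
// incident's resolution text into the event's solution field.
// Element names/namespace are illustrative; verify against your OMi version.
function buildCloseEventXml(solutionText) {
    // Escape characters that would break the XML document.
    function esc(s) {
        return String(s)
            .replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;');
    }
    return '<event xmlns="http://www.hp.com/2009/software/opr/data_model">' +
               '<state>closed</state>' +
               '<solution>' + esc(solutionText) + '</solution>' +
           '</event>';
}

// The event ID itself would typically go in the request URL, not the body.
console.log(buildCloseEventXml('Restarted service & verified recovery'));
```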

The Solution here is updated by the incident’s resolution information.

Event annotation is updated by the incident’s Work Notes.

Event history details showing the flow of updates to the event that occurred.

In summary, keeping events in sync with the ServiceNow ticketing system is relatively simple. OMi can forward events to an external event processing server via Connected Servers, making the integration of event management and ticketing systems a solid, all-round solution for tracking events and incidents. Some scripting is required, along with in-depth OMi and ServiceNow product knowledge.

JDS has several consultants on the team with this combination of skills and knowledge, and we would be happy to discuss implementing a similar solution for your organisation at any time.

Our team on the case

Tech tips from JDS

Implementing an electronic signature in ALM



This is where organisations like the FDA (Food and Drug Administration in the United States) and TGA (Therapeutic Goods Administration in Australia) come in, enforcing regulatory controls around all aspects of the manufacturing process to minimise risk and ensure a high level of quality.

These regulatory bodies understand that effective quality assurance goes much further than just regulating the raw materials and manufacturing process. Any software used to control or support the manufacturing process must also adhere to strict quality standards, because a bug in the software can lead to problems in manufacturing with real-world consequences for patients. Software quality assurance can therefore literally be a matter of life or death.

To ensure that medical manufacturers conduct satisfactory software testing and maintain the required quality assurance standards, the FDA has published the General Principles of Software Validation document which “outlines general validation principles that the Food and Drug Administration (FDA) considers to be applicable to the validation of medical device software or the validation of software used to design, develop, or manufacture medical devices.”

The JDS solution

JDS Australia recently implemented HPE Application Lifecycle Management (ALM) for one of our clients, a manufacturer of medicine delivering more than 10,000 patient doses per week to hospitals in Australia and overseas. ALM is a software testing tool that assists with the end-to-end management of the software testing lifecycle. This includes defining functional and non-functional requirements for a given application and creating test cases to confirm those requirements are met. It also manages all aspects of test execution, the recording and managing of defects, and reporting across the entire testing effort. ALM enables an organisation to implement and enforce their test strategy in a controlled and structured manner, while providing a complete audit trail of all the testing that was performed.

To comply with the FDA requirements, our client required customisation of ALM to facilitate approvals and sign-off of various testing artefacts (test cases, test executions, and defects). The FDA stipulates that approvals must incorporate an electronic signature that validates the user’s identity to ensure the integrity of the process. As an out-of-the-box implementation, ALM does not provide users with the ability to electronically sign artefacts. JDS developed the eSignature add-in to provide this capability and ensure that our client conforms to the regulatory requirements of the FDA.

The JDS eSignature Add-in was developed in C# and consists of a small COM-aware dll file that is installed and registered on the client machine together with the standard ALM client. The eSignature component handles basic field-level checks and credential validation, while all other business rules are managed from the ALM Workflow Customisation. This gives JDS the ability to implement the electronic signature requirements as stipulated by the FDA, while giving us the flexibility to develop customer-specific customisations and implement future enhancements without the need to recompile and reinstall the eSignature component.

Let’s look at a simple test manager approval for a test case to see how it works.

To start with, new “approval” custom fields are added to the Test Plan module which may contain data such as the approval status, a reason/comment and the date and time that the approval was made. These fields are read-only with their values set through the eSignature Workflow customisation. A new “Approve Test” button is added to the toolbar. When the user clicks this button, the Test Approvals form is presented to the user who selects the appropriate approval status, provides a comment, and enters their credentials. When the OK button is clicked, the eSignature component authenticates the user in ALM using an API function from the HPE Open Test Architecture (OTA) COM library.

If the user is successfully authenticated, eSignature passes the relevant information to the ALM workflow script which sets the appropriate field values. The approvals functionality can be further customised to do things such as making the test case read-only or sending an email to the next approver in line to review and approve the test case.

As it currently stands, the eSignature has been implemented in three modules of ALM – Test Plan for test cases that require two levels of review and approval, Test Lab for test execution records that require three levels of review and approval, and Defects to manage the assignment and approvals of defects. This can easily be expanded to include other modules, such as the approvals of test requirements or software releases.

The JDS eSignature add-in has a very small footprint, is easy to install and configure, and provides our clients with the ability to effectively implement an electronic signature capability as part of their software testing strategy.

If you have compliance requirements or are seeking ways to automate your test management processes, contact our support team at JDS Australia. Our consultants are highly skilled across a range of software suites and IT platforms, and we will work with your business to develop custom solutions that work for you.

Contact us at:

T: 1300 780 432

E: [email protected]

Our team on the case

Why choose JDS?

At JDS, our purpose is to ensure your IT systems work wherever, however, and whenever they are needed. Our expert consultants will help you identify current or potential business issues, and then develop customised solutions to suit you.

JDS is different from other providers in the market. We offer 24/7 monitoring capabilities and support throughout the entire application lifecycle. We give your IT Operations team visibility into the health of your IT systems, enabling them to identify and resolve issues quickly.

We are passionate about what we do, working seamlessly with you to ensure you are getting the best possible performance from your environment. All products sold by JDS are backed by our local Tier One support desk, ensuring a stress-free solution for the entire product lifecycle.

Service portal simplicity

The introduction of the Service Portal, using AngularJS and Bootstrap, has given ServiceNow considerable flexibility, allowing customers to develop portals to suit their own specific needs.

While attribution depends on whether you subscribe to Voltaire, Winston Churchill, or Uncle Ben from Spider-Man, “With great power comes great responsibility.” This is especially true of the Service Portal: customers should tread lightly and use its flexibility wisely.

A good example arose during a recent customer engagement, where the requirement was to allow some self-service portal users to see all the incidents within their company. This particular customer provides services to a range of smaller companies, and wanted key personnel in those companies to see all of their own incidents (without being able to see those belonging to other companies, and without being able to update or change others’ incidents).

The temptation was to build a custom widget from scratch using AngularJS, but simplicity won the day. Instead of coding, testing, and debugging a custom widget, JDS reused an existing widget, wrapping it with a simple security check, and reduced the amount of code required to implement this requirement down to one line of HTML and two lines of server-side code.

The JDS approach was to instantiate a simple list widget on-the-fly, but only if the customer’s security requirements were met.

Normally, portal pages are designed using Service Portal’s drag-and-drop page designer, visible when you navigate to portal configuration. In this case, we’re embedding a widget within a widget.

We’ll create a new widget called secure-list that dynamically adds the simple-list-widget only if needed when a user visits the page.

Let’s look at the code—all three lines of it:

HTML

<div><sp-widget widget="data.listWidget"></sp-widget></div>

By dynamically creating a widget only if it meets certain requirements, we can control when this particular list widget is built.

Server-Side Code

(function() {
    if (gs.getUser().isMemberOf('View all incidents')) {
        data.listWidget = $sp.getWidget('widget-simple-list', {table:'incident',display_field:'number',secondary_fields:'short_description,category,priority',glyph:'user',filter:'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',sp_page:'ticket',color:'primary',size:'medium'});
    }
})();

This code will only execute if the current user is a member of the group View all incidents, but the magic occurs in the $sp.getWidget, as this is where ServiceNow builds a widget on-the-fly. The challenge is really, ‘where can we find all these options?’

How do you know what options are needed for any particular widget, given those available change from one widget to another? The answer is surprisingly simple, and can be applied to any widget.

When viewing the Service Portal, administrators can use Ctrl+right-click to examine any widget on the page. In this case, we’ll examine the way the simple list widget is used by selecting Instance Options.

Immediately, we can see there are a lot of useful values that could be set, but how do we know which are critical, and what additional options there are?

How do we know what we should reference in our code?

Is “Maximum entries” called max, or max_entry, or max_entries? Also, are there any other configuration options that might be useful to us?

By looking at the design of the widget itself in the ServiceNow platform (by navigating to Service Portal > Widgets), we can scroll down and see both the mandatory and optional configuration values that can be set per widget, along with the correct names to use within our script.

Looking at the actual widget we want to build on-the-fly, we can see both required fields and a bunch of optional schema fields. All we need to do to make our widget work is supply these as they relate to our requirements. Simple.

{table:'incident',display_field:'number',secondary_fields:'short_description,category,priority',glyph:'user',filter:'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',sp_page:'ticket',color:'primary',size:'medium'}

Also, notice the filter in our widget settings, as that limits results shown according to the user’s company…

opened_by.company=javascript:gs.getUser().getCompanyID()

This could be replaced with a dynamic filter to allow even more flexibility, should requirements change over time, allowing the filter to be modified without changing any code.

The moral of the story is keep things simple and…

Don't reinvent the wheel

Our team on the case