Author: JDS

Understanding your Customer Journeys in Salesforce with AppDynamics

The Problem

JDS Australia works with numerous customers who use the Force.com platform as the primary interface for their end users (internal and external) to execute business-critical services. The flexibility and extensibility of the component-based Lightning framework has allowed businesses to customise the platform to meet their specific requirements.

However, many of these companies struggle to monitor, quantify or pinpoint the impact of Salesforce platform performance on their end users and, ultimately, their business. Furthermore, there is limited capability to provide detailed Salesforce information for root cause analysis (e.g. is the problem with a particular Lightning component, the core Salesforce platform, or spread across multiple pages?).

The Solution

Using AppDynamics Browser Real User Monitoring (RUM) coupled with advanced JavaScript configuration, we have created a solution.

Unlike traditional methods involving logfile or API based monitoring, real user monitoring collects rich metrics from the end users’ perspective. JDS has further integrated custom code to identify AJAX requests and inject page names into the event stream, providing the business context needed to make sense of the data.
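
JDS’s exact code isn’t reproduced here, but as a rough sketch of the technique: the AppDynamics JavaScript agent exposes an adrum-config object with userEventInfo callbacks for renaming page and AJAX events. Everything below (the callback keys, the use of document.title as the name source) is illustrative only and should be checked against your agent version’s documentation; the configuration must be set before the adrum.js agent script loads.

// Illustrative sketch only: inject business-friendly page names into the
// AppDynamics RUM event stream. document.title is a stand-in for real
// Lightning navigation parsing; this is not the JDS implementation.
window['adrum-config'] = {
    userEventInfo: {
        PageView: function (context) {
            return { userPageName: 'Salesforce: ' + (document.title || 'Unknown page') };
        },
        Ajax: function (context) {
            // tag AJAX requests with the page they originated from
            return { userPageName: 'Salesforce AJAX: ' + (document.title || 'Unknown page') };
        }
    }
};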

Dashboards provide Salesforce Performance at a glance

Additionally, AppDynamics RUM is able to identify and dynamically visualise each step of Customer Journeys as they traverse Salesforce, in near real time. 

Using the collated metrics, these businesses have been able to proactively alert support teams to issues, and to analyse historic data to understand how customers actually use the platform; for example, comparing expected user journeys with actual user journeys.

AppDynamics RUM captures detailed diagnostic information to help triage issues, including:

  • Single Page Application performance
  • Page component load details
  • AJAX requests
  • Detailed error snapshots
  • Dynamic business transaction baselining (normal vs slow performance)
  • Browser version and device type
  • Geographic location of users
  • Connection method (e.g. browser vs mobile)

AppDynamics RUM can also correlate directly with AppDynamics APM agents to combine the ‘front-end’ and ‘back-end’ of these user sessions where Salesforce traverses additional downstream applications and infrastructure.

Why JDS?

As experts in Application Performance Management (APM) and Observability, JDS has extensive experience in helping our customers determine the root cause of performance issues.

Contact us at [email protected] to discuss how monitoring Salesforce can be used to understand your end users and make informed decisions with quantifiable metrics. 

Virtual Agent Is Your Friend

Don’t underestimate the importance of user satisfaction

If there’s one defining characteristic of the social media revolution, it’s “make life easy.”

Why did Facebook win out over MySpace? Facebook made it easy to connect, easy to post, easy to find people, easy to interact.

Amazon, Google, Twitter and Facebook have spent the last decade refining their technology to lower the barrier to entry for users, making their web applications highly accessible. Have you ever wondered why Google only shows the first ten entries for each search when it could show twenty, fifty or a hundred? Google found that 10 results returned in 0.4 seconds, while 30 results took 0.9 seconds, and that extra half a second cost them 20% of their traffic because users were impatient. User satisfaction is the golden rule of online services, and so 10 results per page is now standard across search engines, even though these days the difference is probably much smaller.

When it comes to ServiceNow, organisations should focus on user satisfaction as a way of increasing productivity. ServiceNow allows organisations to treat both their internal staff and their customers with respect, offering services that are snappy, intelligent and well designed. To this end, ServiceNow has developed a number of offerings including Virtual Agent.

What is Virtual Agent?

To call Virtual Agent a chatbot would be selling it short. Virtual Agent is a channel for users to quickly and easily get answers to their questions. It is a state-of-the-art system that leverages Natural Language Understanding (NLU) and a complex, decision-based response engine to meet a user’s expectations without wasting their time.

The Natural Language Understanding machine learning engine used by ServiceNow is trained to understand conversational chats using Wikipedia and The Washington Post, and can be enhanced with organisation-specific words and phrases. Natural Language Understanding is the gateway for users to reach a catalogue of prebuilt workflows that resolve common issues.

The Virtual Agent Designer allows for sophisticated workflows with complex decision-making. Not only does this reduce the burden on first-level support, it drastically reduces the resolution time for common issues, raising user satisfaction with the services your organisation provides.

But the real genius behind Virtual Agent is it can be run from ANYWHERE

A common problem experienced by organisations with ServiceNow is managing multiple corporate websites. The ServiceNow self-service portal can be seen by some users as yet another corporate web instance and a bridge too far, reducing the adoption of self-service. To combat this, ServiceNow allows its Virtual Agent to be deployed ANYWHERE. As an example, it’s on this WordPress page! Go ahead, give it a try. As soon as you click on “chat”, you’re interacting with the JDSAustraliaDemo1 instance of ServiceNow!

By allowing the Virtual Agent to run from anywhere, customers can incorporate ServiceNow functionality into their other websites, giving users easy access to the services and offerings available through ServiceNow.

Keep your users happy. Start using Virtual Agent.

Introducing the LoadRunner family for 2020

Micro Focus LoadRunner has long been the industry standard and leading solution for performance testing, with LoadRunner (on-premise), Performance Center (enterprise) and StormRunner Load (cloud). As we move into 2020, Micro Focus has standardised these solutions under the LoadRunner banner and re-architected some of them. Meet the new family:

  • LoadRunner Professional (formerly LoadRunner): an on-premise solution for small teams conducting performance tests.
  • LoadRunner Enterprise (formerly Performance Center): aimed at enterprise deployments where multiple performance tests run in collaboration.
  • LoadRunner Cloud (formerly StormRunner Load): a cloud-based, scalable solution for performance testing.

DevWeb (renamed from TruWeb) is now a fully supported VuGen protocol.

 

Feature Highlights for 2020

LoadRunner Professional 2020

  • Improved protocol support in DevWeb, TruClient, Web Services and SAP-Web
  • Modernized Controller Online Graphs

LoadRunner Enterprise 2020

  • Support for Elastic Cloud based load generators
  • SSO Authentication
  • ALM Database decoupling

LoadRunner Cloud 2019.12

  • New analysis module enhancements
  • Pause scheduling during a load test run
  • Goal oriented mode enhancements
  • Support for DevWeb scripts
  • Automatic syncing of Git scripts
  • Enhanced Azure DevOps integration
  • New public API operations

 

LoadRunner Cloud 2019.12

With the LoadRunner Cloud team embracing a continuous delivery approach, version numbers are now based on the year and month the update is delivered. This release is 2019.12 (December 2019). With planned quarterly product updates and releases, the next update is expected in 2020.02 (February 2020).

The latest release adds features to the real-time test views, improves DevOps integration, and formalises DevWeb (formerly known as TruWeb) as a supported protocol.

A brief summary of the most interesting and visible changes:

  • Transaction response time breakdown metrics are now available in the real-time dashboards

  • Azure DevOps Integration

LoadRunner Cloud now integrates with your Azure DevOps pipelines, providing a summary report for a quick view, with a detailed report also available for analysis. When using the LoadRunner Cloud integration with Azure DevOps, on completion of a task you can view a new artifact published on the Summary tab and a brief report on the LoadRunner Cloud tab.

 

JDS consultants have more than 15 years of experience working with LoadRunner, and JDS is a Platinum Partner of Micro Focus in Australia. Combined with our strong experience in DevOps lifecycle management, this gives JDS an edge in performance testing that is unmatched by other performance testers. If you are in the process of introducing a new system or application, make sure you schedule a performance test with JDS.

Conclusion

Contact our Micro Focus & DevOps team today on 1300 780 432 or at [email protected].


Using Common Functions in the Service Catalog

ServiceNow’s service portal offers a lot of flexibility for customers wanting to offer complex and sophisticated offerings to their users. Catalog client scripts can run on load, on change and on submit, but often there’s a need for a common library of functions to be shared by these scripts (so they’re maintained in just one place and produce consistent results).

For example, if the start date, end date or the SAP position changes, the same script needs to run to calculate who the approvers are for a particular request.

Rather than having three separate versions of the same script, we want to be able to store our logic in one place. Here’s how we can do it.

 

Isolate Script

Although the latest versions of ServiceNow (London, Madrid, etc.) allow scripts to be isolated or not, giving ServiceNow admins the option of protecting (isolating) their scripts or accessing broader libraries, in practice this can be a little frustrating to implement. In our example, we’ll use an alternative method to introduce external JavaScript libraries.

 

UI Scripts

UI scripts, like the one listed below, are very powerful, but they’re also very broad, being applied EVERYWHERE and ALWAYS, so we’ll tread lightly and simply add a function that sets up DOM access for our client scripts.
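
The original post showed this UI script as a screenshot; a minimal reconstruction (with variable names assumed from the references later in this article) might look like this:

// Global UI script: expose browser objects for catalog client scripts.
// A minimal sketch; the original article's script may differ.
var myWindow = window;           // the browser window object
var myDocument = document;       // the DOM document object
var myAngular = window.angular;  // the angular object used by the portal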

As you can see, we now have some variables we can reference to give us access to the document object, the window object and the angular object from anywhere within ServiceNow.

In theory, we could attach our SAP position script here and it would be accessible, but it would also be loaded on EVERY page ServiceNow loads, which is not good. What we want is a global function accessible only from WITHIN our catalog item, so we’ll put it in an ON LOAD catalog client script using our new myWindow object.

The format we’re using is…

myWindow.functionName = function(){
    console.log('this is an example');
};

This function can then be called from ANYWHERE within our catalog item (on change or on submit). Also, notice the semicolon at the end of the function definition; don’t forget it, as we’re assigning the function to an object property rather than declaring it.

Now, though, any time we want to call that common function, we can do so with a single line of code.
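
For example, if the shared function were named calculateApprovers (a hypothetical name for the SAP approver logic described above), any onChange or onSubmit catalog client script could simply call:

myWindow.calculateApprovers();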

 

Following this approach makes maintenance of the logic used by the approval process easy to find and alter going forward.

Conclusion

To learn more about how JDS can optimize the performance of ServiceNow, contact our team today on 1300 780 432, or email [email protected].


Finding Exoplanets with Splunk

Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices, etc, but there’s no limit to the complexity of data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around another star.

The first confirmed exoplanet orbiting a Sun-like star was discovered in 1995 around 51 Pegasi, which makes this an exciting, emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were ‘hot Jupiters’, enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered at all sizes and out to distances comparable with planets in our own solar system. We have even discovered exomoons!

 

How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

The dip in starlight intensity can be just a fraction of a percent, but it’s enough to signal that a planet is transiting the star.

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, after (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun, he was ordered to begin a new search for the Great Southern Land we know as Australia. Cook’s observation of the transit of Venus used largely the same technique as NASA’s Hubble, Kepler and TESS space telescopes, but on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may contain life!

How can we use Splunk to find an exoplanet?

Science thrives on open data.

All the raw information captured by Earth-based and space-based telescopes like TESS is publicly available, but there’s a mountain of data to sift through, and it’s difficult to spot needles in this celestial haystack, making it an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, specifically the photometric data containing 642 light curves, to look for exoplanets. I used wget in Linux to retrieve the raw data as text files, but it is possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

Some examples of API queries that can be run against the MAST are:

The raw data for a given observation appears as:

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored as either HJD (Heliocentric Julian Dates) or BJD (Barycentric Julian Dates), centred on the Sun and the solar system’s barycentre respectively (with a difference of only about 4 seconds between them).

Some observatories use MJD, the Modified Julian Date (the Julian Date minus 2,400,000.5, which equates to midnight on November 17, 1858). It sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - expressed in days, with the value beyond the decimal point being the fraction of that day at which the observation occurred
  • Normalized Flux - the apparent brightness of the star
  • Normalized Flux Uncertainty - any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant, it can be ignored)

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC and run into the millions, like 2,455,059.6261813, where the integer is the number of days elapsed since then and the decimal fraction is the portion of the day. A day-fraction of 0.00001 corresponds to 0.864 seconds, so multiplying the fraction by 86,400 gives the seconds elapsed since noon on any given Julian day. Confused? Well, your computer won’t be, as it loves working in decimals and fractions, so although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian Date for the start of Epoch, 1 January 1970 00:00:00.0 UT, is 2440587.5
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.1261813
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch - floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.
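
To make the conversion concrete, here’s the same arithmetic as a small JavaScript sketch (the function name hjdToEpochMillis is ours, for illustration only):

// Convert a (Heliocentric) Julian Date to Unix epoch milliseconds.
// 2440587.5 is the Julian Date of 1970-01-01 00:00:00 UT, so subtracting it
// also handles the half-day shift from noon-based Julian Dates.
function hjdToEpochMillis(hjd) {
    var daysSinceEpoch = hjd - 2440587.5;
    var epochDay = Math.floor(daysSinceEpoch) * 86400000;
    var epochTime = Math.floor((daysSinceEpoch - Math.floor(daysSinceEpoch)) * 86400000);
    return epochDay + epochTime;
}

hjdToEpochMillis(2455059.6261813); // 1250391702064, i.e. 16 August 2009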

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000))
EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))
EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"
| rex field=_raw "\s+(?P<exo_HJD>24\d+.\d+)\s+(?P<exo_flux>[-]?\d+.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+.\d+)"
| timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if they’re orbiting on the same plane we’re observing. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath, depending on your vantage point). With the next generation of space telescopes, like the James Webb Space Telescope, we’ll be able to see these in greater detail.

 

Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now that we’ve brought data into Splunk, we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometre Array (SKA), which comes online in 2020, will create more data each day than is produced on the Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude and duration of their transits to identify each planet individually. Over the course of billions of years, the motion of planets around a star falls into a pattern known as orbital resonance, which can be predicted and tested with Splunk to distinguish between planets, and even used to predict undetected planets!

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.


ServiceNow Archiving

Picture this: you've had ServiceNow in your organisation for a couple of years, and you’re starting to notice the number of older records accruing, perhaps to the point where performance isn’t what it used to be. Not all of us have the processing power of the Matrix to handle everything at once!

Forgotten records are not an isolated problem for businesses, but they’re an easy issue to address, especially if you want to improve instance speed and efficiency. Unlike overflowing filing cabinets in your office, the impact is easily missed until problems arise. Now is as good a time as ever to embrace the bright start of the new year and consider refreshing your data to improve system performance.

ServiceNow can help with the System Archiving application. This allows you to configure archival and disposal rules specific to your business recordkeeping.

Archiving

Archiving simply moves data to an archive table using rules configured to meet your retention requirements. These rules run automatically, ensuring data refresh is ongoing.

In the example below, an SME business has over 5,000 incident records, and they’re keen to automatically archive incidents which are inactive and were closed over a year ago.

 

The key point to remember is not to archive records which are likely to be required for reporting. While archived records can be restored to their original tables, they are not designed to be reported on, nor are they optimised for searching.

Disposal

Now our SME has archival rules, their next step would be to review the disposal rules. As with hard copies, no one wants records sitting in archive for all eternity. That’s a lot of filing cabinets!

Ideally, our SME would work closely with records managers and data owners to agree on when records can safely be disposed of. For example, it could be agreed that two years after the archival date of an incident record, it can safely be disposed of. Government regulations often require 5 to 7 years’ retention for sensitive data, and when that date rolls around, disposal rules can automatically rid you of the load.

Conclusion

Is 2019 the year you consider the automated refresh of ServiceNow records for your business? It doesn't need to be complicated! JDS can work with you to review your ongoing needs and help determine safe archival and disposal rules that suit. Reach out to us and we can help you.


Browser Console

When working on ServiceNow portal widgets, it can be useful to write information out to the browser’s console log.

You can display the browser console by pressing F12 but, as you’ll notice, the console is a bit noisy. Writing information to the console is useful, but it can be difficult to spot the exact information you’re looking for.

There are a number of console commands, but in this article we’ll only focus on the log action and how that can be used to simplify debugging a service portal widget in ServiceNow.

In JavaScript, all that’s needed to write to the log is…

console.log('this is important information')

But try finding that in your log when the log extends for a few pages.

There are a couple of tricks to simplify this; one is to add a dash of color.

console.log('%cthis is important information','color:red')

You can even add different colors to highlight different pieces of information by adding multiple styling breaks.

var thisObject = {'name':'John Smith', 'address':'123 Eagle St', 'company':'JDS Australia', 'occupation':'ServiceNow consultant'};

for (var thisField in thisObject){
    console.log('%c' + thisField + ' = %c' + thisObject[thisField], 'color:green', 'color:red');
}

As you can see, it’s very easy to find the debugging information we’re looking for, but there’s one other tip that might come in handy, and that’s using the console filter.

At the top of the console log there’s a filter that can allow you to isolate exactly what you’re looking for, allowing you to remove all the noise.

If we add a unique preface to all our log statements, we can then filter on it to see only the information that’s important to us. In this example, we’ll use a double colon.

console.log('%c::this is important information','color:red');

var thisObject = {'name':'John Smith', 'address':'123 Eagle St', 'company':'JDS Australia', 'occupation':'ServiceNow consultant'};

for (var thisField in thisObject){
    console.log('%c::' + thisField + ' = %c' + thisObject[thisField],'color:green','color:red');
}

The console log is a useful way of streamlining portal development, so use it to the fullest by filtering and/or coloring your output so you can debug your widgets with ease.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you.


Glide Variables

ServiceNow uses a special, highly flexible type of variable to store information in what appears to be a single field but is actually a complex storage/management system, with a database column type called glide_var.

As each record can have a different number of variables stored as key/value pairs, there’s no easy way of dot-walking to the name of a variable within the glide_var, as the names can change from record to record within the same table! You can, however, detect and retrieve variables from a glide_var by treating the GlideRecord field as an object.

In this example, from an Automated Test Framework step, you can see each of the variables and their values from the glide_var database column called inputs.

var gr = new GlideRecord('the table you are looking at');
gr.get('sys_id of the record you are looking at');

// 'inputs' is the glide_var column; treat it as a plain object
for (var eachVariable in gr.inputs){
    gs.info(eachVariable + ' : ' + gr.inputs[eachVariable]);
}

If you run this in a background script you’ll see precisely which variables exist and what their values are.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you.


Governance, Risk & Compliance

ServiceNow has implemented Governance, Risk and Compliance (GRC) based on the OCEG (Open Compliance & Ethics Group) GRC Capability Model.

What is GRC?

  • Governance allows an organisation to reliably achieve its objectives
  • Risk addresses uncertainty in a structured manner
  • Compliance ensures business activities are undertaken with integrity

Whether organisations formally recognise GRC or not, they all need to undertake some form of governance over their business activities or they will not be able to reliably achieve their goals.

When it comes to risk, recognising and addressing uncertainty ensures the durability of an organisation before it is placed in a position where it is under stress. Public and government expectations are that organisations will act with integrity; failure to do so may result in a loss of revenue, loss of social standing and possibly government fines or loss of licensing.

Governance, Risk and Compliance is built around the authority documents, policies and risks identified by the organisation as important.

Depending on the industry, there are a number of standards authorities and government regulations that form the basis for documents of authority, providing specific compliance regulations. ISO (the International Organization for Standardization) has established quality assurance standards such as ISO 9000, risk management frameworks such as ISO 31000, and the ISO 27000 standards for information security management.

In addition to these, various governments may demand adherence to standards developed to protect the public, such as Sarbanes-Oxley (to protect investors against possible fraud), HIPAA (the US Health Insurance Portability and Accountability Act of 1996) and GDPR (the European Union’s General Data Protection Regulation). ServiceNow’s GRC allows organisations to manage these complex requirements and ensure they are compliant and operating efficiently.

The sheer number of documents and standards, along with the complexity of how they depend on and interact with each other, can make GRC daunting to administer. ServiceNow has simplified this process by structuring these activities in a logical framework.

Authority documents (like ISO 27000), internal policies and risk frameworks (like ISO 31000) represent a corporate library—the ideal state for the organisation. The question then becomes, how well does an organisation measure up to its ideals in terms of policies and risks?

ServiceNow addresses this by using profile types.

Profile types are a means of translating policies and risks into practice.

When profile types are applied to policy statements, they form the active controls for an organisation; that is, those controls from the library that are being actively monitored.

In the same way, when risks are applied to profile types, they form the risk register for the organisation. This is the definitive list of the specific risks that are being actively measured and monitored, as opposed to all risks.

This approach allows organisations to accurately measure their governance model and understand which areas they need to focus on to improve.

The metrics supporting GRC profile types can be gathered manually, via audit-style surveys of employees and third parties, or automatically, using information stored elsewhere within ServiceNow (such as IT Service Management or Human Resources). In addition, GRC compliance metrics for the various profile types can be gathered using orchestration and automation, and by integrating with other systems, to provide an accurate view of governance, risk and compliance.

If you would like to learn more about how ServiceNow can help your organisation manage the complexity of GRC, please speak to one of our account executives.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you manage your organizational risks.


Asset Management in ServiceNow

Effective ICT departments are increasingly reliant on solid software and hardware asset management, yet the concept can strike fear into the hearts of organisations, as the complexity and work involved can seem endless. Indeed, asset management can be like trying to reach the moon with a step ladder! New advances in ServiceNow seek to change that ladder into a rocket - here’s how.


Business Goals (Launch Pad): Truly understanding your business goals and processes is an important and often underrated start to successful asset management. Clarifying business requirements allows us here at JDS to recommend suitable approaches to customers and help realise benefits faster. A recurring challenge we address is reducing unnecessary costs in over-licensed software. Through the power of ServiceNow technology, we can help you automate the removal and management of licenses.

Accurate Data (Rocket Fuel): Accurate data is the fuel behind asset management. With asset data commonly scattered across multiple systems, trusted data sources are paramount, with organisations often reliant on them to track and extract information for reports and crucial decisions. JDS is experienced in tools such as ServiceNow Discovery and integrations with third-party providers like Micro Focus uCMDB, Microsoft SCCM and Ivanti LANDESK - proven solutions to assist management teams with data for business strategy. Using ServiceNow, we can help plan data imports/transformations to ensure information is accurate. This means asset information can be normalised automatically to improve data quality and ensure accurate reporting.

 

Asset Management on ServiceNow (Rocket Engine): With clear business goals and a focus on accurate data, ServiceNow has further capabilities to propel your asset management process, allowing customers to manage the full asset lifecycle. JDS champions this automation, with proven benefits including an entire lifecycle consolidated and managed from one location, greatly improving visibility and overall management.

 

Ongoing Management (Control Panel): With robust asset management implemented, customers need a suitable control panel to maintain momentum and drive continuous process improvement. Utilising a mix of existing ServiceNow and customised reports and dashboards, organisations are able to digest data like never before.

Example of Software Asset Management

Example of Hardware Asset Management

Our team is experienced in assisting customers to set up ongoing asset reviews and audits on the platform. One example of how we’ve customised this process is by automating regular asset stocktake tasks, which can then be monitored and reported on within ServiceNow.

Conclusion

Successful asset management can be a journey, yet it is well worth the effort, significantly improving processes and reducing operational costs. Our team are experts in devising solutions with customers to realise and maximise the value of efficient asset management, and to help firmly plant your organisation’s foot and flag on the moon! Need help planning and implementing a robust asset management solution? Reach out to us with your current challenges; we would love to help.


Understanding Database Indexes in ServiceNow

ServiceNow uses a MySQL database to manage its information, so whenever users look at a list of records, they’re looking at the results of a database query.

Like all databases, MySQL is designed to handle large volumes of information in the most efficient manner possible. To do this, MySQL has a concept known as indexes. These work very much like the index in the back of a book. Instead of flipping through every page looking for something, a quick trip to the index can let you find all the references for a subject in one spot. In the same way, table indexes increase the speed of database queries.

Real-world example

Here’s a real-world example of a user waiting upwards of a hundred seconds for the following query on a table with 200,000 rows.

On investigation, it became apparent that there was a database index that should cover this scenario, but it wasn’t being automatically picked up by ServiceNow because of the way indexes work.

Indexes are ONLY used if the columns within them are used sequentially, and this is an important point to understand.

Understanding index order

It doesn’t matter in which order the columns appear within the query itself, but they MUST occur in the order in which they were put into the index, read from left to right. For example, for this particular index, the following holds true.

Also, it’s important to note that key columns (like Assignment Group) probably occur in several indexes, so just because this particular index isn’t used doesn’t mean no index will be used. Another index might pick up where this one leaves off.

When a query or report is run, the database engine examines all the indexes on the table to determine which index is the most efficient to use.

All the reference fields used in ServiceNow have an index applied to them, but these aren’t always the most efficient way to query the database. For example, consider these two indexes.

  1. Assigned To, Active
  2. Active, Assignment Group, Assigned To, Number

The first index is the default index for the Assigned To column as a reference field. Although it filters on Active, that filter is only applied AFTER Assigned To.

The second index is the one we’re examining, but it can only be used if Assignment Group is also used (following the order of the index from left to right).

If you have a table with a million rows but only 20,000 of them are active, then which of these approaches will be more effective?

a) Select EVERY entry assigned to a person and only then filter on active records
b) Select ONLY active records and then filter on that person

As 98% of the records are inactive, option (b) will produce the best results as it will completely ignore 980,000 records before it starts filtering by name.

So in this case, querying by Active, Assignment Group and Assigned To will be considerably quicker than querying by Assigned To and then Active.

Going back to our real-world example, we can see that our index is NOT being used because the very first column in the index hasn’t been included, so none of the other parts of the index are being applied. By adding Active, we’ll bring this index into play.
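
In practice, that means leading the query with the Active condition. As a hedged sketch (the incident table and the placeholder sys_id values are assumptions for illustration; the real table and values depend on your instance):

// Before: only the default 'Assigned To, Active' index applies,
// scanning every record ever assigned to the user.
var before = new GlideRecord('incident'); // illustrative table
before.addQuery('assigned_to', '<user sys_id>'); // placeholder value
before.query();

// After: leading with Active (and Assignment Group) lets MySQL use the
// 'Active, Assignment Group, Assigned To, Number' index described above.
var after = new GlideRecord('incident');
after.addQuery('active', true);
after.addQuery('assignment_group', '<group sys_id>'); // placeholder value
after.addQuery('assigned_to', '<user sys_id>');
after.query();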

And the results?

We’ve gone from 100 seconds down to less than one second!

In this particular case, the results were exactly the same, as this person was only interested in active records anyway but forgot to include Active in their query. But what if we wanted both active and inactive results? Ah, this is where it gets interesting…

Again, by including Active we’re allowing the MySQL database to use this index and improve performance, and look at the results.

Rather than waiting almost two minutes, we have our results in under a second.

In theory, this query is now exactly the same as the original (which didn’t specify whether a record was active or not) and yet look at how much faster it is because the database can use this index.

Also, it’s interesting to note that, in the absence of this index, the database would have used index (1) above, but because Active is applied second in that index, it is grossly inefficient.

Trying this same approach on a table with 2.7 million rows, retrieving ALL the rows (with no query) took 21 seconds. Using the seemingly counterintuitive approach of "active=true or active=false" consistently reduced that to 13 seconds, while still returning all 2.7 million rows! That’s a reduction of 40% in the query time!

JDS doesn’t recommend using this "active=true or active=false" query; it’s included only as an example to help make the point. All list views should have a query behind them, and those queries should be based on the underlying database indexes.

The lessons from this are...

  • Don’t underestimate the importance of your database indexes.
  • Be sure to conduct an audit of your commonly used application modules and slow running transactions to make sure the queries in them leverage database indexes properly.

Find out more

To learn more about how JDS can optimize the performance of ServiceNow, contact our team today on 1300 780 432, or email [email protected].


Fast-track ServiceNow upgrades with the Automated Test Framework (ATF)

Why automate?

ServiceNow provides two releases a year, delivering new features to consistently support best-practice processes. ServiceNow has flagged it will move towards “N-1” upgrades in 2019, meaning every customer will need to accept each release in the future. To keep up, ServiceNow customers should consider automated testing, which provides regression testing completed consistently for each release and reduces the time and effort needed per upgrade.

Effective test automation on ServiceNow is provided by the Automated Test Framework (ATF), which is included in the ServiceNow platform for all applications. It enables no-code and low-code users to create automated tests with ease. Aided by complementary processes and methodology, ATF reduces upgrade pain by cutting back on manual tests, reducing business impact and accelerating development efficiency.

What is the Automated Test Framework?

Testing frameworks can be viewed as a set of guidelines used for creating and designing test cases. Test automation frameworks take this a step further by using a range of automation tools designed to make the practice of quality assurance more manageable for testing teams.

ServiceNow’s ATF application combines the above into an easy-to-use, easy-to-implement solution specifically for ServiceNow applications and functionality. Think of ATF as a tool to streamline your upgrade and QA processes by building tests that check whether software or configuration changes have ‘broken’ any existing functionality. It also means developers are no longer required to undertake invasive activities like code refactoring to generate new test cases.

Overview of test creation in ServiceNow ATF

ATF’s unique features include:

  • Codeless setup of test steps and templates (reusable across multiple tests)
  • Testing the Service Catalog end-to-end, including submitting catalog forms and request fulfilment
  • Module testing, e.g. ITSM (Incident, Problem & Change Management)
  • Testing forms in the Service Portal (included in the London release)
  • Batching with test suites, e.g. executing multiple tests in a defined order, triggered on a schedule
  • Custom step configurations & unit testing using the Jasmine test framework (see the sketch below)

Using the codeless test step configuration in ATF to set a Variable value on a Service Catalog form
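
For checks beyond the codeless steps, ATF also accepts short server-side scripts. As a minimal sketch: the wrapper signature below matches the template ServiceNow generates for a ‘Run Server Side Script’ step, while the table, query value and assertEqual usage are illustrative assumptions that may vary by release.

(function(outputs, steps, stepResult, assertEqual) {
    // Minimal sketch: verify that an earlier test step created our test incident.
    var gr = new GlideRecord('incident'); // illustrative table
    gr.addQuery('short_description', 'ATF smoke test'); // illustrative value
    gr.query();
    // assertEqual fails the step if 'value' does not match 'shouldbe'
    assertEqual({name: 'test incident exists', shouldbe: true, value: gr.hasNext()});
    stepResult.setOutputMessage('Incident existence check complete');
})(outputs, steps, stepResult, assertEqual);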

Simplify your upgrades

For most software upgrades, testing is a burdensome and complicated process, with poorly defined test practices often leading to compromised software quality and potentially, failed projects. Without automation, teams are forced to manually test upgrades - a costly, time-consuming and often ineffective exercise. In most cases, this can be attributed to a lack of testing resources and missing test processes, leading to inconsistent execution.

To address these issues, ATF introduces structure and enforces consistency across the different tests applied to common ServiceNow use cases. Test consistency is important, as it forms a baseline of instance stability against existing issues, meaning defects caused by upgrades can be more reliably identified. ATF allows for full automation of test suites, where the run order can be configured for tests that depend on each other.

The following illustrates a simple test dependency created in ATF:

An example ‘hierarchical’ test structure implemented within ATF

A common issue resolved by automated testing is the impact on Agile development cycles. Traditionally, developers would run unit tests by hand, based on steps also written by the developer, potentially leading to missed steps and incomplete scenarios. ATF can be used to run automated unit tests between each build cycle, saving time through early detection and remediation of software issues, and shortening development cycles. The fewer issues present before the upgrade, the fewer issues introduced during the upgrade.

A common requirement of post-upgrade activities is to execute a full regression test against the platform. This means accounting for the countless business rules, client scripts, UI policies and web forms that may have been affected. Rather than factoring all of these scenarios into lengthy UAT/regression cycles, ATF can reduce the load on the business by running the common and repetitive test cases involving multiple user interactions and data entry.

The example below shows how a common testing use case, field ‘state’ validation on a Service Portal form, is applied using the test step module:

Validating visible & mandatory states of variables with ATF against a Record Producer in Service Portal

Unfortunately, not everything can be automated with ATF out of the box; there are some gaps resulting from complex UI components, portal widgets and custom development work. It is also important to note that custom functionality adds overhead to the framework implementation, requiring specialised scripting knowledge and use of the ‘Step Configurations’ module to create custom test steps.

When configured properly, it’s possible to automate between 40% and 60% of test cases with ATF, depending on environment complexity and timeframes. The benefits are largely seen during regression testing post-upgrade, and unit testing during development projects.

In summary, implementing ATF is a great way to deliver value to ServiceNow upgrade projects and enable development teams to be more agile. Assessing the scope and depth of testing using an agreed methodology is a great way to determine what is required, and to achieve ‘buy-in’ from others, rather than taking an ‘automate everything’ approach.

JDS is here to help

Recognised as experts in the local Test Automation space for over 12 years, JDS' specialist team has adapted this experience to provide a proven framework for ServiceNow ATF implementation.

We have developed a Rapid Automated Testing tool which can use your existing data to take up to 80% of the work out of building your automated tests. Contact us today to find out how we can build test cases based on real data and automate the development of your testing suite in a fraction of the time that manual test creation would require.

We can help you get started with ServiceNow ATF. In just a few days, a ServiceNow ATF “Kick Start” engagement will provide you with the detail you need to scope and plan ATF on your platform.

Conclusion

Want to know more? Email [email protected] or call 1300 780 432 to reach our team.
