Category: Blog

Virtual Agent Is Your Friend

Don’t underestimate the importance of user satisfaction

If there’s one defining characteristic of the social media revolution it’s “make life easy.”

Why did Facebook win out over MySpace? Facebook made it easy to connect, easy to post, easy to find people, easy to interact.

Amazon, Google, Twitter and Facebook have spent the last decade refining their technology to lower the barrier to entry for users, making their web applications highly accessible. Have you ever wondered why Google shows only the first ten entries for each search when it could show twenty, fifty or a hundred? Google found that 10 results returned in 0.4 seconds while 30 results took 0.9 seconds, and that extra half-second led to a 20% loss of traffic because users were impatient. User satisfaction is the golden rule of online services, and so 10 results per page is now standard across search engines, even though nowadays the difference is probably much smaller.

When it comes to ServiceNow, organisations should focus on user satisfaction as a way of increasing productivity. ServiceNow allows organisations to treat both their internal staff and their customers with respect, offering services that are snappy, intelligent and well designed. To this end, ServiceNow has developed a number of offerings including Virtual Agent.

What is Virtual Agent?

To call Virtual Agent a chat-bot is to sell it short. Virtual Agent is a channel for users to quickly and easily get answers to their questions. It is a state-of-the-art system that leverages Natural Language Understanding (NLU) and a complex, decision-based response engine to meet a user’s expectations without wasting their time.

The Natural Language Understanding machine-learning engine used by ServiceNow is trained to understand conversational chat using Wikipedia and The Washington Post, and can be enhanced with organisation-specific words and phrases. Natural Language Understanding is the gateway for users to reach a catalogue of prebuilt workflows that resolve common issues.

The Virtual Agent Designer allows for sophisticated workflows with complex decision-making. Not only does this reduce the burden on first-level support, it also drastically reduces the resolution time for common issues, raising users’ satisfaction with the services provided by your organisation.

But the real genius behind Virtual Agent is that it can be run from ANYWHERE.

A common problem experienced by organisations with ServiceNow is managing multiple corporate websites. The ServiceNow self-service portal can be seen by some users as yet another corporate web instance and a bridge too far, reducing the adoption of self-service. To combat this, ServiceNow allows its Virtual Agent to be deployed ANYWHERE. As an example, it’s on this WordPress page! Go ahead, give it a try. As soon as you click on “chat”, you’re interacting with the JDSAustraliaDemo1 instance of ServiceNow!

By allowing the Virtual Agent to run from anywhere, customers can incorporate ServiceNow functionality into their other websites, giving users easy access to the services and offerings available through ServiceNow.

Keep your users happy. Start using Virtual Agent.

Introducing the LoadRunner family for 2020

Micro Focus LoadRunner has long been the industry standard and leading solution for performance testing, with LoadRunner (on-premise), Performance Center (enterprise) and StormRunner Load (cloud). As we move into 2020, Micro Focus has standardised these solutions under the LoadRunner banner and re-architected some of them. Meet the new family:

  • LoadRunner Professional, formerly LoadRunner: an on-premise solution for small teams conducting performance tests.
  • LoadRunner Enterprise, formerly Performance Center: aimed primarily at enterprise deployments where multiple performance tests are run in collaboration.
  • LoadRunner Cloud, formerly StormRunner Load: a cloud-based, scalable solution for performance testing.

DevWeb (renamed from TruWeb) is now a fully supported VuGen protocol.

 

Feature Highlights for 2020

LoadRunner Professional 2020

  • Improved Protocol support in DevWeb, TruClient, Web Services and SAP – Web
  • Modernized Controller Online Graphs

LoadRunner Enterprise 2020

  • Support for Elastic Cloud based load generators
  • SSO Authentication
  • ALM Database decoupling

LoadRunner Cloud 2019.12

  • New analysis module enhancements
  • Pause scheduling during a load test run
  • Goal oriented mode enhancements
  • Support for DevWeb scripts
  • Automatic syncs of GIT scripts
  • Enhanced Azure DevOps integration
  • New public API operations

 

LoadRunner 2019.12

With the LoadRunner Cloud team embracing a continuous delivery approach, version numbers are now based on the year and month the update is delivered. This release is 2019.12 (December 2019). With quarterly product updates and releases planned, the next update is expected in 2020.02 (February 2020).

The latest release adds features to the runtime view for tests, improves the DevOps integration, and formalises DevWeb (formerly TruWeb) as a supported protocol.

A brief summary is provided below:

 

A few of the most interesting and visible changes are:

  • Transaction response time breakdown metrics are now available in the real-time dashboards

  • Azure DevOps Integration

LoadRunner Cloud now integrates with your Azure DevOps pipelines, providing a summary report for a quick view and a detailed report for deeper analysis. When using the LoadRunner Cloud integration with Azure DevOps, on completion of a task you can view a new artifact published on the Summary tab and a brief report on the LoadRunner Cloud tab.

 

JDS consultants have more than 15 years of experience working with LoadRunner, and JDS is a Platinum partner of Micro Focus in Australia. Combined with our strong experience in DevOps lifecycle management, this gives JDS an edge in performance testing that is unmatched by other performance testers. If you are in the process of introducing a new system or application, make sure you schedule a performance test with JDS.

Conclusion

Contact our Micro Focus & DevOps team today on 1300 780 432 or at [email protected].

Our team on the case

Other Micro Focus stories

Using Common Functions in the Service Catalog

ServiceNow’s service portal offers a lot of flexibility for customers wanting to offer complex and sophisticated offerings to their users. Catalog client scripts can run on load, on change and on submit, but often there’s a need for a common library of functions to be shared by these scripts (so they’re maintained in just one place and produce consistent results).

For example, in this case, if the start date, end date or the SAP position changes, the same script needs to run to calculate who the approvers are for a particular request.

Rather than having three separate versions of the same script, we want to be able to store our logic in one place. Here’s how we can do it.

 

Isolate Script

The latest versions of ServiceNow (London, Madrid, etc.) allow scripts to be isolated or not, giving ServiceNow admins the option of either protecting (isolating) their scripts or accessing broader libraries. In practice, this can be a little frustrating to implement, so in our example we’ll use an alternative method to introduce external JavaScript libraries.

 

UI Scripts

UI scripts, like the one listed below, are very powerful, but they’re also very broad, being applied EVERYWHERE and ALWAYS, so we’ll tread lightly and simply add a function that sets up the DOM for access from our client scripts.
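
As a rough sketch (the exact script from the original post isn’t reproduced here), a global UI script along these lines does the job. Only myWindow is referenced later; myDocument and myAngular are illustrative names of our own, and the globalThis fallback is only there so the snippet also runs outside a browser:

```javascript
// Global UI Script: runs on every page, so keep it tiny.
// It simply publishes references that catalog client scripts can use.
// (The globalThis fallback lets the snippet run outside a browser too.)
var myWindow = (typeof window !== 'undefined') ? window : globalThis;
var myDocument = myWindow.document; // the page's DOM
var myAngular = myWindow.angular;   // the Angular object used by the portal
```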

As you can see, we now have some variables we can reference to give us access to the document object, the window object and the angular object from anywhere within ServiceNow.

In theory, we could attach our SAP position script here and it would be accessible, but it would also be loaded on EVERY page ServiceNow loads, which is not good. What we want is a global function accessible only from WITHIN our catalog item, so we’ll put it in an ON LOAD script using our new myWindow object.

The format we’re using is:

myWindow.functionName = function() {
    console.log('this is an example');
};

This function can then be called from ANYWHERE within our catalog item (on change or on submit). Also, notice the semicolon at the end of the window function. Don’t forget it; it’s important because we’re assigning to an object.

Now, though, any time we want to call that common function, we can do so with a single line of code.
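
For example, an onChange catalog client script collapses to something like this (functionName here stands for whatever you attached to myWindow):

```javascript
// onChange catalog client script: delegate to the shared function
// that the ON LOAD script attached to myWindow.
function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === '') {
        return;
    }
    myWindow.functionName(); // the single shared implementation
}
```

Inside the catalog item, the call itself really is the single line myWindow.functionName();, however many client scripts share it.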

 

Following this approach makes maintenance of the logic used by the approval process easy to find and alter going forward.

Conclusion

To learn more about how JDS can optimize the performance of ServiceNow, contact our team today on 1300 780 432, or email [email protected].

Our team on the case

Our ServiceNow stories

Finding Exoplanets with Splunk

Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices, etc, but there’s no limit to the complexity of data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around another star.

The first exoplanet confirmed around a Sun-like star was discovered in 1995 orbiting the star 51 Pegasi, which makes this an exciting, still-emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were super-hot Jupiters, enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered of all sizes and out to distances comparable with planets in our own solar system. We have even discovered exomoons!

 

How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

The dip in starlight intensity can be just a fraction of a percent, but it’s enough to signal that a planet is transiting the star.

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, after (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun, he was ordered to begin a new search for the Great Southern Land we now know as Australia. Cook’s observation of the transit of Venus used largely the same technique as NASA’s Hubble, Kepler and TESS space telescopes, albeit on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may contain life!

How can we use Splunk to find an exoplanet?

Science thrives on open data.

All the raw information captured by both Earth-based and space-based telescopes like TESS is publicly available, but there’s a mountain of data to sift through, and it’s difficult to spot needles in this celestial haystack, making it an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, and specifically the PhotoMetric data containing 642 light curves to look for exoplanets. I used wget in Linux to retrieve the raw data as text files, but it is possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

Some examples of API queries that can be run against the MAST are:

The raw data for a given observation appears as:

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored either using HJD (Heliocentric Julian Dates) or BJD (Barycentric Julian Dates) centering on the Sun (with a difference of only 4 seconds between them).

Some observatories will use MJD which is the Modified Julian Date (being the Julian Date minus 2,400,000.5 which equates to November 17, 1858). Sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - which is expressed as days, with the values beyond the decimal point being the fraction of that day when the observation occurred
  • Normalized Flux - which is the apparent brightness of the star
  • Normalized Flux Uncertainty - capturing any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant it can be ignored).
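
As an aside, extracting those three columns is simple pattern matching. Here’s a hypothetical sketch in JavaScript (outside Splunk, using essentially the same pattern as the rex extraction used later) that skips the metadata lines and keeps only the data rows:

```javascript
// Hypothetical light-curve parser: ignore the metadata/header lines and
// keep only rows that start with a Julian Date (in the 2.4 million range),
// followed by the flux and flux-uncertainty columns.
function parseLightCurve(text) {
  return text
    .split('\n')
    .map(function (line) {
      var m = line.match(/^\s*(24\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)/);
      return m && {
        hjd: parseFloat(m[1]),        // Heliocentric Julian Date
        flux: parseFloat(m[2]),       // normalised flux
        uncertainty: parseFloat(m[3]) // normalised flux uncertainty
      };
    })
    .filter(Boolean);
}
```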

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC and are represented by numbers in the millions, like 2,455,059.6261813, where the integer is the days elapsed since then and the decimal fraction is the portion of the day. Since 0.00001 of a day is 0.864 seconds, multiplying the fraction by 86,400 gives the seconds elapsed since noon on any given Julian day. Confused? Well, your computer won’t be, as it loves working with decimals and fractions, so although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian date for the start of Epoch on CE 1970 January 1st 00:00:00.0 UT is 2440587.500000
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.1261813
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch - floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.
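
The same steps can be expressed in a few lines of code. This JavaScript sketch (the function name is our own) reproduces the worked example:

```javascript
// Convert a Heliocentric Julian Date to Unix epoch milliseconds.
// 2440587.5 is the Julian Date of 1970-01-01 00:00:00 UTC; the .5
// absorbs the half-day shift from Julian noon to midnight.
function hjdToEpochMillis(hjd) {
  var MS_PER_DAY = 86400000;
  var days = hjd - 2440587.5;                    // Any-Julian-Date-minus-Epoch
  var epochDay = Math.floor(days) * MS_PER_DAY;  // whole days, in ms
  var epochTime = Math.floor((days - Math.floor(days)) * MS_PER_DAY); // day fraction, in ms
  return epochDay + epochTime;                   // Observation-Epoch-Day-Time
}
```

Calling hjdToEpochMillis(2455059.6261813) returns 1250391702064, matching the manual calculation.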

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000))
EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))
EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"
| rex field=_raw "\s+(?P<exo_HJD>24\d+.\d+)\s+(?P<exo_flux>[-]?\d+.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+.\d+)"
| timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if they’re orbiting on the same plane we’re observing. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath depending on your vantage point). With the next generation of space telescopes, like the James Webb, we’ll be able to see these in greater detail.

 

Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now we’ve brought data into Splunk we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometer Array (SKA), which comes on line in 2020, will create more data each day than is produced on the Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude and duration of their transits to identify them individually. Over billions of years, the motion of planets around a star falls into a pattern known as orbital resonance, which can be predicted and tested in Splunk to distinguish between planets, and even used to predict undetected planets!

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.

Our team on the case

Our Splunk stories

ServiceNow Archiving

Picture this: you've had ServiceNow in your organisation for a couple of years, and you’re starting to notice older records accruing, perhaps to the point where performance isn’t what it used to be. Not all of us have the processing power of the Matrix to handle everything at once!

Forgotten records are not an isolated problem for businesses, but they’re an easy issue to address, especially if you want to improve instance speed and efficiency. Unlike overflowing filing cabinets in your office, the impact is easily missed until problems arise. Now is as good a time as ever to embrace the bright start of the new year and consider refreshing your data to improve system performance.

ServiceNow can help with the System Archiving application. This allows you to configure archival and disposal rules specific to your business recordkeeping.

Archiving

Archiving data simply moves data to an archive table using rules configured to meet your retention requirements. These rules automatically run, ensuring data refresh is ongoing.  

In the example below, an SME business has over 5000 incident records, and they’re keen to automatically archive incidents that are inactive and were closed over a year ago.

 

The key point to remember is to not archive records which are likely to be required for reporting. While archived records can be restored to their original tables, they are not designed to be reported on nor are they optimised for searching.

Disposal

Now our SME has archival rules, their next step would be to review the disposal rules. As with hard copies, no one wants records sitting in archive for all eternity. That’s a lot of filing cabinets!

Ideally, our SME would work closely with record managers and data owners to come to an agreement on when records can safely be disposed of. For example, it could be agreed that 2 years after the archival date of an incident record, it can safely be disposed. Government retention requirements are often 5 to 7 years for sensitive data, and when that date rolls around, disposal rules can automatically rid you of the load.

Conclusion

Is 2019 the year you consider the automated refresh of ServiceNow records for your business? JDS can work with you to review your ongoing needs and help determine safe archival and disposal rules that suit.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you.

Our team on the case

Our ServiceNow stories

Governance, Risk & Compliance

ServiceNow has implemented Governance, Risk and Compliance (GRC) based on the OCEG (Open Compliance & Ethics Group) GRC Capability Model.

What is GRC?

  • Governance allows an organisation to reliably achieve its objectives
  • Risk addresses uncertainty in a structured manner
  • Compliance ensures business activities are undertaken with integrity

Whether organisations formally recognise GRC or not, they all need to undertake some form of governance over their business activities or they will not be able to reliably achieve their goals.

When it comes to risk, recognising and addressing uncertainty ensures the durability of an organisation before it is placed in a position where it is under stress. Public and government expectations are that organisations will act with integrity; failure to do so may result in a loss of revenue, loss of social standing and possibly government fines or loss of licensing.

Governance, Risk and Compliance is built around the authority documents, policies and risks identified by the organisation as important.

Depending on the industry, there are a number of standards authorities and government regulations that form the basis for documents of authority, providing specific compliance regulations. ISO (the International Organization for Standardization) has established quality assurance standards such as ISO 9000, risk management frameworks such as ISO 31000, and the ISO 27000 standards for information security management.

In addition to these, various governments may demand adherence to standards developed to protect the public, such as Sarbanes-Oxley (to protect investors against possible fraud), HIPAA (the US Health Insurance Portability and Accountability Act of 1996) and GDPR (the European Union’s General Data Protection Regulation). ServiceNow’s GRC allows organisations to manage these complex requirements and ensure they are compliant and operating efficiently.

The sheer number of documents and standards, along with the complexity of how they depend on and interact with each other, can make GRC daunting to administer. ServiceNow has simplified this process by structuring these activities in a logical framework.

Authority documents (like ISO 27000), internal policies and risk frameworks (like ISO 31000) represent a corporate library—the ideal state for the organisation. The question then becomes, how well does an organisation measure up to its ideals in terms of policies and risks?

ServiceNow addresses this by using profile types.

Profile types are a means of translating policies and risks into practice.

When profile types are applied to policy statements, they form the active controls for an organisation: that is, those controls from the library that are being actively monitored.

In the same way, when risks are applied to profile types, they form the risk register for the organisation. This is the definitive list of those specific risks that are being actively measured and monitored, as opposed to all risks.

This approach allows organisations to accurately measure their governance model and understand which areas they need to focus on to improve.

The metrics supporting GRC profile types can be gathered manually via audit-styled surveys of employees and third-parties, or in an automated fashion using information stored elsewhere within ServiceNow (such as IT Service Management or Human Resources). In addition to this, GRC compliance metrics for the various profile types can be gathered using orchestration and automation, and by integrating with other systems to provide an accurate view of governance, risk and compliance.

If you would like to learn more about how ServiceNow can help your organisation manage the complexity of GRC, please speak to one of our account executives.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you manage your organizational risks.

Our team on the case

Our ServiceNow stories

Asset Management in ServiceNow

Effective ICT departments are increasingly reliant on solid software and hardware asset management, yet the concept can often strike fear into the hearts of organisations as the complexity and work involved can seem endless. Indeed, Asset Management can be like trying to reach the moon with a step ladder! New advances in ServiceNow seek to change that ladder into a rocket - here’s how.


Business Goals (Launch Pad): Truly understanding your business goals and processes is an important and often underrated start to successful asset management. Clarifying business requirements allows us here at JDS to recommend suitable approaches to customers and help realise benefits faster. A recurring challenge we address is reducing unnecessary costs in over-licensed software. Through the power of ServiceNow technology, we can help you automate the removal and management of licenses.

Accurate Data (Rocket Fuel): Accurate data is the fuel behind asset management. With asset data commonly scattered across multiple systems, trusted data sources are paramount; organisations often rely on these to track and extract information for reports and crucial decisions. JDS is experienced in tools such as ServiceNow Discovery and integrations with third-party providers like Micro Focus uCMDB, Microsoft SCCM and Ivanti LANDESK, proven solutions to assist management teams with data for business strategy. Using ServiceNow, we can help plan data imports and transformations to ensure information is accurate. This means asset information can be normalised automatically to improve data quality and ensure accurate reporting.

 

Asset Management on ServiceNow (Rocket Engine): With clear business goals and accurate data in place, ServiceNow has further capabilities to propel your asset management process. Customers can manage the entire asset lifecycle, consolidated in one location, which greatly improves visibility and overall management. JDS are champions of this automation process and its proven benefits.

 

Ongoing Management (Control Panel): With robust asset management now implemented, customers need a suitable control panel to maintain momentum and drive continuous process improvement. Utilising a mix of out-of-the-box ServiceNow reports and customised reports and dashboards, organisations can digest data like never before.

Example of Software Asset Management

Example of Hardware Asset Management

Our team here are experienced in helping customers set up ongoing asset reviews and audits on these platforms. One example of how we’ve customised this process is by automating regular asset stocktake tasks, which can then be monitored and reported on within ServiceNow.

Conclusion

Successful asset management can be a journey, yet it is well worth the effort, significantly improving processes and reducing operational costs. Our team are experts in devising solutions with customers to realise and maximise the value of efficient asset management, and to help firmly plant your organisation’s foot and flag on the moon!

Conclusion

Need help in planning and implementing a robust asset management solution? Reach out to us with your current challenges, we would love to help.

Our team on the case

Our ServiceNow stories

5 simple tips for staying healthy on call

Like it or not, 100% availability is the expectation of most customers when accessing web services in 2018. There have never been as many people online as there are now, and people are less patient than ever if there is a delay or disruption in service.

This demand for fast and uninterrupted access means that IT consultants and DevOps engineers are often on call outside of business hours to ensure that any major incident is responded to immediately—even when it happens in the middle of the night.

Being on call is part of life for techies all around the world, but there are potential health and wellbeing issues that go along with this practice that managers and employees should be aware of. Here are a few statistics from the recently released State of IT Work-Life Balance report. We’ll break some of these down more in the tips section below:

  • More than half of IT professionals surveyed experience sleep and/or personal life interruptions...more than 10 times per week.
  • More than 94 percent of IT professionals believe personal life and sleep interruptions impact their work time productivity.
  • Ninety-four percent of IT professionals said the responsibility for the management of always-on digital services impacts their family life—one in four said it was enough to make their job unmanageable at times.
  • Seventy-two percent of IT professionals indicated their managers have little to no visibility in knowing when they are experiencing a difficult on-call period.

One of our technology partners, PagerDuty, specialises in getting the right on-call employees on a major incident at the right time. They have also compiled an open-source collection of resources for IT managers who want to gauge the health and wellbeing of their IT staff, particularly those who do shifts on call.

From their website:

PagerDuty’s Operations Health Management Service is the first industry offering that provides telemetry about the health and well-being of people in ITOps. Business and technical leaders gain a profound understanding of their operations infrastructure and specific recommendations for improvement as seen through the lens of their people’s health. Using this service, enterprises that implement these recommendations can achieve true HybridOps—resulting in happier employees, higher retention rates, and measurably improved digital service delivery.

This is a fantastic resource for IT managers, business managers, and HR managers who want to track and act on data about their employees’ wellbeing and work-life balance. But if you’re an on-call employee yourself, what are some practical steps you can take to improve and maintain good wellbeing?

Here are some of our top tips:

1. Eat a healthy diet

It can be easy to grab fast food instead of cooking when you’re wondering if the phone will ring at any moment, but eating healthy food makes a massive difference in your health and wellbeing. Everybody has a different interpretation of what a healthy diet looks like, but if you try to skip the local drive-through and eat your greens, you are bound to notice a difference.

2. Minimise alcohol and caffeine intake

In addition to eating healthy, don’t forget to stay hydrated. Late-night soft drinks, coffee, tea, beer, and wine are all enjoyable, but too much of any of them while you’re on call can make your shift more painful. Over-indulging can also prevent you from doing your best work in a major incident, so it’s best to keep the water flowing and your head clear, and save the other drinks for your night off.

3. If you’re on call overnight, go to bed early

Everyone dreads that 3 a.m. wake-up call, letting you know there’s a service outage that demands your immediate attention. More than half the people surveyed for the State of IT Work-Life Balance report said their sleep was interrupted more than ten times per week to address an outage or disruption. If you know your sleep is likely to be interrupted, make sure you wind down for the day and turn the lights off early to give yourself the best chance of getting solid rest.

4. Make sure you’re happy with the policies at your company

Before you start an on-call position, find out what the policies are for on-call employees in your company. Being on call is not the same thing as being at your desk. You’ll want to make sure you’re being remunerated appropriately for on-call work, whether that means time in lieu, additional pay, or late starts. You should also be able to make time for your personal life, even when you’re on call, and know where you can draw the line if you’re being asked to do too much. If there are no policies in place, ask your boss or HR manager if you can set boundaries of your own.

5. Provide visibility of your work to your management

Often, when you’re going through a particularly tough period of on-call work, it can be quite isolating. Nearly three-quarters of the IT professionals surveyed indicated their management had little or no visibility of whether they were experiencing a difficult on-call period—and the response from IT managers was much the same.

PagerDuty provides a simple Operations Health Management Service to give IT managers better visibility of the wellbeing of their on-call employees. If you’re interested in implementing this service at your organisation, contact JDS today to find out how we can assist as one of the leading PagerDuty partners in ANZ.

Try it for free

Haven’t yet tried PagerDuty? Register for a two-week free trial today and we’ll help ensure you embrace all the functionality this excellent major incident response tool has to offer.

Our PagerDuty stories

Improve customer experience with Modern Incident Response

Time is the enemy. This could be said for many things in life, but for businesses that are experiencing a disruption or degradation that impacts their ability to operate, every second can feel like a nightmare. When you’re trying to get digital or physical goods in the hands of your customers, experiencing a business-critical IT issue can have significant impacts on your bottom line—not to mention your Service Level Agreements.

Having a modern incident response system in place can turn days of mean time to repair into minutes—that’s PagerDuty’s specialty.

More than ever, organisations need a way to instantly spin up a precise multi-team, business-wide response for any type of incident. When issues requiring real-time action aren’t handled optimally, the result is a lack of ownership, prioritisation, and alignment during critical response, when every second counts.

You need a solution that will accelerate the speed of resolution for both unexpected disruptions and opportunities.

By automating the process and orchestrating only the right individuals required for a response, teams are empowered to focus on the most meaningful work and minimise errors when it matters most for the business.

PagerDuty incident response notifies the right people in your team immediately so they can quickly assess, triage, and resolve issues when they occur. Implementing this tool with the expert assistance of JDS consultants—who leverage their cross-industry experience and knowledge of ITSM and ITOM—will ensure you meet SLAs and best practice standards for IT management.
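To give a concrete flavour of how an alert source raises an incident programmatically, the sketch below uses PagerDuty’s public Events API v2 (a POST to its enqueue endpoint with a routing key and an event payload). This is an illustrative sketch in plain Python, not part of the JDS offering described above; the integration key, summary text, and host name are placeholders.

```python
import json
import urllib.request

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_trigger_event(routing_key, summary, source, severity="critical"):
    """Build a PagerDuty Events API v2 'trigger' payload.

    routing_key is the integration key from a PagerDuty service
    (the value used below is a placeholder, not a real key).
    """
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,    # appears as the incident title
            "source": source,      # the affected host or service
            "severity": severity,  # critical | error | warning | info
        },
    }

def send_event(event):
    """POST the event to the Events API; returns the parsed JSON response."""
    req = urllib.request.Request(
        PAGERDUTY_EVENTS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example: trigger an incident for a hypothetical failing checkout service.
event = build_trigger_event(
    routing_key="YOUR_INTEGRATION_KEY",  # placeholder
    summary="Checkout service returning HTTP 500s",
    source="checkout-prod-01",
)
# send_event(event)  # uncomment once a real integration key is in place
```

Once the event is accepted, PagerDuty applies the service’s escalation policy to notify the right on-call responders.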

But don’t just take our word for it. We partnered with online betting platform William Hill to achieve 100% availability of its online betting services, putting it ahead of the competition in ensuring the highest levels of customer service by preventing downtime. You can read more about that here.

For most organisations, 99.9% availability is adequate, but not for us, we want 100% uptime and that’s why we’ve put PagerDuty at the heart of our digital transformation and cloud migration strategy.

Alan Alderson, Head of Infrastructure and Operations, William Hill Australia

With the capabilities of Modern Incident Response delivered by JDS and PagerDuty, IT teams benefit from sophisticated response automation capabilities, integrated triage and ITSM workflows, stakeholder communication, streamlined learning through integrated postmortems, and much more. By identifying and automating best practices, teams eliminate chaos in resolving and preventing future issues, while reducing MTTA by up to 90% and MTTR by 50% to 75%.

Try it for free

To learn more about Modern Incident Response, contact us today. Want to try it out for yourself? Sign up for a free 14-day trial of PagerDuty here! We’ll even talk you through how to set it up so you can make the most out of your two-week trial.


Visualise and consolidate your data with SAP PowerConnect for Splunk

SAP systems can be complex. If you have multiple servers and operating systems across different data centres and the cloud, it’s often difficult to know what’s happening under the hood. SAP PowerConnect for Splunk makes it easier than ever to get valuable insight into your SAP system, using its intuitive visual interface and consolidated system data.

Leverage the power of Splunk’s intuitive interface

Traditionally, visualising data using SAP has been a challenge. Inside SAP the data is aggregated, so detail is not available after a few days or weeks—assuming historical records are even being kept. The data that is available is often displayed in tabular format, which needs to be manipulated to provide any meaningful analysis. Visualisations within SAP are limited, and inconsistent across platforms.

A key benefit of SAP PowerConnect is its highly visual interface, which allows your IT operators and executives to see what’s going on in your SAP instance at a glance. Once the data is ingested, you have the full visualisation power of Splunk at your disposal. You can also combine other data from Splunk to create correlations and see visual trends between different systems.

The PowerConnect solution provides more than 50 pre-built dashboards showing you the status of everything from individual metrics to global system trends. You also have the option to collect and display any custom metrics via a function module that you develop.

Whether you want to visualise HANA DB performance or global work processes, you can visually track your data to clearly see trends, spikes, and issues over time.

To illustrate the improved visualisation PowerConnect has over native SAP, compare the following:

This screen shows standard performance metrics from within SAP:

SAP Data PowerConnect

SAP PowerConnect for Splunk converts this tabular data and presents it on a highly visual dashboard:

PowerConnect for Splunk

Bring all your data into one place

By using PowerConnect to consolidate your SAP system data within Splunk, you will discover a wealth of information. Detailed SAP data relating to specific servers, processes, and users will be presented in a range of targeted dashboards. You can see live performance metrics, or look back at trends with a high level of granularity from the past months or years on demand.

With your SAP data available in Splunk, you can:

  • Configure custom alerting based on known triggers, or a combination of metrics from multiple servers or processes
  • Be alerted when you hit a certain user load, or when a custom module encounters an error
  • Schedule custom reports to provide a summary of performance, from high-level trends right down to low-level CPU metrics
  • Monitor and regularly report on key performance indicators, visualising historical trends over time, including important events during the year
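To give a flavour of the first two bullets, here is a minimal sketch in Python of the kind of combined-metric alert rule you could configure in Splunk once your SAP data is flowing in. The field names, host names, and thresholds are illustrative only, not real PowerConnect fields.

```python
# A toy alert rule: fire when at least `min_breaching` servers exceed a
# CPU limit, or when any server reports a work-process error.
# (Illustrative metric names; not actual PowerConnect field names.)

def should_alert(samples, cpu_limit=90.0, min_breaching=2):
    """Return True when the combined condition across servers is met."""
    breaching = [s for s in samples if s["cpu_pct"] > cpu_limit]
    errors = [s for s in samples if s.get("wp_errors", 0) > 0]
    return len(breaching) >= min_breaching or bool(errors)

# One polling interval's worth of hypothetical metrics:
samples = [
    {"host": "sap-app-01", "cpu_pct": 95.2, "wp_errors": 0},
    {"host": "sap-app-02", "cpu_pct": 91.7, "wp_errors": 0},
    {"host": "sap-db-01",  "cpu_pct": 40.1, "wp_errors": 0},
]
print(should_alert(samples))  # two app servers breach the CPU limit -> True
```

In practice this logic would live in a scheduled Splunk search rather than application code, but the idea is the same: one rule can correlate metrics across multiple servers and processes instead of alerting on each in isolation.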

Often different teams within an enterprise will each have a tool of choice to monitor their part of the landscape. This may let the team work efficiently, but can hamper knowledge sharing across the organisation. This is especially true with complex tools, where unintended misuse can cause issues, so access is often held back. PowerConnect opens up access to your SAP data by making it available to everyone from your basis team to your executives, and removes the silos.

You can further enrich your SAP PowerConnect data by combining it with data from non-SAP or third-party applications within your organisation. For example, if your SAP system makes calls to a web service that you manage, you can use Splunk to consume available data to help triage during issues, or complete the picture of what your users are doing on the system. This provides an even deeper level of correlation between applications and business services.

Leveraging the power of Splunk and SAP PowerConnect, you will have a consolidated view of your enterprise operational data, with open and accessible information displayed in a highly visual interface.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key enabler for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect." JDS and Splunk are co-hosting three events across Australia throughout the month of March. Each event will open at 5pm, with the presentation to take place from 5.30-6.30pm. Light canapés and beverages will be served.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.

Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email [email protected].

Our team on the case

Our Splunk stories

Breaking down silos to create an enterprise capability

Traditionally, IT manages services in silos (networks, servers, databases, security, etc), which inevitably leads to an incomplete picture when things go wrong. Each team has a piece of the puzzle, but no one can see the picture on the box. Lack of visibility is a significant risk for companies, particularly those businesses with complex solution stacks and thousands of users. Breaking down silos in your business is an essential part of good IT management.

Correctly implemented, IT Operations Management allows for the rapid triage of outages and service degradation. According to Monash University, regardless of industry, workers with six to ten years’ experience in a given field will spend most of their time trying to understand the cause of a problem. Few if any IT organisations calculate their Mean Time To Resolution (MTTR), but if they did, the research from Monash and RMIT suggests 45% of that time would be spent simply trying to isolate the cause of the problem.

A variety of solutions exist to help with this issue. Dashboard technology continues to advance each year, providing executives with a single pane of glass view of their business enabling IT services in real time. Artificial intelligence in IT operations is on the rise, and can be leveraged both with your internal IT users and your customers to anticipate behaviour and provide solutions in the event of an error.

The best IT management solutions focus on leveraging your existing software investments and integrating with your business processes. ServiceNow ITOM does exactly that, seamlessly integrating event management, service mapping, and orchestration with your current systems and environment. ITOM complements your current portfolio of solutions and seeks to orchestrate disparate processes and systems into a single workflow.

The ServiceNow ITOM dashboard gives executives a full view of each service in their environment, with colours associated with incident priority (red = critical; yellow = minor; blue = warning). By integrating event management into your processes, you will have an at-a-glance view of what is working and needs attention across your whole IT environment.

ITOM dashboard

System outages are inevitable, no matter how sophisticated your system and skilled your team. Implementing a solution such as ITOM helps you protect your brand by ensuring your team can proactively identify the root cause of an issue, and resolve it before it negatively impacts your users.

When you implement service mapping and orchestration as part of your ITOM solution, you can identify which services in your environment are impacted when one server fails, and put automation in place to resolve regular issues.

service mapping

If you are interested in seeing how ITOM service mapping and orchestration works, register your interest using the form below and one of our local account executives will be in touch with you to schedule an on-site demo.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email [email protected].


Our ServiceNow stories

What is AIOps?

Gartner has coined another buzzword to describe the next evolution of ITOM solutions. AIOps uses the power of machine learning and big data to provide pattern discovery and predictive analysis for IT Ops.

What is the need for AIOps?

Organisations undergoing digital transformation are facing a lot of challenges that they wouldn’t have faced even ten years ago. Digital transformation represents a change in organisation structure, processes, and abilities, all driven by technology. As technology changes, organisations need to change with it.

This change comes with challenges. The old ITOps solutions now need to manage microservices, public and private APIs, and Internet-of-Things devices.

As consumers, IT managers are used to personalised movie recommendations from Netflix, or preemptive traffic warnings from Google. However, their IT management systems typically lack this sort of intelligence, falling back on traffic-light dashboards.

There is an opportunity in the market to combine big data and machine learning with IT operations.

Gartner has coined this concept AIOps: Artificial Intelligence for IT Ops.

“AIOps platforms utilize big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.” – Colin Fletcher, Gartner

AIOps 1

Current State – Gartner Report

Gartner coined the term AIOps in 2016, although they originally called it Algorithmic IT Operations. They don’t yet produce a Magic Quadrant for AIOps, but that is likely coming.

Gartner has produced a report which summarises both what AIOps is hoping to solve, and which vendors are providing solutions.

Eleven core capabilities are identified, with only four vendors able to deliver all eleven: HPE, IBM, ITRS, and Moogsoft.

How does Splunk do AIOps?

Splunk is well positioned to be a leader in the AIOps field. Their AIOps solution is outlined on their website. Splunk AIOps relies heavily on the Machine Learning Toolkit, which provides Splunk with about 30 different machine learning algorithms.

Splunk provides an enterprise machine learning and big data platform which will help AIOps managers:

  • Get answers and insights for everyone: Through Splunk’s Search Processing Language (SPL), users can analyse past and present behaviour and predict future patterns of IT system and service performance.
  • Find and solve problems faster: Detect patterns to identify indicators of incidents and reduce irrelevant alerts.
  • Automate incident response and resolution: Splunk can automate manual tasks, triggered based on predictive analytics.
  • Predict future outcomes: Forecast IT costs and learn from historical analysis. Better predict points of failure to proactively improve the operational environment.
  • Continually learn to make more informed decisions: Detect outliers, adapt thresholds, alert on anomalous patterns, and improve learning over time.
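
To make the “detect outliers, adapt thresholds” idea in the last bullet concrete, here is a minimal, self-contained sketch in plain Python (not Splunk’s Machine Learning Toolkit): each new data point is compared against statistics computed from a sliding window of recent history, so the threshold adapts as the baseline shifts. The latency series is made up for illustration.

```python
import statistics

def adaptive_outliers(series, window=20, z_limit=3.0):
    """Flag points more than z_limit standard deviations from the mean
    of the preceding window -- a toy version of the adaptive
    thresholding that an AIOps platform automates at scale."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = statistics.mean(hist)
        stdev = statistics.pstdev(hist)
        # Skip flat history (stdev == 0) to avoid division-style blowups.
        if stdev and abs(series[i] - mean) > z_limit * stdev:
            flagged.append(i)
    return flagged

# Steady response times with one sudden spike at index 25.
latency_ms = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99,
              100, 102, 101, 99, 100, 101, 98, 100, 102, 100,
              101, 100, 99, 101, 100, 450, 100, 101]
print(adaptive_outliers(latency_ms))  # -> [25]
```

A static threshold of, say, 200 ms would also catch this spike, but the windowed approach keeps working when normal latency drifts up or down over time, which is exactly the scenario where fixed traffic-light thresholds generate noise.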

Current offerings like ITSI and Enterprise Security also implement machine learning, taking advantage of anomaly detection and predictive algorithms.

As the complexity of IT systems increases, so too will the need to analyse and interpret the large amounts of data they generate. Humans have been doing a good job to date, but there will come a point where the complexity is too great. Organisations that can complement their IT Ops with machine learning will have a strategic advantage over those that rely on people alone.

Conclusion

To find out more about how JDS can help you implement AIOps, contact our team today on 1300 780 432, or email [email protected].
