Blog


How Contract Management Can Help Your Customers

 

Contract Management

In the ServiceNow platform, contracts contain detailed information such as contract number, start and end dates, active status, terms and conditions statements, documents, renewal information, and financial terms.

Contract Management is active by default for all ITSM subscribers and was initially seen as a means of managing the following:

  • Software licensing
  • Certificates and their expiration
  • Asset fleet management

Contract Management continues to grow in capability and adoption, with many organisations citing it as a solution for both IT and non-IT contracts, including:

  • Employee contracts, probation agreements, superannuation/401(k) (HRSD)
  • Customer warranty and rental agreements (CSM)
  • Vendor and partnership documents (ITBM & VRM)

Value Propositions

By choosing to utilize the Contract Management application, your organization will be able to:

  • Reduce risk by utilizing a contract lifecycle management (CLM) solution
  • Enhance your asset management capabilities by linking contracts to your CMDB
  • Leverage a single platform to manage contracts both within and outside of your service management ecosystem
  • Improve operational efficiency via document and signature digitization

Improve End User Satisfaction

  • Submit contracts via both the desktop and mobile interfaces
  • Expedite your signatory process by integrating with Adobe Sign
  • Reduce physical storage requirements by housing your contracts within ServiceNow
  • Create repeatable processes through workflow automation and the use of contract template documents

Configure eSignature Capability

One of the many perks of managing your organization’s contracts through the ServiceNow platform is its ability to seamlessly integrate with enterprise-grade eSignature products, specifically DocuSign and Adobe Sign.

Both DocuSign and Adobe Sign have collaborated with ServiceNow to create Integration Spokes for their respective products.

The pre-packaged workflows, roles and connectors make the initial configuration of these products a streamlined and enjoyable experience, with the added benefit of both vendors providing technical support if required.

 

Conclusion

To learn more about how JDS can optimize your customer's contract lifecycle management, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case

Ensure IT works

Cameron Chambers

Senior Consultant

I am an accomplished IT Service Delivery professional (in both the private and public sectors), with a strong focus on client & customer satisfaction, innovation, all elements of ITSM, project delivery and continuous service improvement.

I’m passionate about identifying process and technical inefficiencies within a team, application, project or business and then closing those gaps via the enhancement of existing technologies, the introduction of new solutions and further developing the people responsible for delivering them.

JDS, a Certified Premier Sales & Services Partner, is rapidly expanding its ServiceNow footprint throughout Australia alongside our AppDynamics, Atlassian and Splunk practices, and I’m proud to be working alongside some of the best and brightest in our space!

Posted by Cameron Chambers in Blog, ServiceNow

Manipulating Service Portal Widgets Without Modifying Them

 

It’s common for organisations to want something a little more than what ServiceNow offers in its service portal, but without breaking any core functionality. In this article, we’ll look at how you can manipulate an existing out-of-the-box widget WITHOUT modifying it.

One option is to clone the widget and change it, but that causes your cloned version to become locked in time, so it won’t benefit from any enhancements or bug fixes ServiceNow applies to the original widget as versions upgrade.

A better approach is to embed the original widget INSIDE another widget and make your modifications there. In this way, you get the best of both worlds: any changes to the original widget are automatically inherited, while you can change the behaviour of that widget with ease.

This article assumes you are confident in developing custom widgets. If you need more information on what service portal widgets are and how they work, please refer to the ServiceNow Service Portal documentation.

Here’s how it can be done (with the full code sample provided at the bottom of the article).

First, notice we’re using the sp-widget directive to embed an OOB widget, but we’re going to use an Angular data object (essentially a variable) to hold the name of that widget. This gives us the flexibility to add HTML/Angular before and after the widget.

We populate this Angular object in the server script. This gives us the ability to set any properties the widget might be expecting. In this case, we’re going to use the ServiceNow catalogue item widget (v2).
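As a sketch of what this looks like (the widget ID and option names below are assumptions based on recent releases, so verify them against the sp_widget table in your instance), the HTML template needs little more than <sp-widget widget="c.data.catalogItemWidget"></sp-widget>, while the server script populates that object:

(function() {
    // Embed the OOB catalogue item widget inside our own widget.
    // 'widget-sc-cat-item-v2' is the assumed ID of the catalogue item
    // widget (v2); confirm it in your instance before relying on it.
    data.catalogItemWidget = $sp.getWidget('widget-sc-cat-item-v2', {
        sys_id: $sp.getParameter('sys_id') // the catalogue item to render
    });
})();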

Now, in our client script, we can refer to data in BOTH our widget and the OOB widget, something that is extremely handy!

Finally, in this example, we’re interested to add some extra functionality when the original OOB widget is submitted. Looking at the client script for the OOB widget, we can see that ServiceNow are using broadcast events to transmit (emit) various actions. This is what we’ll intercept.

As you can see, there are several events we could intercept and augment, like when a submission fails. Once we know what we’re looking for, we can simply listen in the background, waiting for that event to occur.

Once that event fires, we can then choose to do something in addition to what the OOB widget is doing using both client and server code in our custom widget (and importantly, acting on information gathered by the OOB widget itself).
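As a minimal sketch, the listener in our custom widget’s client script might look like this (the event name shown is the one broadcast by the OOB catalogue item widget in recent releases; check its client controller for your version):

api.controller = function($scope) {
    var c = this;
    // Listen in the background for the OOB widget to announce a submission.
    $scope.$on('$sp.sc_cat_item.submitted', function(event, request) {
        // 'request' carries information gathered by the OOB widget itself.
        console.log('Catalog item submitted:', request);
    });
};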

$scope.server.get allows us to send an action to the server, where it is processed and the response returned.
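Continuing the sketch above (the action name and fields are hypothetical), the listener can hand the submission to our widget’s server script for further processing:

// Client script – inside the listener shown earlier:
$scope.server.get({
    action: 'after_submit',      // hypothetical action name
    request_id: request.sys_id
}).then(function(response) {
    console.log('Server responded:', response.data);
});

// Server script – alongside the code that embeds the widget:
if (input && input.action == 'after_submit') {
    gs.info('Catalog request submitted: ' + input.request_id);
    data.result = 'logged'; // returned to the client in response.data
}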

In this way, we can manipulate an out-of-the-box widget provided by ServiceNow without modifying or cloning it.

Please find example XML here: sp_widget example

Posted by Sam Lindsay in Blog, ServiceNow

Virtual Agent Is Your Friend

If there’s one defining characteristic of the social media revolution it’s “make life easy.”

Don’t underestimate the importance of user satisfaction.

Why did Facebook win out over MySpace? Facebook made it easy to connect, easy to post, easy to find people, easy to interact.

Amazon, Google, Twitter and Facebook have spent the last decade refining their technology to lower the barrier to entry for users, making their web applications highly accessible. Have you ever wondered why Google only shows the first ten entries for each search when it could show twenty, fifty or a hundred? Google found that 10 results returned in 0.4 seconds, while 30 results took 0.9 seconds, and that extra half a second led to a 20% loss of traffic because users were impatient. User satisfaction is the golden rule of online services, and so 10 results per page is now standard across all search engines, even though these days the difference is probably much smaller.

When it comes to ServiceNow, organisations should focus on user satisfaction as a way of increasing productivity. ServiceNow allows organisations to treat both their internal staff and their customers with respect, offering services that are snappy, intelligent and well designed. To this end, ServiceNow has developed a number of offerings including Virtual Agent.

What is Virtual Agent?

To say Virtual Agent is a chat-bot is disingenuous. Virtual Agent is a channel for users to quickly and easily get answers to their questions. It is a state-of-the-art system that leverages Natural Language Understanding (NLU) and a complex, decision-based response engine to meet a user’s expectations without wasting their time.

The Natural Language Understanding machine learning engine used by ServiceNow is trained to understand conversational chats using Wikipedia and The Washington Post, and can be enhanced with organisation-specific words and phrases. Natural Language Understanding is the gateway for users to reach a catalogue of prebuilt workflows that resolve common issues.

The Virtual Agent Designer allows for sophisticated workflows with complex decision-making. Not only does this reduce the burden on first-level support, it also drastically reduces the resolution time for common issues, raising user satisfaction with the services your organisation provides.

But the real genius behind Virtual Agent is it can be run from ANYWHERE.

A common problem experienced by organisations with ServiceNow is managing multiple corporate websites. The ServiceNow self-service portal can be seen by some users as yet another corporate web instance and a bridge too far, reducing the adoption of self-service. To combat this, ServiceNow allows its Virtual Agent to be deployed ANYWHERE. As an example, it’s on this WordPress page! Go ahead, give it a try. As soon as you click on “chat”, you’re interacting with the JDSAustraliaDemo1 instance of ServiceNow!

By allowing the Virtual Agent to run from anywhere, customers can incorporate ServiceNow functionality into their other websites, giving users easy access to the services and offerings available through ServiceNow.

Keep your users happy. Start using Virtual Agent.

Posted by Jillian Hunter in Blog, ServiceNow

Introducing the LoadRunner family for 2020

Micro Focus LoadRunner has long been the industry standard and leading solution for performance testing, with LoadRunner (on-premise), Performance Center (enterprise) and StormRunner Load (cloud). As we move into 2020, Micro Focus has standardised its solutions under the LoadRunner banner and re-architected some of them. Meet the new family:

  • LoadRunner Professional (formerly LoadRunner) – an on-premise solution for small teams conducting performance tests.
  • LoadRunner Enterprise (formerly Performance Center) – aimed primarily at enterprise deployments where multiple performance tests are run in collaboration.
  • LoadRunner Cloud (formerly StormRunner Load) – a cloud-based, scalable solution for performance testing.

DevWeb (renamed from TruWeb) is now a fully supported VuGen protocol.

 

Feature Highlights for 2020

LoadRunner Professional 2020

  • Improved Protocol support in DevWeb, TruClient, Web Services and SAP – Web
  • Modernized Controller Online Graphs

LoadRunner Enterprise 2020

  • Support for Elastic Cloud based load generators
  • SSO Authentication
  • ALM Database decoupling

LoadRunner Cloud 2019.12

  • New analysis module enhancements
  • Pause scheduling during a load test run
  • Goal oriented mode enhancements
  • Support for DevWeb scripts
  • Automatic syncing of Git scripts
  • Enhanced Azure DevOps integration
  • New public API operations

 

LoadRunner Cloud 2019.12

With the LoadRunner Cloud team embracing a continuous delivery approach, versions are now numbered by the year and month the update is delivered. This release is 2019.12 (December 2019). With planned quarterly product updates and releases, the next update is expected in 2020.02 (February 2020).

The latest release adds features to the runtime view for tests, improves the DevOps integration, and formalises DevWeb (earlier known as TruWeb) as a supported protocol.

A brief summary is provided below:

 

A few of the most interesting and visible changes are:

  • Transaction response time breakdown metrics are now available in the real-time dashboards

  • Azure DevOps Integration

LoadRunner Cloud now integrates into your Azure DevOps pipelines, providing a summary report for a quick view, with a detailed report also available for analysis. When using the LoadRunner Cloud integration with Azure DevOps, upon completion of a task you can view a new artifact published on the Summary tab and a brief report on the LoadRunner Cloud tab.

 

JDS consultants have more than 15 years of experience working with LoadRunner, and JDS is a Micro Focus Platinum Partner in Australia. Combined with our strong experience in DevOps lifecycle management, this gives JDS an edge in performance testing that is unmatched by other performance testers. If you are in the process of introducing a new system or application, make sure you schedule a performance test with JDS.

Conclusion

Contact our Micro Focus & DevOps team today on 1300 780 432 or at DevOps@jds.net.au.

Our team on the case

Work smarter, not harder

Pradeep Ramdas

Consultant

Pradeep Ramdas is an experienced Technical Consultant in the JDS DevOps practice. Pradeep has over 13 years’ experience, specialising in performance testing and engineering, including experience with Atlassian Jira, AppDynamics and Splunk.


Posted by Jillian Hunter in Blog, Micro Focus, News

Using Common Functions in the Service Catalog

ServiceNow’s service portal offers a lot of flexibility for customers wanting to offer complex and sophisticated offerings to their users. Catalog client scripts can run on load, on change and on submit, but often there’s a need for a common library of functions to be shared by these scripts (so they’re maintained in just one place and produce consistent results).

For example, in this case, if the start date, end date or the SAP position changes, the same script needs to run to calculate who the approvers are for a particular request.

Rather than having three separate versions of the same script, we want to be able to store our logic in one place. Here’s how we can do it.

 

Isolate Script

The latest versions of ServiceNow (London, Madrid, etc.) allow scripts to be isolated or not, giving ServiceNow admins the option of either protecting (isolating) their scripts or accessing broader libraries. In practice, this can be a little frustrating to implement, so in our example we’ll use an alternative method to introduce external JavaScript libraries.

 

UI Scripts

UI scripts, like the one listed below, are very powerful, but they’re also very broad, being applied EVERYWHERE and ALWAYS, so we’ll tread lightly and simply add a function that sets up the DOM for access from our client scripts.
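As a minimal sketch (assuming a global, non-scoped UI script), it could be as simple as:

// Global UI script: runs on every page, so keep it to one lightweight
// function that exposes the browser globals for our client scripts.
function setupDomAccess() {
    window.myWindow = window;
    window.myDocument = window.document;
    window.myAngular = window.angular;
}
setupDomAccess();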

As you can see, we now have some variables we can reference to give us access to the document object, the window object and the angular object from anywhere within ServiceNow.

In theory, we could attach our SAP position changes script here and it would be accessible, but it would also be loaded on EVERY page ServiceNow ever loads, which is not good. What we want is a global function accessible only from WITHIN our catalog item, so we’ll put this in an ON LOAD script using our new myWindow object.

The format we’re using is…

myWindow.functionName = function() {
    console.log('this is an example');
};

This function can then be called from ANYWHERE within our catalog item (on change or on submit). Also, notice the semicolon at the end of the assignment. Don’t forget it; it matters because we’re assigning a function to an object property rather than declaring one.

Now, though, any time we want to call that common function, we can do so with a single line of code.
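For example, an onChange catalog client script (sketched here on the assumption that the UI script and onLoad script above have both run) simply delegates to the shared function:

function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === '') {
        return;
    }
    // One line: call the shared function defined in our onLoad script.
    myWindow.functionName();
}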

 

Following this approach makes the logic used by the approval process easy to find and alter going forward.

Conclusion

To learn more about how JDS can optimize the performance of ServiceNow, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case

Document as you go.

Peter Cawdron

Consultant

Length of Time at JDS

5 years

Skills

ServiceNow, LoadRunner, HP BSM, Splunk.

Workplace Passion

I enjoy working with the new AngularJS portal in ServiceNow.


Posted by Jillian Hunter in Blog, Micro Focus, News

Finding Exoplanets with Splunk

Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices, etc, but there’s no limit to the complexity of data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around another star.

The first confirmed exoplanet around a Sun-like star was discovered in 1995 orbiting 51 Pegasi, which makes this an exciting, emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were super-hot Jupiters, enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered at all sizes and out to distances comparable with planets in our own solar system. We have even discovered exomoons!

 

How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

The dip in starlight intensity can be just a fraction of a percent, but it’s enough to signal that a planet is transiting the star.

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, after (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun, he was ordered to begin a new search for the Great Southern Land we know as Australia. Cook’s observation of the transit of Venus used largely the same technique as NASA’s Hubble, Kepler and TESS space telescopes, but on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may harbour life!

How can we use Splunk to find an exoplanet?

 Science thrives on open data.

All the raw information captured by both Earth-based and space-based telescopes like TESS is publicly available, but there’s a mountain of data to sift through, and it’s difficult to spot needles in this celestial haystack, making this an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, specifically the photometric data containing 642 light curves, to look for exoplanets. I used wget in Linux to retrieve the raw data as text files, but it is possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

Some examples of API queries that can be run against the MAST are:

The raw data for a given observation arrives as a plain text file: a metadata header identifying the star and observation, followed by whitespace-separated columns of measurements.

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored using either HJD (Heliocentric Julian Date, centred on the Sun) or BJD (Barycentric Julian Date, centred on the solar system’s barycentre), with a difference of only about 4 seconds between them.

Some observatories use MJD, the Modified Julian Date (the Julian Date minus 2,400,000.5, which places day zero at November 17, 1858). It sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - which is expressed as days, with the values beyond the decimal point being the fraction of that day when the observation occurred
  • Normalized Flux - which is the apparent brightness of the star
  • Normalized Flux Uncertainty - capturing any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant it can be ignored).

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC and are represented by numbers in the millions, like 2,455,059.6261813, where the integer is the days elapsed since then and the decimal fraction is the portion of the day. At a ratio of 0.00001 to 0.864 seconds, multiplying the fraction by 86,400 gives the seconds elapsed since noon on any given Julian day. Confused? Well, your computer won’t be, as it loves working with decimals and fractions, so although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian date for the start of Epoch on CE 1970 January 1st 00:00:00.0 UT is 2440587.500000
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.12618
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch – floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.
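As a sketch, here is the same conversion expressed in JavaScript (using the worked example above to check the result):

// Convert a Heliocentric Julian Date to Unix epoch milliseconds.
// 2440587.5 is the Julian Date of 1970-01-01 00:00:00 UTC.
function hjdToEpochMillis(hjd) {
    var days = hjd - 2440587.5;                 // days since Unix epoch
    var epochDay = Math.floor(days) * 86400000; // whole days, in ms
    var epochTime = Math.floor((days - Math.floor(days)) * 86400000);
    return epochDay + epochTime;
}

hjdToEpochMillis(2455059.6261813); // returns 1250391702064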

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000))
EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))
EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"
| rex field=_raw "\s+(?P<exo_HJD>24\d+.\d+)\s+(?P<exo_flux>[-]?\d+.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+.\d+)"
| timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if they’re orbiting on the same plane we’re observing. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath depending on your vantage point). With the next generation of space telescopes, like the James Webb, we’ll be able to see these in greater detail.

 

Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now we’ve brought data into Splunk, we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometre Array (SKA), which comes online in 2020, will create more data each day than is produced on the Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude and duration of their transits to identify each planet individually. Over billions of years, the motion of planets around a star falls into a pattern known as orbital resonance, which can be predicted and tested in Splunk to distinguish between planets and even to predict undetected planets!

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.


Posted by Jillian Hunter in Blog, Micro Focus, News

ServiceNow Archiving

Picture this: you've had ServiceNow in your organisation for a couple of years, and you’re starting to notice older records accruing, perhaps to the point where performance isn’t what it used to be. Not all of us have the processing power of the Matrix to handle everything at once!

Forgotten records are not an isolated problem for businesses, but they are an easy issue to address, especially if you want to improve instance speed and efficiency. Unlike overflowing filing cabinets in your office, the impact is easily missed until problems arise. Now is as good a time as ever to embrace the bright start of the new year and consider refreshing your data to improve system performance.

ServiceNow can help with the System Archiving application. This allows you to configure archival and disposal rules specific to your business recordkeeping.

Archiving

Archiving simply moves data to an archive table, using rules configured to meet your retention requirements. These rules run automatically, ensuring the data refresh is ongoing.

In the example below, an SME has over 5,000 incident records and is keen to automatically archive incidents which are inactive and were closed over a year ago.

 

The key point to remember is to not archive records which are likely to be required for reporting. While archived records can be restored to their original tables, they are not designed to be reported on nor are they optimised for searching.

Disposal

Now that our SME has archival rules, the next step is to review the disposal rules. As with hard copies, no one wants records sitting in archive for all eternity. That’s a lot of filing cabinets!

Ideally, our SME would work closely with records managers and data owners to agree on when records can safely be disposed of. For example, it could be agreed that two years after the archival date of an incident record, it can safely be disposed of. Government retention requirements for sensitive data are often 5 to 7 years, and when that date rolls around, disposal rules can automatically rid you of the load.

Conclusion

Is 2019 the year you consider the automated refresh of ServiceNow records for your business? It doesn't need to be complicated! JDS can work with you to review your ongoing needs and help determine safe archival and disposal rules that suit. Reach out to us and we can help you.

Our team on the case

Nicole Harvey


Length of Time at JDS

Since July 2018

Skills

Workplace Solutions

 


Posted by Jillian Hunter in Blog, Micro Focus, News

Governance, Risk & Compliance

ServiceNow has implemented Governance, Risk and Compliance (GRC) based on the OCEG (Open Compliance & Ethics Group) GRC Capability Model.

What is GRC?

  • Governance allows an organisation to reliably achieve its objectives
  • Risk addresses uncertainty in a structured manner
  • Compliance ensures business activities are undertaken with integrity

Whether organisations formally recognize GRC or not, they all need to undertake some form of governance over their business activities or they will not be able to reliably achieve their goals.

When it comes to risk, recognising and addressing uncertainty ensures the durability of an organisation before it is placed in a position where it is under stress. Public and government expectations are that organisations will act with integrity; failure to do so may result in a loss of revenue, loss of social standing and possibly government fines or loss of licensing.

Governance, Risk and Compliance is built around the authority documents, policies and risks identified by the organisation as important.

Depending on the industry, there are a number of standards authorities and government regulations that form the basis for authority documents, providing specific compliance requirements. ISO (the International Organization for Standardization) has established quality assurance standards such as ISO 9000, risk management frameworks such as ISO 31000, and the ISO 27000 standards for information security management.

In addition to these, various governments may demand adherence to standards developed to protect the public, such as Sarbanes-Oxley (to protect investors against possible fraud), HIPAA (the US Health Insurance Portability and Accountability Act of 1996) and GDPR (the European Union’s General Data Protection Regulation). ServiceNow’s GRC allows organisations to manage these complex requirements and ensure they are compliant and operating efficiently.

The sheer number of documents and standards, along with the complexity of how they depend on and interact with each other, can make GRC daunting to administer. ServiceNow has simplified this process by structuring these activities in a logical framework.

Authority documents (like ISO 27000), internal policies and risk frameworks (like ISO 31000) represent a corporate library—the ideal state for the organisation. The question then becomes, how well does an organisation measure up to its ideals in terms of policies and risks?

ServiceNow addresses this by using profile types.

Profile types are a means of translating policies and risks into practice.

When profile types are applied to policy statements, they form the active controls for an organisation: that is, those controls from the library that are being actively monitored.

In the same way, when risks are applied to profile types, they form the risk register for the organisation. This is the definitive list of those specific risks that are being actively measured and monitored, as opposed to all risks.

This approach allows organisations to accurately measure their governance model and understand which areas they need to focus on to improve.

The metrics supporting GRC profile types can be gathered manually via audit-style surveys of employees and third parties, or in an automated fashion using information stored elsewhere within ServiceNow (such as IT Service Management or Human Resources). In addition, GRC compliance metrics for the various profile types can be gathered using orchestration and automation, and by integrating with other systems, to provide an accurate view of governance, risk and compliance.

If you would like to learn more about how ServiceNow can help your organisation manage the complexity of GRC, please speak to one of our account executives.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you manage your organizational risks.


Posted by Jillian Hunter in Blog, Micro Focus, News

Asset Management in ServiceNow

Effective ICT departments are increasingly reliant on solid software and hardware asset management, yet the concept can often strike fear into the hearts of organisations as the complexity and work involved can seem endless. Indeed, Asset Management can be like trying to reach the moon with a step ladder! New advances in ServiceNow seek to change that ladder into a rocket - here’s how.


Business Goals (Launch Pad): Truly understanding your business goals and processes is an important and often underrated start to successful asset management. Clarifying business requirements allows us here at JDS to recommend suitable approaches to customers and help realise benefits faster. A recurring challenge we address is reducing unnecessary costs in over-licensed software. Through the power of ServiceNow technology, we can help you automate the removal and management of licenses.

Accurate Data (Rocket Fuel): Accurate data is the fuel behind asset management. With asset data commonly scattered across multiple systems, trusted data sources are paramount, with organisations reliant on them to track and extract information for reports and often crucial decisions. JDS is experienced in tools such as ServiceNow Discovery and integrations with third-party providers like Micro Focus uCMDB, Microsoft SCCM and Ivanti LANDESK – proven solutions to assist management teams with data for business strategy. Using ServiceNow, we can help plan data imports and transformations to ensure information is accurate. This means asset information can be normalised automatically to improve data quality and ensure accurate reporting.

 

Asset Management on ServiceNow (Rocket Engine): With clear business goals and a focus on accurate data, ServiceNow has further capabilities to propel your asset management process. Customers can now manage the full asset lifecycle with refined expertise. JDS are champions of this automation process, and its proven benefits include streamlining the entire lifecycle and consolidating it so it is managed from one location, greatly improving visibility and overall management.

 

Ongoing Management (Control Panel): With robust asset management now implemented, customers need a suitable control panel to help maintain momentum and drive continuous process improvement. Utilising a mix of existing ServiceNow and customised reports and dashboards, organisations are now able to easily digest data like never before.

Example of Software Asset Management

Example of Hardware Asset Management

Our team here are experienced in assisting customers to set up ongoing asset reviews and audits on these platforms. One example of how we’ve customised this process is by automating regular asset stocktake tasks, which can then be monitored and reported on within the ServiceNow platform.

Conclusion

Successful asset management can be a journey, yet it is well worth the effort, significantly improving processes and reducing operational costs. Our team are experts in devising solutions with customers to realise and maximise the value of efficient asset management, and to help firmly plant your organisation’s foot and flag on the moon! Need help planning and implementing a robust asset management solution? Reach out to us with your current challenges; we would love to help.


Posted by Jillian Hunter in Blog, Micro Focus, News

Micro Focus Technical Bootcamp 2018

What a welcome to Bangkok! A bustling metropolis of 8.2 million people, known for its ornate shrines and vibrant street life, the city hosted me and my colleagues Jim T and Andrew P for this year’s highly anticipated Micro Focus APJ Technical Bootcamp. Held in the first week of December, it consisted of 13 technical streams covering Application Delivery Management (ADM – my chosen area of interest, and the one we’ll deep-dive into here), Security, Information Management and IT Operations Management (ITOM).

Sunday evening saw event attendees converging from all parts of the APJ region and checking into the impressive Hilton Hotel – lucky us! It was fantastic to meet and get to know other techies, not just from Australia but also from India, Singapore, Taiwan, China, the Philippines, Malaysia and, of course, Thailand.

Micro Focus went above and beyond with the information, demonstrations and labs covered in the time we had, so summarising the bootcamp in a blog post is much like experiencing a cold night in Bangkok – practically impossible! However, there are six ADM highlights we are excited to share.

 

1. Latest integrations

It’s widely accepted that developing, delivering and supporting business applications is growing increasingly complex in terms of tools, applications and platforms. A single business transaction typically requires integration between multiple systems, and release cycles are becoming ever shorter (Amazon deploys a new release to production somewhere in its ecosystem every 11.6 seconds!). Customers are demanding better performance from more secure applications with fewer defects. Oh, and they want to consume that app on their chosen device running a particular version of a given operating system, using the browser of their choice, which may not be on the latest version. To put this into perspective, it’s been estimated that the number of unique device/OS/browser version combinations operating in the world today is in excess of 28,000! That’s a lot of cocktails!

Micro Focus is uniquely positioned to tackle these complexity challenges with a suite of integrated technologies. Not only do they provide ‘best of breed’ solutions covering all aspects of DevOps, they also fully embrace and integrate with a myriad of open source applications. The result is that customers can implement the best solution for their needs using open source tools, licensed applications or a combination of both. Examples of what that can potentially look like are below.

2. ALM Octane – providing full lifecycle management for Agile projects

Main features and what’s new:

  • Single point of insight into your CI system - full integration with CI tools (Jenkins and Bamboo) to drive packaging, building, automated testing and deployment of applications
  • Integration to numerous automated testing tools and platforms – UFT, Selenium, LeanFT, Mobile Center, StormRunner Functional, to name a few
  • Integration with Fortify to execute automated security tests, both static (code scans) and dynamic penetration testing
  • Providing full support for the SAFe methodology while fully integrating with Jira, a more team-centric tool
  • Provides full audit compliance

 

3. Mobile Center – the single gateway that expands MF products to mobile technology

Mobile Center (MC) can be integrated with UFT, allowing functional regression testing against an iOS or Android mobile device. Integrate MC with LoadRunner to execute a performance test, or with Fortify to perform a security scan, or even test manually on a real or virtual mobile device by integrating with Sprinter.

4. UFT (Unified Functional Testing) – the industry leader in functional test automation

UFT remains a core focus for MF. With quarterly releases, MF ensures the product is continually enhanced to keep up with the ever-changing market demands and emerging technologies.

  • Support for over 40 technologies and environments
  • Full cross browser testing coverage
  • Full headless API testing
  • Integration with MF Mobile Center and MF StormRunner Functional, allowing automated tests to be executed against virtually any OS platform, browser or mobile device
  • Integration with Git repositories with powerful code comparison capabilities between current test components and the previous revision. Tests stored in the Git repository can in turn be accessed by Jenkins or Bamboo CI tools to be run as part of a Continuous Integration pipeline through ALM Octane

 

5. LeanFT – moving automated regression testing to the left of your application development lifecycle

LeanFT is a powerful and lightweight functional testing solution built specifically for continuous testing, allowing developers to build and run automated unit regression tests directly from their chosen IDE. Key features include:

  • Integrates with common IDEs (including Visual Studio, Eclipse, IntelliJ, Android Studio)
  • Cross-platform and cross-browser support. Full mobile support (using Mobile Center)
  • Easily create Selenium tests with LeanFT for Selenium
  • CI/CD-ready Docker image to allow execution of tests within containers
  • Built-in Cucumber BDD Template

6. StormRunner Functional – Complete on-demand digital lab in the cloud

SRF provides a lab consisting of multiple browsers running on various Windows, Mac and Linux versions across a selection of resolutions. In addition, iOS and Android devices are available with a selection of browsers. This allows developers and test engineers to automatically execute parallel test cases across multiple platforms, on-demand in the cloud. SRF can be integrated with various MF and open source automated testing tools. Alternatively, you can record and maintain your automated test scripts directly in SRF. SRF tests can also be run from a Bamboo or Jenkins CI pipeline. Lastly, SRF provides comprehensive reporting and defect management with integration into ALM, Octane and Jira.

So what’s new in SRF 1.61?

Conclusion

Micro Focus has truly demonstrated maturity in the ADM space, with an unrivalled breadth and depth of tools simultaneously enabling Continuous Integration and Continuous Delivery of applications, whilst ensuring a level of quality customers can rely on.

  

Find out more

Interested to know more about these new ADM capabilities? Our specialist team is here to help you improve your business application development, delivery and support.

Our team on the case

'Do what you can, with what you have, where you are.' - Theodore Roosevelt.

Reinhardt Moller

Technical Consultant

Length of Time at JDS

9.5 years

Skills

Products: Primary: HPE ALM and UFT; Secondary: HPE BSM and LoadRunner, ServiceNow

Personal: Photography

Workplace Passion

Helping customers build solutions to tackle testing and monitoring challenges


Posted by Jillian Hunter in Blog, Micro Focus, News
 

 

Splunk .conf18 – Splunk Next: 10 Innovations

As part of .conf18 and in the balmy Florida weather surrounded by theme parks, JDS was keen to hear more about what’s coming next from Splunk – namely Splunk Next.

Splunk CEO Doug Merritt announced a lot of new features released with Splunk 7.2 which you can read about in our earlier post (Splunk .conf recap). He also talked about Splunk’s vision of creating products that reduce the barriers to getting the most out of your data. As part of that vision he revealed Splunk Next which comprises a series of innovations that are still in the beta phase.

Being in beta, these features haven’t been finalised yet, but they showcase some of the exciting things Splunk is working towards. Here are the Top 10 innovations that will help you get the most out of your data:

  1. Splunk Developer Cloud – develop data-driven apps in the cloud, using the power of Splunk to provide rich data and analytics.
  2. Splunk Business Flow – an analytics-driven approach to exploring users’ interactions and identifying ways to optimise and troubleshoot. This feature generates a process flow diagram based solely on your index data, showing you what users are doing and where you can optimise the system to make smarter decisions.
  3. Splunk Data Fabric Search – with the addition of an Apache Spark cluster, you can now search over multiple disparate Splunk instances with surprising speed. This federated search will allow you to search trillions of events and metrics across all your Splunk environments.
  4. Splunk Data Stream Processor – a GUI interface that lets you test your data ingestion in real time without relying on config files. You can mask data and send it to various indexes or even different Splunk instances, all from the GUI.
  5. Splunk Cloud Gateway – a new gateway for the Splunk Mobile App; get Splunk delivered to your mobile device securely.
  6. Splunk Mobile – a new mobile interface for Splunk, which shows dashboards in a mobile-friendly format. Plays nicely with the Cloud Gateway.
  7. Splunk Augmented Reality – if you have a VR headset, you can pin glass-table-style KPI metrics onto real-world devices. It’s designed so you can walk around a factory floor and see IoT data metrics from the various sensors installed. Also works with QR codes and your smartphone. Think Terminator vision!
  8. Splunk Natural Language Processor – lets you integrate an AI assistant like Alexa, ask English-language questions and get English-language responses – all from Splunk. e.g. “Alexa, what was the highest selling product last month?” It would be a great addition to your organisation’s ChatOps.
  9. Splunk Insights for Web and Mobile Apps – helps your developers and operators improve the quality of experience delivered by your applications.
  10. Splunk TV – an Apple TV app which rotates through Splunk dashboards. You no longer need a full PC running next to your TV display – just an Apple TV.

To participate in any of the above betas go here:

https://www.splunk.com/en_us/software/splunk-next.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.

Our team on the case

Work smarter, not harder. (I didn't even come up with that. That's smart.)

Daniel Spavin

Performance Test Lead

Length of Time at JDS

7 years

Skills

IT: HPE Load Runner, HPE Performance Center, HPE SiteScope, HPE BSM, Splunk

Personal: Problem solving, Analytical thinking

Workplace Solutions

I care about quality and helping organisations get the best performance out of their IT projects.

Organisations spend a great deal of time and resources developing IT solutions. You want IT speeding up the process, not holding it up. Ensuring performance is built in means you spend less time fixing your IT solutions, and more time on the problems they solve.

I solve problems in our customers’ solutions, so customers can use their solutions to solve problems.


Posted by Jillian Hunter in Blog, Micro Focus, News

Splunk .conf18

Splunk’s annual conference took place in Orlando, Florida this year, and JDS was there to soak up the sun and the tech on offer.

Three days went by quickly, with exciting announcements (dark mode anyone?), interesting discussion and the chance to mingle with customers and Splunkers alike. We also enjoyed the chance to meet up with the US distributors of PowerConnect, and time spent with the uberAgent team.

Splunk CEO Doug Merritt kicked off the keynote, announcing a raft of features to Splunk 7.2 along with advancements released in beta – dubbed Splunk Next (but more of that to come, so stay tuned). Here’s a rundown of what’s new to 7.2:

  • SmartStore – some smarts behind using S3 for storage, allowing you to scale your indexer compute and storage separately. Great news if you want to expand your indexers but don’t want the associated costs of SSD storage. SmartStore also gives you access to the impressive durability and availability of S3, simplifying your backup requirements.
  • Metrics Workspace – a new GUI interface for exploring metrics. You can drag and drop both standard events and metrics to create graphs over time and easily save them directly to dashboards.
  • Dark Mode – as simple as it sounds, with the crowd going wild for this one. You can now have your NOC display dark-themed dashboards at the click of a mouse.
  • Official Docker support – Splunk Enterprise 7.2 now officially supports Docker containers, letting you quickly scale up and down based on user demand.
  • Machine Learning Toolkit 4.0 – now easier to train, test and validate your machine learning use cases. Includes the announcement of GitHub-based solutions to share with fellow Splunkers.
  • ITSI 4.0 – this latest version includes predictive KPIs, so your glass tables can show the current state and the predicted state 30 minutes into the future. There’s also predictive cause analysis, to drill down and find out what is likely to cause issues in the future. Metrics can now also feed into KPIs, allowing for closer integration with Splunk Insights for Infrastructure.
  • ES 5.1.1 – introduces event sequencing to help with investigations, a Use Case Library to help with adoption, and the Investigation Workbench for incident investigation.
  • Health Report – in addition to the monitoring console, the health report shows the health of the platform, including disk, CPU, memory and Splunk-specific checks. It’s accessible via a new icon next to your login name.
  • Guided Data Onboarding – guides are now available to help you onboard data, like those you can find in Enterprise Security. They include diagrams, high-level steps and documentation links to help set up and configure your data source.
  • Logs to Metrics – a new GUI feature to help configure and convert logs into metric indexes.
  • Workload Management – prioritise users’ searches based on your own criteria, like QoS for searching.

 

If you weren’t lucky enough to go in person, or want to catch up on a missed presentation, the sessions are now available online:

https://conf.splunk.com/conf-online.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.


Posted by Jillian Hunter in Blog, Micro Focus, News