Tech Tips

Finding Exoplanets with Splunk


Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices and the like, but there's no limit to the complexity of the data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around a star other than our Sun.

The first confirmed exoplanet orbiting a Sun-like star was discovered in 1995 around 51 Pegasi, which makes this an exciting, emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were "hot Jupiters": enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered at all sizes and out to distances comparable with the planets in our own solar system. There are even tantalising hints of exomoons!


How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

These telescopes stare at distant stars, watching for the telltale dip in brightness as a planet passes in front of its host. The dip in starlight intensity can be just a fraction of a percent, but it's enough to signal that a planet is transiting the star.

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, after (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun, he was ordered to begin a new search for the Great Southern Land we now know as Australia. Cook's observation of the transit of Venus used largely the same technique as NASA's Hubble, Kepler and TESS space telescopes, only on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may contain life!

How can we use Splunk to find an exoplanet?

Science thrives on open data.

All the raw information captured by both Earth-based and space-based telescopes like TESS is publicly available, but there's a mountain of data to sift through, and it's difficult to spot needles in this celestial haystack, making this an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, and specifically its photometric data containing 642 light curves, to look for exoplanets. I used wget in Linux to retrieve the raw data as text files, but it is possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

Some examples of API queries that can be run against the MAST are:
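
For instance, a cone search around a star of interest can be requested by passing a JSON payload to the invoke endpoint. This is a minimal sketch only: the service name follows the public MAST v0 API, while the coordinates (roughly those of Gliese 436) and parameters are illustrative, so check the MAST documentation for the exact syntax.

https://mast.stsci.edu/api/v0/invoke?request={
    "service": "Mast.Caom.Cone",
    "params": { "ra": 175.546, "dec": 26.707, "radius": 0.2 },
    "format": "json",
    "pagesize": 1000,
    "page": 1
}

(In practice, the request JSON is URL-encoded before being sent.)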

The raw data for a given observation appears as:
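
The original screenshot isn't reproduced here, but the downloaded text files look something like this hand-built illustration (the values are made up; the layout follows the description below):

\STAR_ID = "Gliese 436"
... (100+ further lines of metadata) ...
|       HJD       | Normalized_Flux | Normalized_Flux_Uncertainty |
|      double     |     double      |           double            |
|       days      |                 |                             |
 2455059.6261813       0.99982                0.00023
 2455059.6265290       0.99855                0.00025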

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored using either HJD (Heliocentric Julian Dates) or BJD (Barycentric Julian Dates), referenced to the centre of the Sun or to the solar system's barycentre respectively (the two differ by at most about four seconds).

Some observatories use MJD, the Modified Julian Date (the Julian Date minus 2,400,000.5, which places day zero at midnight on November 17, 1858). It sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - which is expressed as days, with the values beyond the decimal point being the fraction of that day when the observation occurred
  • Normalized Flux - which is the apparent brightness of the star
  • Normalized Flux Uncertainty - capturing any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant it can be ignored).

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC, so they run into the millions, like 2,455,059.6261813, where the integer part is the number of days elapsed since then and the decimal fraction is the portion of the day. Since 0.00001 of a day is 0.864 seconds, multiplying the fractional part by 86,400 gives the seconds elapsed since noon on any given Julian Day. Confused? Well, your computer won't be, as it loves working in decimals and fractions, so although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian date for the start of Epoch on CE 1970 January 1st 00:00:00.0 UT is 2440587.500000
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.1261813
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch - floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.
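
If you'd like to sanity-check the arithmetic outside Splunk, here's a minimal JavaScript sketch of the same conversion. (It ignores the heliocentric light-travel correction, which amounts to minutes at most and doesn't matter for spotting transits.)

function hjdToDate(hjd) {
    var UNIX_EPOCH_JD = 2440587.5; // Julian Date at 1970-01-01 00:00:00 UTC
    var MS_PER_DAY = 86400000;
    return new Date(Math.round((hjd - UNIX_EPOCH_JD) * MS_PER_DAY));
}

// Our worked example from above:
hjdToDate(2455059.6261813).toISOString(); // "2009-08-16T03:01:42.064Z"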

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

# Unix epoch time (in milliseconds) of the observation, converted from HJD
EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000))

# The same value rendered as a human-readable date
EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))

# Set _time so Splunk's time-based commands line up with the observation
EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"
| rex field=_raw "\s+(?P<exo_HJD>24\d+\.\d+)\s+(?P<exo_flux>[-]?\d+\.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+\.\d+)"
| timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if their orbital plane happens to line up with our line of sight. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath depending on your vantage point). With the next generation of space telescopes, like the James Webb, we’ll be able to see these in greater detail.


Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now that we've brought data into Splunk, we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometre Array (SKA), which comes online in 2020, will create more data each day than is produced on the Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude and duration of their transits to identify them individually. Over the course of billions of years, the motion of planets around a star settles into a pattern known as orbital resonance, which can be modelled and tested in Splunk to distinguish between planets and even predict undetected ones!

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.

Our team on the case

Document as you go.

Peter Cawdron

Consultant

Length of Time at JDS

5 years

Skills

ServiceNow, Loadrunner, HP BSM, Splunk.

Workplace Passion

I enjoy working with the new AngularJS portal in ServiceNow.


Browser Console

When working on ServiceNow portal widgets and the like, it can be useful to write information out to the browser's console log.

You can display the browser console by pressing F12 but, as you’ll notice, the console is a bit noisy. Writing information to the console is useful, but it can be difficult to spot the exact information you’re looking for.

There are a number of console commands, but in this article we’ll only focus on the log action and how that can be used to simplify debugging a service portal widget in ServiceNow.

In JavaScript, all that's needed to write to the log is…

console.log('this is important information');

But try finding that in your log when the log extends for a few pages.

There are a couple of tricks to simplify this; one is to add a dash of color.

console.log('%cthis is important information','color:red');

You can even add different colors to highlight different pieces of information by adding multiple styling breaks.

var thisObject = {'name':'John Smith', 'address':'123 Eagle St', 'company':'JDS Australia', 'occupation':'ServiceNow consultant'};

for (var thisField in thisObject){
    console.log('%c' + thisField + ' = %c' + thisObject[thisField], 'color:green', 'color:red');
}

As you can see, it’s very easy to find the debugging information we’re looking for, but there’s one other tip that might come in handy and that’s using the console filter.

At the top of the console log there's a filter box that lets you isolate exactly what you're looking for, removing all the noise.

If we add a unique preface to all our log statements, we can then filter on that to see only the information that's important to us. In this example, we'll use a double colon as the prefix.

console.log('%c::this is important information','color:red');
var thisObject = {'name':'John Smith', 'address':'123 Eagle St', 'company':'JDS Australia', 'occupation':'ServiceNow consultant'};

for (var thisField in thisObject){
    console.log('%c::' + thisField + ' = %c' + thisObject[thisField],'color:green','color:red');
}

The console log is a useful way of streamlining portal development, so use it to the fullest: filter and/or color your inputs so you can debug your widgets with ease.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you.


Glide Variables

ServiceNow uses a special, highly flexible type of variable to store information in what looks like a single field but is actually a complex storage/management system, with a database column type called glide_var.

As each record can have a different number of variables stored as key/value pairs, there's no easy way of dot-walking to the name of a variable within the glide_var, as the names can change from record to record within the same table! You can, however, detect and retrieve variables from a glide_var by treating the GlideRecord field as an object.

In this example, from an automated test framework step, you can see each of the variables and their values from the database glide_var column inputs.

var gr = new GlideRecord('the table you are looking at');
gr.get('sys_id of the record you are looking at');

// 'inputs' is the glide_var column; treat it as an object and loop over its keys
for (var eachVariable in gr.inputs) {
    gs.info(eachVariable + ' : ' + gr.inputs[eachVariable]);
}

If you run this in a background script, you'll see precisely which variables exist and what their values are.

Conclusion

It doesn't need to be complicated! Reach out to us and we can help you.


Understanding Database Indexes in ServiceNow

ServiceNow uses the MySQL database to manage its information, so whenever users are looking at a list of records they’re looking at the results of a database query.

Like all databases, MySQL is designed to handle large volumes of information in the most efficient manner possible. To do this, MySQL has a concept known as indexes. These work very much like the index in the back of a book. Instead of flipping through every page looking for something, a quick trip to the index can let you find all the references for a subject in one spot. In the same way, table indexes increase the speed of database queries.

Real world example

Here’s a real world example of a user waiting upwards of a hundred seconds for the following query on a table with 200,000 rows.
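
The actual list view isn't reproduced here, but the filter was essentially "Assignment group is Service Desk AND Assigned to is John Smith" (the names and sys_ids are hypothetical), which MySQL ends up executing as something like:

SELECT * FROM task WHERE assignment_group = '<group sys_id>' AND assigned_to = '<user sys_id>';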

On investigation, it became apparent that there was a database index that should cover this scenario, but it wasn’t being automatically picked up by ServiceNow because of the way indexes work.

Indexes are ONLY used if the columns within them are used sequentially, and this is an important point to understand.

Understanding index order

It doesn't matter which order the columns appear within the query itself, but the query MUST include the index's columns in the order they are defined in the index, read from left to right. For example, when it comes to this particular index, the following holds true.
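
As a rough illustration (the original table isn't reproduced here), for the index we're examining, which covers Active, Assignment Group, Assigned To and Number in that order:

  • Filtering on Active can use the index (it's the left-most column)
  • Filtering on Active and Assignment Group can use the index
  • Filtering on Active, Assignment Group and Assigned To can use the index
  • Filtering on Assignment Group and Assigned To alone cannot, because the left-most column (Active) is missing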

Also, it’s important to note that key columns (like Assignment Group) probably occur in several indexes, so just because this particular index isn’t used doesn’t mean no index will be used. Another index might pick up where this one leaves off.

When a query or report is run, the database engine will examine all the indexes on the table to determine which index is the most efficient to use.

All the reference fields used in ServiceNow have an index applied to them, but these aren’t always the most efficient way to query the database. For example, consider these two indexes.

  1. Assigned To, Active
  2. Active, Assignment Group, Assigned To, Number

The first index is the default index for the Assigned To column as a reference field. Although it filters on Active, that is only applied AFTER the Assigned To.

The second index is the one we’re examining, but it can only be used if Assignment Group is also used (following the order of the index from left to right).

If you have a table with a million rows but only 20,000 of them are active, then which of these approaches will be more effective?

a) Select EVERY entry assigned to a person and only then filter on active records
b) Select ONLY active records and then filter on that person

As 98% of the records are inactive, option (b) will produce the best results as it will completely ignore 980,000 records before it starts filtering by name.

So in this case, querying by Active, Assignment Group and Assigned To will be considerably quicker than querying by Assigned To and then Active.

Going back to our real world example, we can see that our index is NOT being used because the very first column in the index hasn’t been included so none of the other parts of the index are being applied. By adding Active we’ll bring this index into play.

And the results?

We’ve gone from 100 seconds down to less than one second!

In this particular case, the results were exactly the same, as this person was only interested in active records anyway but had forgotten to include Active in their query. But what if we wanted both active and inactive results? Ah, this is where it gets interesting…

Again, by including Active we’re allowing the MySQL database to use this index and improve performance, and look at the results.

Rather than waiting almost two minutes, we have our results in under a second.

In theory, this query is now exactly the same as the original (which didn’t specify whether a record was active or not) and yet look at how much faster it is because the database can use this index.

Also, it's interesting to note that in the absence of this index, the database would have used index (1) above, but because Active was applied second, it was grossly inefficient.

Trying this same approach on a table with 2.7 million rows, retrieving ALL the rows (with no query) took 21 seconds. Using the seemingly counterintuitive approach of "active=true or active = false" consistently reduced that to 13 seconds, and still returned all 2.7 million rows! That’s a reduction of 40% in the query time!

JDS doesn’t recommend using this "active=true or active=false" query. It’s included only as an example to help make the point. All list views should have a query behind them, and those queries should be based on the underlying database indexes.

The lessons from this are...

  • Don’t underestimate the importance of your database indexes.
  • Be sure to conduct an audit of your commonly used application modules and slow running transactions to make sure the queries in them leverage database indexes properly.

Find out more

To learn more about how JDS can optimize the performance of ServiceNow, contact our team today on 1300 780 432, or email contactus@jds.net.au.


Fast-track ServiceNow upgrades with Automated Testing Framework (ATF)

Why automate?

ServiceNow provides two releases a year, delivering new features to consistently support best practice processes. ServiceNow has flagged they will move towards "N-1" upgrades in 2019, meaning every customer will need to accept each release in the future. To keep up, ServiceNow customers should consider automated testing, which provides regression testing that is completed consistently for each release and reduces the time and effort needed per upgrade.

Effective test automation on ServiceNow is provided by the Automated Test Framework (ATF), which is included in the ServiceNow platform for all applications. It enables no-code and low-code users to create automated tests with ease. Aided by complementary processes and methodology, ATF reduces upgrade pain by cutting back on manual tests, reducing business impact and accelerating development efficiency.

What is the Automated Test Framework?

Testing frameworks can be viewed as a set of guidelines used for creating and designing test cases. Test automation frameworks take this a step further by using a range of automation tools designed to make the practice of quality assurance more manageable for testing teams.

ServiceNow's ATF application combines the above into an easy-to-use, easy-to-implement solution specifically for ServiceNow applications and functionality. Think of ATF as a tool to streamline your upgrade and QA processes by building tests to check whether software or configuration changes have potentially 'broken' any existing functionality. It also means developers are no longer required to start invasive activities like code refactoring to generate new test cases.

Overview of test creation in ServiceNow ATF

ATF’s unique features include:

  • Codeless setup of test steps and templates (reusable across multiple tests)
  • Testing Service Catalog from end-to-end including submitting Catalog forms and request fulfilment
  • Module testing e.g. ITSM (Incident, Problem & Change Management)
  • Testing forms in the Service Portal (Included in London release)
  • Batching with test suites e.g. Execute multiple tests in a defined order and trigger based on a schedule
  • Custom step configurations & unit testing using the Jasmine test framework (see the sketch below)
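
To give a flavour of that last point, this is what a Jasmine-style unit test looks like. It's a generic sketch: the function under test, calculatePriority, is hypothetical and stands in for whatever server-side logic you want to check against expected behaviour.

describe('priority calculation', function() {
    it('sets priority 1 for high impact and high urgency', function() {
        // calculatePriority is a hypothetical script include under test
        expect(calculatePriority('1', '1')).toBe('1');
    });

    it('sets priority 4 for low impact and low urgency', function() {
        expect(calculatePriority('3', '3')).toBe('4');
    });
});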

Using the codeless test step configuration in ATF to set a Variable value on a Service Catalog form

Simplify your upgrades

For most software upgrades, testing is a burdensome and complicated process, with poorly defined test practices often leading to compromised software quality and, potentially, failed projects. Without automation, teams are forced to manually test upgrades - a costly, time-consuming and often ineffective exercise. In most cases, this can be attributed to a lack of testing resources and missing test processes, leading to inconsistent execution.

To address these issues, ATF introduces structure and enforces consistency across different tests applied to common ServiceNow use cases. Test consistency is important, as it forms a baseline of instance stability against existing issues, meaning defects caused by upgrades can be more reliably identified. ATF also allows for full automation of test suites, where the run order can be configured based on tests that depend on each other.

The following illustrates a simple test dependency created in ATF:

An example ‘hierarchical’ test structure implemented within ATF

A common issue resolved by automated testing is the impact on Agile development cycles. Traditionally, developers would run unit tests by hand, based on steps also written by the developer, potentially leading to missed steps and incomplete scenarios. ATF can be used to run automated unit tests between each build cycle, saving time through early detection and remediation of software issues along with shortened development cycles. The fewer issues present before the upgrade, the fewer issues introduced during the upgrade.

A common requirement of post-upgrade activities is to execute a full regression test against the platform. This means accounting for the countless business rules, client scripts, UI policies and web forms that may have been affected. Rather than factoring all of these scenarios into lengthy UAT/regression cycles, ATF can reduce the load on the business by running those common and repetitive test cases involving multiple user interactions and data entry.

The below example shows how a common use case for testing, field ‘state’ validation on a Service Portal form, is applied using the test step module:

Validating visible & mandatory states of variables with ATF against a Record Producer in Service Portal

Unfortunately, not everything can be automated with ATF out of the box; there are gaps around complex UI components, portal widgets and custom development work. It is also important to note that custom functionality will add overhead to the framework implementation, requiring specialised scripting knowledge and use of the 'Step Configurations' module to create custom test steps.

When configured properly, it's possible to automate between 40 and 60% of test cases with ATF, depending on environment complexity and timeframes. The benefits are most visible during regression testing post-upgrade and unit testing during development projects.

In summary, implementing ATF is a great way of delivering value to ServiceNow upgrade projects and enabling development teams to be more agile. Assessing the scope and depth of testing using an agreed methodology is a great way to determine what is required and to achieve 'buy-in' from others, rather than taking an 'automate everything' approach.

JDS is here to help

Recognised as experts in the local Test Automation space for over 12 years, JDS' specialist team has adapted this experience to provide a proven framework for ServiceNow ATF implementation.

We have developed a Rapid Automated Testing tool which can use your existing data to take up to 80% of the work out of building your automated tests. Contact us today to find out how we can build test cases based on real data and automate the development of your testing suite in a fraction of the time that manual test creation would require.

We can help you get started with ServiceNow ATF. In just a few days, a ServiceNow ATF “Kick Start” engagement will provide you with the detail you need to scope and plan ATF on your platform.

Conclusion

Want to know more? Email servicenow@jds.net.au or call 1300 780 432 to reach our team.

Our team on the case

'Please accept change'—parking ticket machine at the shopping centre near me.

Hayden Knight

ServiceNow Technical Consultant

Length of Time at JDS

Since June 2017

Skills

  • ServiceNow Configuration and System Administration
  • ITIL (ITSM)
  • Web development/IT systems administration duties
  • Business process modelling (BPMN) and analysis
  • Programming knowledge/experience—JavaScript, AngularJS, PHP
  • PostgreSQL/MySQL query and command line database management
  • Linux-based virtual machines – build, configuration and system administration
  • Experience integrating automation software within existing software development environments
  • Software versioning and development collaboration using Gitlab/Github
  • Managing remote and local Linux-based virtual machines over secure networks

Workplace Passion

As a technical consultant at JDS, I assist end-users and client organisations in achieving their IT service management goals through the integration and technical support of ServiceNow ITSM products. I’m passionate about the ways in which process automation driven by software solutions, can assist organisations and individuals in delivering a better all-around experience for both their internal and external customers. In my view, technology should be there to assist us and make work more engaging by streamlining repetitive tasks rather than becoming a burdensome addition to the everyday.

Splunk .conf18 – Splunk Next: 10 Innovations

As part of .conf18 and in the balmy Florida weather surrounded by theme parks, JDS was keen to hear more about what’s coming next from Splunk – namely Splunk Next.

Splunk CEO Doug Merritt announced a lot of new features released with Splunk 7.2, which you can read about in our earlier post (Splunk .conf recap). He also talked about Splunk's vision of creating products that reduce the barriers to getting the most out of your data. As part of that vision, he revealed Splunk Next, which comprises a series of innovations that are still in the beta phase.

Being in beta, these features haven’t been finalised yet, but they showcase some of the exciting things Splunk is working towards. Here are the Top 10 innovations that will help you get the most out of your data:

  1. Splunk Developer Cloud – develop data-driven apps in the cloud, using the power of Splunk to provide rich data and analytics.
  2. Splunk Business Flow – an analytics-driven way to explore users' interactions and identify opportunities to optimise and troubleshoot. This feature generates a process flow diagram based solely on your index data, showing what users are doing and where you can optimise the system to make smarter decisions.
  3. Splunk Data Fabric Search – with the addition of an Apache Spark cluster, you can now search over multiple disparate Splunk instances with surprising speed. This federated search will allow you to search trillions of events and metrics across all your Splunk environments.
  4. Splunk Data Stream Processor – a GUI interface that lets you test your data ingestion in real time without relying on config files. You can mask data, send it to various indexes or even different Splunk instances, all from the GUI.
  5. Splunk Cloud Gateway – a new gateway for the Splunk Mobile App; get Splunk delivered to your mobile device securely.
  6. Splunk Mobile – a new mobile interface for Splunk, which shows dashboards in a mobile-friendly format. Plays nicely with the Cloud Gateway.
  7. Splunk Augmented Reality – if you have a VR headset, you can pin glass-table-style KPI metrics onto real-world devices. It's designed so you can walk around a factory floor and see IoT data metrics from the various sensors installed. Also works with QR codes and your smartphone. Think Terminator vision!
  8. Splunk Natural Language Processor – lets you integrate an AI assistant like Alexa, asking English-language questions and getting English-language answers, all from Splunk, e.g. "Alexa, what was the highest selling product last month?" It would be a great addition to your organisation's ChatOps.
  9. Splunk Insights for Web and Mobile Apps – helps your developers and operators improve the quality of experience delivered by your applications.
  10. Splunk TV – an Apple TV app which rotates through Splunk dashboards. You no longer need a full PC running next to your TV display, just an Apple TV.

To participate in any of the above betas go here:

https://www.splunk.com/en_us/software/splunk-next.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.

Our team on the case

Work smarter, not harder. (I didn't even come up with that. That's smart.)

Daniel Spavin

Performance Test Lead

Length of Time at JDS

7 years

Skills

IT: HPE Load Runner, HPE Performance Center, HPE SiteScope, HPE BSM, Splunk

Personal: Problem solving, Analytical thinking

Workplace Solutions

I care about quality and helping organisations get the best performance out of their IT projects.

Organisations spend a great deal of time and resources developing IT solutions. You want IT speeding up the process, not holding it up. Ensuring performance is built in means you spend less time fixing your IT solutions, and more time on the problems they solve.

I solve problems in our customers’ solutions, so customers can use their solutions to solve problems.


Splunk .conf18

Splunk's annual conference took place in Orlando, Florida this year, and JDS was there to soak up the sun and the tech on offer.

Three days went by quickly, with exciting announcements (dark mode anyone?), interesting discussion and the chance to mingle with customers and Splunkers alike. We also enjoyed the chance to meet up with the US distributors of PowerConnect, and time spent with the uberAgent team.

Splunk CEO Doug Merritt kicked off the keynote, announcing a raft of features to Splunk 7.2 along with advancements released in beta – dubbed Splunk Next (but more of that to come, so stay tuned). Here’s a rundown of what’s new to 7.2:

  • SmartStore – some smarts behind using S3 for storage, allowing you to scale your indexer compute and storage separately. Great news if you want to expand your indexers, but don't want the associated costs of SSD storage. SmartStore also gives you access to the impressive durability and availability of S3, simplifying your backup requirements.
  • Metrics Workspace – a new GUI interface for exploring metrics. You can drag and drop both standard events and metrics to create graphs over time and easily save them directly to dashboards.
  • Dark Mode – as simple as it sounds, with the crowd going wild for this one. You can now have your NOC display dark-themed dashboards at the click of a mouse.
  • Official Docker support – Splunk Enterprise 7.2 now officially supports Docker containers, letting you quickly scale up and down based on user demands.
  • Machine Learning Tool Kit 4.0 – now easier to train, test and validate your Machine Learning use cases. Includes the announcement of GitHub-based solutions to share with fellow Splunkers.
  • ITSI 4.0 – this latest version includes predictive KPIs, so your glass tables can show the current state and the predicted state 30 minutes in the future. There's also predictive cause analysis – to drill down and find out what will likely cause issues in the future. Metrics can now also feed into KPIs, allowing for closer integration with Splunk Insights for Infrastructure.
  • ES 5.1.1 – introduces event sequencing to help with investigations, a Use Case Library to help with adoption, and the Investigation Workbench for incident investigation.
  • Health Report – in addition to the monitoring console, the health report shows the health of the platform, including disk, CPU, memory, and Splunk-specific checks. It's accessible via a new icon next to your login name.
  • Guided Data Onboarding – guides now available to help you onboard data, like those you can find in Enterprise Security. They include diagrams, high-level steps, and documentation links to help set up and configure your data source.
  • Logs to Metrics – a new GUI feature to help configure and convert logs into metric indexes.
  • Workload Management – prioritise users' searches based on your own criteria – like a QoS for searching.


If you weren’t lucky enough to go in person, or want to catch up on a missed presentation, the sessions are now available online:

https://conf.splunk.com/conf-online.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.


ServiceNow Catalog Client Scripts: G_Form Clear Values

ServiceNow's Service Portal allows businesses to interact with their users and customers through a catalog of various items, giving users easy access to services that help them conduct their business activities.

Recently, JDS was engaged to assist a customer with a Human Resources (HR) onboarding form that integrated with SAP. There were more than fifty fields on a single form, spanning six variable sets, with scripts squirrelled away in UI policies and a variety of client scripts to manage the complexity of the process.

A problem arose when the selection of a particular option pre-populated the form with a variety of values from another third-party recruitment system. If the user changed their mind and chose another option, fields were hidden that still retained values. This caused erroneous values to be posted to the backend.

As the form was large, scrolling well off the screen, it was difficult to troubleshoot problems, but JDS realised that the ability to reset the form to its default values whenever key choices changed or UI policies were switched would drastically simplify it. Unfortunately, g_form.clearValue() didn't work quite as expected.

Behind the scenes, there are a number of special functions that allow these catalog items to respond to a user’s input. In this article, we’re going to examine the behaviour of the g_form.clearValue(), which is used to clear values when switching between options.

Depending on the requirements of a particular service catalog item, there can be a need to clear the values on a form and start again (such as when switching between users with different privileges, etc.), but this is where the g_form.clearValue() can be a little tricky.

Consider this: the code below seems simple enough—clear each value, but look at what happens when we try that…
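
The original script isn't shown here, but it boiled down to calls like these (the field names are illustrative):

function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading) {
        return;
    }
    g_form.clearValue('requested_for');  // reference field - clears as expected
    g_form.clearValue('description');    // plain text - clears as expected
    g_form.clearValue('cover_colour');   // choice list - jumps to the first option!
    g_form.clearValue('has_4g');         // Boolean - flips from no to yes!
}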

The first two fields respond as you’d expect, clearing values from a reference field and a plain text field, but look at the colour option: it switches from pink to black, while the option of having 4G switches from no to yes.

The problem lies in how ServiceNow handles fields. A selection control, like the one used for cover colours, switches to ‘none,’ but as this particular list of choices doesn’t have ‘none’ as an option, ServiceNow takes the FIRST option, which in this case is black.

We could simply make sure all our selection controls have ‘none’ as an option, but in a large catalog with complex forms, there’s a good chance we’ll miss some of them. Besides, what we really need is a way to reset to our default values. Having ‘none’ as an option won’t solve that issue.

When it comes to Boolean fields, like our 4G field above, the clear values function switches options, reversing them!

On complex forms with lots of combinations, especially those that don’t fit on a single page, this can cause unpredictable behaviour that confuses users. The solution JDS developed is to introduce the ability to reset to default.

Rather than trying to guess which selection a user should have when there’s no option for ‘none,’ or simply flipping Boolean values, JDS recommends resetting the form to the values it had when it originally loaded. There’s no way to do this out-of-the-box, so we’re going to have to get a little creative. The actual code is quite small, but it has to be placed in a strategic location.

Step One: Build a Catalog Client Script Library

Behind the scenes, ServiceNow retains a large amount of information about the widgets on each portal page, including the value of various fields, so we’re going to tap into this to reset our form to the default values. To do this, we need to add a UI script that runs in the background whenever our catalog loads. We’ll use this as a way of adding a library of common functions that are accessible from any catalog item.

Note: this UI script is available as a download at the bottom of this page.

By default, ServiceNow does not allow access to components like the document object or the angular object from a catalog client script, so we’re going to sidestep this limitation by accessing these through our UI Script.

There’s a good reason ServiceNow doesn’t allow access to these components by default. The reason is that people tend to abuse these libraries and do all kinds of ridiculous things, like inserting values directly into forms instead of using the g_form libraries built into ServiceNow.

As the adage goes, “With great power comes great responsibility.” It's fine to use components like angular and the DOM, but they need to be used wisely. Don't abuse them.

As you’ll see, we’ll use the angular object to get our values but NOT to set our values, as that’s where the danger lies. Instead, we’ll use the tried and tested g_form library to reset the defaults on our catalog item.

Our code is concise (only seven lines) and is sketched below the list. The actions we're undertaking are:

  • Retrieve the field object from the widget’s angular scope
  • Loop through that object to pick up each of the initial form values
  • Store them in a session object so we can retrieve them later
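
The original seven-line script is available from the download noted above; the sketch below is a rough reconstruction of the approach. Exactly how the field object is exposed on the widget's angular scope varies between portal versions, so treat the selector and the _fields property as assumptions to verify on your instance.

// Catalog Client Script Library (UI Script) - illustrative sketch only
var catalogDefaults = (function() {

    function saveDefaults() {
        // Assumption: the catalog item widget exposes its field object on its scope
        var scope = angular.element(document.querySelector('#sc_cat_item')).scope();
        var fields = scope.data._fields; // assumed property name
        var defaults = {};
        for (var name in fields) {
            defaults[name] = fields[name].value;
        }
        sessionStorage.setItem('catalogItemDefaults', JSON.stringify(defaults));
    }

    function resetToDefaults(g_form, ignore) {
        var defaults = JSON.parse(sessionStorage.getItem('catalogItemDefaults') || '{}');
        var skip = (ignore || '').split(',');
        for (var name in defaults) {
            if (skip.indexOf(name) === -1) {
                g_form.setValue(name, defaults[name]); // set via g_form, never the DOM
            }
        }
    }

    return { saveDefaults: saveDefaults, resetToDefaults: resetToDefaults };
})();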

If you haven’t used session storage before, it provides a convenient way of passing information between components/pages. HTML is inherently state-less, so session storage allows you to overcome this limitation and share information freely. Normally, session storage is used while switching between pages, but in this case, we’re using it as a global variable shared by client scripts on the same page.

When it comes to resetting the form, it’s simply a case of looping through our session storage variable.

Notice how we pass the g_form through to our script along with ignore, a comma-separated list of any fields we don’t want reset.

Step Two: Allow the catalog widget to access this library

Now that we’ve built a library of common functions, the next step is to ensure it’s available to our catalog items. To do this, we need to add this as a JavaScript library to the service catalog widget. There are a few steps involved, but it’s straightforward, as described below.

You'll find this within ServiceNow under the SC Catalog Item widget in the Service Portal.

Add a new Dependency called Catalog Client Script Library at the bottom of this widget record.

This new Dependency needs a JS Include, which then refers to our UI Script.

All of these entries are simply a means of pointing to the UI Script we developed previously.

Step Three: Set and Get Defaults

Our UI Script will automatically get the defaults for each catalog item when it loads, so there’s no need for an onLoad script. All we need to do is identify when and where we want to reset our form to its default values.

Once we’ve identified the trigger for resetting the form to its default state, we can add a Catalog Client Script to that onChange event.
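
The onChange script then becomes little more than a call into our library (the trigger and the ignore list shown are illustrative):

function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading || newValue === '') {
        return;
    }
    // Reset everything except the fields we want to keep untouched
    catalogDefaults.resetToDefaults(g_form, 'employment_type,requested_for');
}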

By passing the g_form object through to our UI Script, we can use the default ServiceNow means of manipulating form elements rather than doing something risky that might backfire with an upgrade.

Also, notice we’ve passed a few fields we want to leave untouched.

In this way, we can develop a complex form catering for multiple scenarios, showing and hiding fields and resetting them at will.

Summary

ServiceNow’s g_form API provides a host of useful functions for managing catalog items, but its clear values function is a little overzealous. In light of that, we’ve developed a means of restoring the initial default values instead of blindly clearing values.

Going forward, you can add more and more common functions to this catalog script library and access them from your various catalog items.

Conclusion

To find out more about how JDS can help you with your ServiceNow needs, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case


Work hard, work smart.

Wei Liang Yau

Consultant

Length of Time at JDS

Since October 2015

Skills

LoadRunner, ServiceNow, AppDynamics, JMeter, C#, SharePoint, Visual Studio, LAMP (Linux, Apache, MySQL, PHP)

Workplace Solutions

Performance testing, ServiceNow ITOM

Let your problems be my problem.

Matthew Coyle

Technical Consultant

Length of Time at JDS

3 years

Skills

ServiceNow – Implementation Specialist, Administrator, Developer.

General IT – Systems Architect, Web Development, ITIL Certified, Support Specialist, SQL Developer

Workplace Solutions

I am mostly involved with complex ServiceNow implementations, including integration and/or orchestration with various business systems to support end-to-end business process implementation. Customers require ServiceNow implementors with solution architect experience to construct a design which caters to their unique environment and ensures their ServiceNow implementation can be maintained going forward whilst identifying any associated risks. Internally at JDS, I am often engaged for advice or to design an approach to solve customer problems in the most effective way. Helping colleagues and customers solve problems such as these is something I’m most passionate about in the workplace.


Is DevPerfOps a thing?

New technology terms are constantly being coined. One of our lead consultants answers the question: Is DevPerfOps a thing?


Hopefully it’s no surprise that traditional performance testing has struggled to adapt to the new paradigm of DevOps, or even Agile. The problems can be boiled down to a few key factors:

  1. Performance testing takes time—time to script, and time to execute. A typical performance test engagement is 4-6 weeks, during which time the rest of the project team has made significant progress. 
  2. Introducing performance testing to a CI/CD pipeline is really challenging. Tests typically run for an hour or more and can hold up the pipeline, and they are extremely sensitive to application changes as the scripts operate at the protocol level (typically HTTP requests). 
  3. Performance testing often requires a full-sized, production-like infrastructure environment. These aren’t cheap and are normally unavailable during early development, making “early performance testing” more of an aspirational idea, rather than something that is always practical.

All the software vendors will tell you that DevOps with performance testing is easy, but none of them have solved the above problems. For the most part, they have simply added a CI/CD hook without even attempting to challenge the concept of performance testing in DevOps at all.

A new world

So what would redefining the concept of performance testing look like? Allow me to introduce DevPerfOps. We would define the four main activities of a DevPerfOps engineer as:

  1. Tactical Load Tests
  2. Microbenchmarking
  3. APM and Code Reviews
  4. Container Workloads

Let’s start at the top.

Tactical Load Testing

This can be broadly defined as running performance tests within the limitations of a CI/CD pipeline. Is it perfect? No. Is it better than not doing anything or struggling with a full end-to-end test? Absolutely, yes!

Tactical Load Testing has the following objectives:

  • Leverage CI toolkits like Jenkins for execution and results collection.
  • Use lightweight toolsets for execution (JMeter, Taurus, Gatling, LoadRunner, etc. are all perfectly good choices).
  • Configure performance tests to run for short durations. 15-20 minutes is the ideal balance. This can be achieved by minimising the run logic, reducing think times, and focusing on only testing individual business processes.
  • Expect to run your performance tests on a scaled-down environment, or a single server node. Think about what this will do to your workload profile.

Microbenchmarking

These are mini unit or integration tests designed to ensure that a component operates within a performance benchmark and that deviations are detected.

Microbenchmarking can target things like core application code, SQL queries, and Web Service integrations. There is an enormous amount of potential scope here; for example, you can write a microbenchmark test for a login, a report execution, or a data transformation. Anything goes if it makes sense.

Most importantly, microbenchmarking is a great opportunity to test early in the project lifecycle and provide early feedback.
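
There's no single toolset for microbenchmarking; as a flavour of the idea, here's a minimal Node.js sketch that fails a CI build when a component drifts past its benchmark. The component under test (a sample data transformation) and the thresholds are hypothetical.

// microbenchmark.js - run with: node microbenchmark.js
const { performance } = require('perf_hooks');

const sampleOrders = Array.from({ length: 100 }, (_, i) => ({ id: i, total: i * 9.99 }));

function benchmark(name, fn, iterations, thresholdMs) {
    fn(); // warm up so JIT compilation doesn't skew the first timing
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    const avgMs = (performance.now() - start) / iterations;
    const pass = avgMs <= thresholdMs;
    console.log(name + ': ' + avgMs.toFixed(3) + ' ms avg (threshold ' + thresholdMs + ' ms) ' + (pass ? 'PASS' : 'FAIL'));
    if (!pass) process.exitCode = 1; // a non-zero exit fails the CI stage
}

// Hypothetical component under test: a data transformation
benchmark('transformOrders', function() { JSON.parse(JSON.stringify(sampleOrders)); }, 1000, 5);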

APM and Code Reviews

In past years, having access to a good APM tool or profiler, or even the source code, was a luxury. Not anymore: these tools are everywhere, and while a fully featured tool like AppDynamics or New Relic is beneficial for developers and operations, a lot can be achieved with low-cost tools like YourKit Profiler or VisualVM.

APM tools and profilers allow slow execution paths to be identified, and memory usage and CPU utilisation to be measured. Resource-intensive workloads can be easily identified and performance baked into the application.

Container Workloads

Containers will one day rule the world. If you don’t know much about containers, you need to learn about them fast.

In terms of performance engineering, each container handles a small unit of workload and container orchestration engines like Kubernetes will auto-scale that container horizontally across the entire data centre. It’s not uncommon for applications to run on hundreds or thousands of containers.

What’s important here is that the smallest unit of workload is tiny, and scaling is something that a well-designed application will get for free. This allows your performance tests to scale accordingly, and it allows the whole notion of “what is application infrastructure” to be challenged. In the container world, an application is a service definition… and the rest is handled for you.

So, is DevPerfOps a thing? We think so, and we think it will only continue to grow and expand from here. It’s time that performance testing meets the needs of 2018 IT teams, and JDS can help. If you have any questions or want to discuss more, please get in touch with our performance engineering team by emailing contactus@jds.net.au. If you’re looking for a quick, simple way of getting actionable insights into the performance health of your website or application, check out our current One Second Faster promotion.

Our team on the case

Every day, do something that people want.

Nick Wilton

Consultant

Length of Time at JDS

8.5 years

Skills

Primary: Software security, Performance optimisation

Secondary: DevOps, Software development, Technical sales

Workplace Solutions

I help clients to solve problems like:
  • Is my application secure?
  • How do I manage threats?
  • Will my application perform when I need it to?

Workplace Passion

It’s all about managing risk whilst driving business confidence in technology and software solutions. That’s what I’m passionate about.
