Category: Tech Tips

Tech Tip: Adding a pop-up extension to a Splunk Dashboard

Contributed by Jade Bujeya

My manager said: “I was thinking it might be good to have a pop-up that allows people to leave feedback on our dashboards.”

Like any true-born nerd, I said: “Pick me! Pick me!”

Google Forms are for lesser mortals.

Thus began my mission to add a pop-up extension to a Splunk dashboard. While I’m sure there are other ways to do this, my approach was to use a JavaScript extension which would provide the additional objects and functions I needed to make my pop-up interactive.

According to Codecademy: “JavaScript is a programming language that adds dynamic functionality and complex features like interactivity and animation to web pages. Together with HTML and CSS, JavaScript forms the foundation of web development.”

Now web design is a broad and complex topic, worthy of its own blog post, or indeed its own blog. Essentially, HTML creates objects on the screen that you can see, CSS controls how these objects look, and JavaScript controls how they behave.

Theoretically, you can use HTML in the Splunk dashboard source code to add buttons and pop-ups; however, I preferred the extensions method due to the potential for greater customisability. This is done by adding script="my_code_file.js" inside the <form> or <dashboard> tag, as seen in the example below:

For extra credit, you can also add stylesheet="my_style_sheet.css" here to control the design of your JavaScript objects.
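Here's a minimal sketch of what those attributes look like (using the placeholder file names from above; Splunk expects to find both files in your app's appserver/static directory):

<form script="my_code_file.js" stylesheet="my_style_sheet.css">
    <label>My Dashboard</label>
    <!-- rows, panels and inputs as usual -->
</form>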

After a few internet searches, and with the help of our expansive and enthusiastic community of Splunkers, I quickly found an example I could use as a starting point, which created a ‘hidden’ object that only became visible when a button was clicked. Alternatively, this same functionality could be achieved by using the <change> and <condition> tags offered within the Dashboard Editor interface.

The ‘hidden’ object was a JavaScript constant that contained a long string of HTML describing the popup I wanted. This HTML was appended to the dashboard code and then made visible when needed by one of several JavaScript functions, defined outside the main constant, that dictated the popup's behaviour.

Amongst other things, such as confirming that field contents were valid, these functions would run a search using the ‘outputlookup’ command. In this way, user responses could be added to a predefined lookup, enabling them to be displayed and utilised later.

In later phases of the project, I also built in a feature that would attempt to prevent a user from submitting feedback under someone else’s name. This works by creating a dummy search, which automatically runs when the user opens the popup. While the search produces no results, it does create a record in the _audit index. This record contains the Splunk username of the user who ran the dummy search, which is then added to the new line being created in the lookup as the ‘User’ who is submitting the feedback.

It goes without saying that this is not a foolproof approach, and is one which would likely become unreliable if many users were trying to leave feedback at the same time. Inevitably, there would be a small gap between the creation of the record in the _audit index and its retrieval, which might result in the wrong record from _audit being used to assign the user. However, for the relatively limited number of users in my case, it worked superbly.

Benefits of my approach:

  • I could see and edit the entire pop-up as one block of text using simple and well-supported web design languages.
  • I could easily integrate other JavaScript functions to control different aspects of the form’s behaviour and could edit all of it in one single document.

Drawbacks of my approach:

  • This method required knowledge and understanding of both HTML and JavaScript.
  • There were considerable complexities involved in passing HTML into a JavaScript constant, which would then be appended as an HTML addendum at the bottom of a Splunk dashboard.
  • Testing was a bit arduous: to ensure Splunk recognises the changes you’ve made to a file, it is necessary to clear both the browser and server caches. While this can certainly be done, I found it faster to add an ever-incrementing version number to the file name (e.g. my_code_file_v2.js, then my_code_file_v3.js), so that Splunk would keep reading it as a new file.
  • There were some characters (e.g. quotation marks) that I had trouble passing into strings, as they would be misinterpreted while being passed from one language to another.

The resulting code used to add a pop-up extension to a Splunk dashboard is displayed below.

Happy Scripting!

require(['splunkjs/mvc',
'splunkjs/mvc/searchmanager',
'splunkjs/mvc/simplexml/ready!'],
// required packages
function(mvc, SearchManager){
    var updateCSV = new SearchManager({
        id: "updateCSV",
        autostart: false,
        cache: false,
        search: "| makeresults | eval feedback=\"$setMsg$\" | eval dashboard=\"$setDsh$\" | eval name=\"$setName$\" | eval score=\"$setScore$\" | append [| search index=_audit | regex search=\"index=not_an_index sourcetype=not_a_sourcetype\" | fields user] | stats latest(_time) as _time, latest(dashboard) as dashboard, latest(feedback) as feedback, latest(name) as name, latest(score) as score, latest(user) as user | inputlookup append=true feedback.csv | outputlookup feedback.csv"
    }, {tokens: true});
    // Creates a search that builds an event from the user-entered tokens and appends it to a lookup file.
    // The append subsearch goes into the _audit index and retrieves the 'user' from the dummy search
    // that is automatically run with the user's context when the popup is opened (see markerSearch below).

    var markerSearch = new SearchManager({
        id: "markerSearch",
        autostart: false,
        cache: false,
        search: "| search index=not_an_index sourcetype=not_a_sourcetype",
    }, {tokens: false});
    // This is the dummy search that creates a record in _audit.

let params = (new URL(document.location)).searchParams;
const contents = document.createElement('contents');
// creates a container element to hold the popup HTML defined below

contents.innerHTML = `
<div class="chat-popup" id="myForm">
    <form class="form-container">
        <h1>Feedback</h1>
        <div style="display: table">
            <div style="display: table-cell; padding-left: 10px">
            <label for="user">
                <b>Name</b>
            </label>
            <input type="text" name="name" id="name">
            </input>
            </div>
        </div>
        <div style="display: table">
            <div style="display: table-cell; padding-left: 10px">
            <label for="dshbrd">
                <b>Good 10 - 1 Bad</b>
            </label>
            <input type="number" name="score" id="score" min="1" max="10" required>
            </input>
            </div>
            <div style="display: table-cell; padding-left: 10px">
            <label for="dshbrd">
                <b>Select Dashboard:</b>
            </label>
            <select id="dshbrd">
                <option>DashboardName1</option>
                <option>DashboardName2</option>
                <option>DashboardName3</option>
                <option>DashboardName4</option>
                <option>DashboardName5</option>
                <option>DashboardName6</option>
                <option>DashboardName7</option>
                <option>DashboardName8</option>
                <option>DashboardName9</option>
                <option>DashboardName10</option>
                <option>DashboardName11</option>
            </select>
            </div>
        </div>
        <label for="msg">
            <b>Message</b>
        </label>
        <textarea placeholder="Type Feedback..." name="msg" id="msgFeedback" onchange="msgFeedback.value = msgFeedback.value.replace(/&quot;/g, '')" required></textarea>
        <span id="validationFeebback"></span>
        <button id="sbmtFeedback" type="button" class="btn">Submit</button>
        <button id="cnclFeebackPopUP" type="button" class="btn cancel">Close</button>
    </form>
</div>
`;
$(document).find('.dashboard-body').append('<button id="feedback" class="btn btn-primary">Provide Feedback</button>');
// creates the feedback button and adds it to the screen. As this is a btn type item, the default behaviour is 'show'.
$(document).find('.dashboard-body').append(contents);
// adds contents to the screen. As 'contents' is a chat-popup type item, the default behaviour is 'hidden' (see .css file).

$("#feedback").on("click", function (){
    $(document).find('.chat-popup').show();
    markerSearch.startSearch();
    // runs the above marker (dummy) search 
});

$("#cnclFeebackPopUP").on("click", function (){
    $(document).find('.chat-popup').hide();
    window.location.reload(true);
});
// hides the popup when the user clicks cancel

$("#sbmtFeedback").on("click", function (){
    var msg=$('#msgFeedback').val();
    var dsh=$('#dshbrd').val();
    var name=$('#name').val();
    var score=$('#score').val();
    if (msg.trim().length <= 10){
        // rejects feedback of 10 characters or fewer (ignoring leading/trailing whitespace)
        $(document).find("#validationFeedback").text("Invalid Feedback. Must be more than 10 characters").css({'color':'red'});
    }
    else{
        var tokens = splunkjs.mvc.Components.get("default");
            tokens.set("setMsg", msg);
            tokens.set("setDsh", dsh);
            tokens.set("setName", name);
            tokens.set("setScore", score)
        markerSearch.startSearch();
        // runs the above marker search (again, just to be safe)
        updateCSV.startSearch();
        // runs the search to add the new response to the feedback.csv file
        $(document).find("#validationFeebback").text("Your feedback has been submitted..!").css({'color':'green'});
    }
}); 
}
);

Integrating Splunk ITSI and Observability Cloud for Unified Insights

The Splunk Observability Cloud suite (O11y) delivers powerful real-time infrastructure and application monitoring capabilities, while Splunk IT Service Intelligence (ITSI) enables holistic, fully customisable service modelling and impact analysis. When these two technologies are integrated, they bridge the gap between tracking infrastructure performance and understanding the overall health of your business services.

Making Splunk Core Aware of O11y

A fundamental aspect of integrating ITSI and O11y is making observability metrics available to Splunk Core, and in turn, to Splunk ITSI and IT Essentials Work. For this you’ll need the Splunk Infrastructure Monitoring Add-on, a Splunk-built add-on available on Splunkbase.

While the name points to the SIM portion of the O11y suite, the Splunk Infrastructure Monitoring Add-on facilitates access to all O11y metrics, including APM, RUM and Synthetic Monitoring metrics.
NOTE: It is only O11y metric data that can be made available to Splunk Core – not the traces and spans from which these metric results and metadata originate.

SIM Add-on Integration Options

The add-on offers two integration options:
1. Enable Splunk Core to Query O11y Metric Stores
The Splunk Infrastructure Monitoring Add-on introduces a new SPL command called “sim”, which allows you to specify a SignalFlow program for querying observability metrics from within an SPL search. The SignalFlow program runs on the remote O11y instance, and the returned metrics can then be processed by the remainder of the SPL search.
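As a hedged sketch (the SignalFlow program is illustrative, and the field names returned by “sim” vary with the query, so treat the stats clause as an assumption and check the add-on documentation for your version):

| sim flow query="data('cpu.utilization').mean(by=['host']).publish()"
| stats avg(value) as avg_cpu by host
| where avg_cpu > 80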

2. Ingesting O11y Metrics into Splunk Indexes
The add-on also contains modular inputs which can be used to index O11y metrics in Splunk Core indexes. You are able to configure these modular inputs by specifying a SignalFlow program which will be run periodically to query the desired O11y metric summaries and index the results in Splunk Core.

NOTE: Ensure that the “stash” source type is always used for the data collected by these modular inputs (as in their default state) so that the collected metrics will not count toward Splunk licence charges.
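Once ingested, these metric summaries can be queried like any other Splunk metrics index. A hedged sketch (the metric and dimension names are illustrative, and the index is the add-on’s default “sim_metrics” target described below):

| mstats avg("cpu.utilization") as avg_cpu WHERE index=sim_metrics span=5m BY host
| where avg_cpu > 80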

Where to Install the SIM Add-on

Depending on which integration options are required, the add-on will need to be installed in at least one of these Splunk Core nodes:

Search Heads:
Required on any Search Heads where the “sim” command will be used in SPL searches to query O11y metrics.  In particular, this add-on will be required on Splunk ITSI instances utilising the “sim” command in KPI searches.

Indexers:
Required on any Indexer node/cluster where target metric store indexes are created for ingesting O11y metrics via the SIM add-on modular inputs. The add-on creates an index called "sim_metrics" which should be used as the default target for O11y metrics, as it will not count toward Splunk licence charges (and remember to specify the "stash" sourcetype in the modular inputs as noted above).

Forwarders:
Required on any Heavy Forwarder node which will be running the SIM add-on modular inputs to query O11y metrics.

Which Integration Option Is Best?

While it is not possible to give a “one size fits all” answer, consider the following:

The “sim” command is lightning-fast
This is because the metric store of O11y is lightning-fast. By design, the O11y platform is capable of storing and retrieving massive volumes of highly granular data in real time. So performance is rarely a consideration when writing SPL searches using the “sim” command.

The Modular Inputs Duplicate Predetermined Metric Summaries
With the modular inputs of the add-on, you are able to decide ahead of time what O11y metric data you’d like to summarise and index in Splunk Core and at what intervals. While this will only be a subset of the original data that is being indexed, it is still duplication which might not be necessary in a given use case. More to the point, searching the summarised data indexed in Splunk Core lacks the flexibility of using “sim” searches to query metrics directly from O11y, which can be changed on the fly without ever needing to update any modular inputs or re-ingest any data.

Querying O11y directly with the “sim” command is often the more desirable option. However, in some scenarios it may be necessary to index O11y metrics in Splunk Core, e.g. if security policies prevent certain Splunk Core users from having direct access to O11y.
TIP: Use the O11y plot editor to create and test SignalFlow programs, which can then be copied into “sim” commands in Splunk Core searches and ITSI KPIs.

Enriching ITSI with O11y Knowledge

The sky’s the limit when modelling systems in ITSI, and for large or complex service models you’ll want to leverage templates and pre-built components instead of re-inventing the wheel.
Content Packs are the mechanism in ITSI for bundling pre-built components, and for O11y content in particular there is the Content Pack for Splunk Observability Cloud.

The Content Pack bundles a set of valuable ITSI knowledge objects which can be leveraged for managing and visualising O11y data, including:
> Services and KPIs
> Service Templates and KPI Base Searches
> Glass Tables and a Service Analyser
> Entity Types and Entity Import Jobs

As with any ITSI content pack, many of the above components may not be directly usable for a given use case. They may instead serve as examples or initial templates for the custom content you will be creating.
At the very least, the below entity import jobs from the content pack are invaluable for effortlessly bringing in all O11y-discovered objects to the ITSI entity database:
> ITSI Import Objects – Get_OS_Hosts
> ITSI Import Objects – Get_RUM_*
> ITSI Import Objects – Get_SIM_AWS_*
> ITSI Import Objects – Get_SIM_Azure_*
> ITSI Import Objects – Get_SIM_GCP_*
> ITSI Import Objects – SSM_get_entities_*
> ITSI Import Objects – Splunk-APM Application Entity Search

Whatever the situation, it is in your best interest to install the Content Pack for Splunk Observability Cloud in ITSI when integrating with the O11y suite.

Installing the O11y Content Pack

The latest O11y Content Pack requires the following two add-ons to be installed in the Splunk Core environment first:
> Splunk Infrastructure Monitoring Add-on – The Splunk-built add-on described earlier in this document
> Splunk Synthetic Monitoring Add-on – A SplunkWorks-built add-on (not formally released by Splunk)

Also, if the Content Pack for Splunk Infrastructure Monitoring was previously installed in ITSI, there are additional migration steps to perform before installing the O11y content pack:
> Migrate from the Content Pack for Splunk Infrastructure Monitoring to the Content Pack for Splunk Observability Cloud topic

After the above items are addressed, the method for installing the Content Pack in ITSI is the same as with any other content pack, i.e. via Configuration > Data Integrations > Content Library.
TIP: When installing the content pack, consider using the option to add a prefix to the names of imported content such as services, service templates and KPI base searches. That way they can easily be identified as examples to copy from. This is less important for items like the entity import jobs (where a prefix would simply mean maintaining separate imports for differently named objects).

Unified Alerting with O11y and ITSI

In an environment armed with ITSI, an ideal strategy is to consolidate alert management, with ITSI as the central point for processing alerts originating from any Splunk sources (such as O11y) as well as from external systems. ITSI’s advanced analytics can be leveraged to implement intelligent alert logic, and the alert actions can interface with Splunk On-Call for escalation management.

The Content Pack for ITSI Monitoring and Alerting is required in ITSI for integrating O11y and ITSI alerting. It comes with correlation searches and aggregation policies that are utilised in the integration procedure (as noted in the High Level Implementation Plan further below).
Installing this Content Pack requires additional version-dependent actions, as well as an update to the “Itsi_kpi_attributes” lookup. Please follow the installation instructions here:
Installing and Configuring the Content Pack for ITSI Monitoring and Alerting

Universal Alerting

Splunk have defined the Universal Alerting Field Normalisation Standard in ITSI, for which pre-built correlation searches are provided in the Monitoring and Alerting Content Pack. Normalising alerts to adhere to this schema ensures that alerts from any source can be processed in a common fashion using the pre-built content.
The schema details many fields, most of which are optional; the following four are mandatory for any alert to comply:
> src: the target of the alert, e.g. host, device, service etc.
> signature: a string which uniquely identifies the type of alert
> vendor_severity: the original vendor-specific severity/health/status string
> severity_id: normalised severity
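As a hedged sketch of what that normalisation might look like in SPL (the index, source field names and severity mapping below are assumptions for illustration, not the content pack’s own search):

index=o11y_alerts sourcetype="o11y:alerts"
| eval src = coalesce(host, 'dimensions.host', "unknown")
| eval signature = detector . ":" . rule
| eval vendor_severity = severity
| eval severity_id = case(severity=="Critical", 6, severity=="Major", 5, severity=="Minor", 4, severity=="Warning", 3, true(), 2)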

High Level Implementation Plan

  1. Configure O11y to send alerts to Splunk Enterprise or Cloud Platform:
    This requires creating an alert index in Splunk Core and an HTTP Event Collector (HEC) endpoint. Then, in O11y, you can configure a new “Webhook” integration to send alerts to the HEC endpoint.
  2. Normalise O11y alerts to conform to the ITSI Universal Alerting schema
  3. Configure “Universal Correlation Search – o11y” to create notable events:
    This correlation search is shipped with the ITSI Monitoring and Alerting content pack
  4. Configure the “Episodes by Application/SRC o11y” notable event aggregation policy (NEAP):
    Also shipped with the ITSI Monitoring and Alerting content pack
  5. Configure ITSI correlation searches for monitoring aggregated episodes:
    The below 2 searches, also from the content pack:
    “Episode Monitoring – Set Episode to Highest Alarm Severity o11y”
    “Episode Monitoring – Trigger OnCall Incident”
  6. Integrate Splunk On-Call with ITSI:
    This requires installation of the Splunk On-Call (VictorOps) add-on in Splunk Core, and configuring it with the details of your Splunk On-Call account
  7. Configure action rules in the ITSI NEAP from step 4 for Splunk On-Call Integration
  8. Configure Splunk On-Call with appropriate escalation policies

Full implementation details are documented on the Splunk Lantern site: Managing the lifecycle of an alert from detection to remediation

Next Steps

Now you have the playbook to integrate the Splunk Observability Cloud suite with Splunk ITSI. 
JDS excels in delivering tailored solutions for our customers where we integrate their O11y suite with Splunk ITSI, optimising alert management and reducing Mean Time to Resolution (MTTR).
Reach out if you would like help or advice in improving your observability and troubleshooting efficiency with Splunk Observability Cloud and Splunk ITSI.


Read a recent JDS Customer Success Story here.

Part Three: ServiceNow Hyperautomation – Process Mining

ServiceNow, a leading platform for process mining and automation, recently launched its Utah release with several new features and enhancements, including multidimensional mining.

Part Three of this blog series will explore how you can analyse and improve your IT service management (ITSM) workflows using process mining and automation.

What is Process Mining?

Process mining is a technique that applies data science to discover, validate and improve workflows based on event logs from information systems. It uses artificial intelligence and machine learning to automatically extract process models from your ServiceNow data and visualise them in an interactive dashboard. You can then use the dashboard to identify bottlenecks, inefficiencies, deviations and best practices in your ITSM processes, and take action to optimise them with automation solutions.

How can ServiceNow Process Mining help you to enhance your organisation’s incident management process?

ServiceNow Process Mining can help your organisation enhance your incident management process by providing you with valuable insights into how your process is actually performed, how it deviates from the expected or desired behaviour, and how it can be improved.

Process Overview shows you the key metrics and statistics of your process, such as the number of cases, the average duration, the throughput time and the compliance rate. You can also filter and segment your data by various criteria, such as category, urgency, assignment group or resolution code.

Process Map provides your organisation with a graphical representation of your process model, which is automatically generated using your data. You can see the flow of cases from start to end, the frequency and duration of each activity, and the variants and deviations of your process. You can also drill down into specific cases or activities to see more details.

Process Analysis presents you the results of various analyses that are performed on your organisation’s process model, such as bottleneck analysis, root cause analysis, conformance analysis and best practice analysis. You can use these to identify and understand the causes and effects of problems and opportunities in your process.

Using these three capabilities, you can gain a comprehensive understanding of your organisation’s incident management process and discover ways to optimise it through elimination of unnecessary steps or automation of repetitive tasks.

Final Thought

As an Elite ServiceNow Partner, JDS can help you navigate the complex and fast-changing world of hyper-automation. We have the expertise, experience and passion to support your organisation’s digital transformation journey. Whether you need to automate your business processes, optimise your IT operations, or leverage artificial intelligence and machine learning, we can guide you every step of the way.

What is the difference between a Vulnerability Scan and a Penetration Test?

Vulnerability Scanning and Penetration Testing are terms that are often interchanged and even confused for the same activity, and while they are similar, they are not the same. So what are the key differences, and how and when should they be carried out?

What is a Vulnerability Scan?

A Vulnerability Scan is performed within your network, systems, services, or applications in the security perimeter concerned. Generally speaking, a Vulnerability Scan is fully automated, providing detailed reports on vulnerabilities found such as out-of-date frameworks and dependencies, publicly known exploits, loopholes, and common configuration issues that could lead to further vulnerabilities.

Typically, these scans are run with tools like Tenable Nessus, Rapid7 Nexpose, and Qualys, among many others, and may be tailored to your requirements.

A Vulnerability Scan is limited to providing a report on raw data and is, for the most part, unable to paint a full picture of where greater issues can arise.

One major difference between using a scanning tool and a human tester is understanding where the pieces of the puzzle can come together, such as chaining low-level vulnerabilities into a high-level critical exploit which may need far more urgent and immediate action than a scan report may suggest.

While scan reports are often very detailed, including information like CVE details, CVSS scores, and overviews of each vulnerability, even the most finely tuned scans often present false positives due to the lack of a discerning human eye and experience. Depending on the level of the report, the provided data may still require significant human attention to filter through and verify further, as a scan does not attempt to actively exploit its findings.

What is a Penetration Test?

As suggested by its name, a Penetration Test is a test that attempts to penetrate a system or service from outside of the security perimeter.

A Penetration Test is handled by a human penetration tester and aided by the many tools and techniques available to them, to identify and further exploit any vulnerabilities found on the subject of the test.

Having a human rather than an automated tool may be slower and more expensive on a per-engagement basis, but it can provide more accurate results with fewer false positives, along with proof-of-concept exploitation, experience-driven vulnerability overviews, exploit pivoting, and quality remediation advice with the context of your system or service in mind.

Another advantage of a Penetration Test over a Vulnerability Scan is the ability to research on the fly and find unknown exploits, including zero-days, as well as vulnerabilities that may not have been added to the vulnerability scanner’s library yet.

What and When?

While it may seem that a Penetration Test is the better overall service due to its accuracy, there is a time and a place for both services, and they can even work hand in hand.

Where security is concerned, both services are valid, but neither provides the complete picture when used alone.

Due to the automated nature of a vulnerability scanning tool, it can be set to scan at specified intervals to report changes between two or more points in time, providing a real-time surface view of your systems, network, or other services, and generating a human-friendly report, all whilst running hands-off in the background.

A penetration tester can interpret reports provided by a vulnerability scan and this can supplement a penetration test itself, in many cases helping the human tester speed up the overall engagement by targeting identified points of vulnerability rather than having to manually find them.

Both a Vulnerability Scan and a Penetration Test have their strengths and weaknesses and typically speaking, one’s strength covers the weakness of the other.

It isn’t uncommon for an organisation to have a vulnerability scanner conducting day-to-day scans of systems and networks, and to periodically have a human penetration tester validate and carry out further tests based on the outputs provided by the scanner. When used together in this way, you can achieve the highest level of security assurance for your organisation.

Manual Security Testing vs Automated Scanning?

The art of penetration testing has evolved over the years. What began with testing arrows on armour has now become testing tools and techniques on systems and applications. Without a doubt, we still mostly use manually driven techniques; however, these can be slow, cumbersome, and subject to the human element, which can result in faults and missed opportunities.

Over the last decade or so, tools to aid and automate security testing have rapidly entered the fray and are increasingly taking the burden off some of the more time-intensive tasks in the cyber security sphere, such as scanning, brute-forcing, or even full-fledged attacks commanded with single line commands. Tools such as BurpSuite, Nmap, SQLMap, Metasploit, and Nessus, among many others, have certainly sped up the discovery and exploitation of vulnerabilities, allowing more in-depth testing within often limited test windows.

Looking at the bounty of tools available to us, you may start to wonder why manual testing is required anymore. Here is a quick rundown on some of the benefits and disadvantages of both, and how using both on engagements, big and small, can be greatly beneficial.

Manual Testing – The Old Reliable

Manual testing, simply put, is the act of using little to no automation for tasks. A great example of this would be the manual exploration of a website while data is being captured by BurpSuite, with the tester manually analysing the headers and requests as a separate task later, rather than immediately after every click.

Manual testing also extends to the exploitation stage of an engagement, where the tester may need to utilise very specific commands or customised scripts to achieve the desired result.

While manual testing can be very meticulous and provide a detailed, deep understanding of the subject of the test, it can be very time-consuming, possibly taking days longer than an automation-driven test. Some vulnerabilities simply can’t be tested for entirely automatically, or are very prone to false positives if automated, and therefore require further investigation, possibly using more time than if the testing had been done manually from the beginning.

Some examples of vulnerabilities that require manual testing to correctly identify and safely exploit are:

  • Social Engineering
  • Access Control Violations
  • Password Spraying and Credential Stuffing Attacks
  • SQL Injection
  • Cross-Site Request Forgery (CSRF)

Another advantage of manual testing over automation is the ability to find, and use, newly discovered or as-yet-unpublished zero-day exploits, which can take a significant amount of time to be implemented into commonly used tools.

Automated Testing – The Shiny New Tools

Automated penetration testing is exactly what it says on the package: the process of utilising automation tools, such as applications, platforms, and scripts, rather than the expertise and efforts of a human tester. It can be significantly cheaper and far more time-efficient (which also adds to cost efficiency) than manual testing by one or more human ethical hackers.

Automation tools can perform actions such as content discovery, vulnerability analysis, and brute forcing in a matter of minutes or seconds, where it could take a manual tester hours or days to get the same results. Automated tools, particularly scanners, can be left to run in the background while manual testing is performed, or set to scan periodically for issues, such as Tenable Nessus keeping an eye on things and providing reports at set intervals or upon request.

When it comes to regular penetration testing, companies factor in cost, and it can be rather expensive to hire human penetration testers for regular engagements or in-house roles, so it can be more cost-effective to have automated tools do the day-to-day work, then infrequently have a human run further tests and analysis.

There is no doubt that automation is the way of the future, and will continuously improve; however, there are many tasks that are best suited to manual testing, either due to the simple inability to automate or due to the hassle of false positives (and false negatives).

Another advantage of automation is consistency, in both its actions and results, and in the reporting at the end. As the scans and processes are mostly, if not entirely, hands-off, there is less room for human error or deviation, and they don’t require a highly trained expert to perform the required tasks, which ultimately can save money for the organisation. Automation, however, is often unable to fully assess a threat and how it can impact you in the context of your application, platform, infrastructure, network, or organisation as a whole, which is something a sufficiently trained human penetration tester can do, adapting their approach accordingly. A vulnerability that may be picked up and reported as a low finding by an automation tool could have much more critical consequences when chained with other low, or even informational, vulnerabilities.

So, what’s better? A manual or automated approach?

Simply put, both manual and automated testing methods have their place, and should always be used in penetration tests of all kinds. The level of detail and effectiveness provided by manual testing is unsurpassed, as well as contextual reporting and risk analysis that simply cannot be provided by even the best automation tools on the market. However, where speed and consistency of tasks are concerned, automation wins without question.

Although both methods can provide you with a satisfactory outcome in terms of vulnerability identification, what is best for your organisation will come down to the level of detail and quality required, the frequency of the tests, and the cost factor.

Ultimately, a combination of both manual and automated testing is the best way to get the highest quality outcome of a penetration test, with the most efficient use of time and money, to bring you a greater assurance of security and peace of mind that your assets are secure from malicious attack.

Five Reasons Why Your Organisation Should Be Penetration Testing

Modern businesses require an advanced approach to security and due diligence.  Having anti-virus software and a firewall is no longer a sufficient strategy to prevent highly sophisticated security attacks, which can result in irreversible damage to your organisation.

A professional penetration testing service is the best way to identify the strengths, weaknesses and holes in your defences.  Read on to uncover the five best reasons why your organisation needs penetration testing.

1. Protect Your Organisation From Cyber Attacks

Reports of cyber crime within Australia have increased nearly 15% each year since 2019, with the average reported financial loss per successful cybercrime incident being $50,673. Regardless of your organisational size or sector, cyber criminals view every business as a potentially exploitable prospect. The internet is continuously being scanned in search of vulnerable systems.  Carrying out penetration tests will enable you to identify vulnerabilities that are most likely to be exploited, determine what the potential impact could be, and enable you to implement measures to mitigate or eliminate the threat.     

2. Identify and Prioritise Vulnerabilities

Put simply, a pen test will uncover potential threats and vulnerabilities that could damage your organisation’s IT assets.  The resulting report prioritises these vulnerabilities from low to critical, and further categorises them by likelihood and impact.  This gives your team a clear picture of your security posture, and the opportunity to mitigate the greatest threats first before moving on to less risky ones.

3. Stay Compliant With Security Standards and Regulations

Regular penetration testing can help you to comply with security standards and regulations such as ISO 27001 and PCI.  These standards require company managers and system owners to conduct regular penetration tests and security audits to demonstrate ongoing due diligence and maintenance of required security controls. Not only does pen testing identify potential vulnerabilities, ensuring that you are protecting your customers and assets, but it also helps to avoid costly fines and fees connected with non-compliance. 

4. Reduce Financial Losses and Downtime

Recent studies have reported that the average financial impact of a major data breach in Australia is around $3.7 million per incident.  This takes into account expenditure on customer data protection programs, regulatory fines, and loss of revenue due to interrupted business operations.  System downtime is incredibly costly: the longer your system is down, the more exorbitant the cost.  A penetration test is a proactive way to highlight and fix your system’s most critical vulnerabilities, and to ensure your team is ready to act if your systems were to go down unexpectedly.

5. Protect Your Reputation and Company Loyalty

Consumers are extremely quick to lose trust in companies and brands, and all it takes is one security breach or data leak to tarnish your reputation.  Customers and partners of your organisation want to know that their private data is safe in your hands, so it is in your best interest to be aware of any vulnerabilities which may put the company’s reputation and reliability in jeopardy.  

This is just a handful of reasons why organisations should carry out regular penetration tests, but there are many more.  Connect with JDS to discuss your pen testing needs and get a full scope of work customised to your requirements.

Splunk

As a Splunk Elite partner, JDS has a dedicated Splunk practice with expertise spanning ITOps, AIOps, and Security. JDS has proven to be a trusted advisor and a safe pair of hands to architect, implement and manage Splunk for many organisations across a wide range of use cases.


IT Service Intelligence / Business Service Insights

Customisable business dashboards, mapped to key performance indicators, can provide invaluable real-time visibility into the health of your digital services. Our skilled JDS team has extensive experience in implementing Splunk’s unique platform to help organisations ensure uninterrupted access to critical services.


IT Operations, Analytics and AIOps

JDS can transform your entire IT Ops approach with a suite of tools that put AI and machine learning at their core, allowing you to predict and prevent, instead of triage and react.  We enable a genuine understanding of the complete environment to get ahead of issues before they occur.


End to End Application Visibility

Gaining End-to-End Visibility means unifying business, application and infrastructure health for full-stack observability of critical apps and services.  With JDS and Splunk, gain the ability to visualise the health of your services at a glance, and make smarter, data-driven decisions.

Enterprise Security and Analytics

Splunk is renowned for its Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) capabilities and JDS has the experience to establish and build out these capabilities along with the integrations to related systems.


Call Centre Visibility

Having insights into how your call centre is responding to customers can improve efficiency, effectiveness, quality of service and the overall customer experience. Using Splunk’s Contact Center Analytics, JDS can unlock this vital visibility, whether you’re working with centralized call centres or remote agents.

Splunk Cloud Migrations

JDS will help you to minimise downtime whilst maximising your architecture when migrating to Splunk Cloud. Our experts lead a collaborative engagement to make the transition as seamless as possible, while maintaining full visibility into your infrastructure before, during and after migration.

Success Stories

Transforming operations at Transurban with Splunk ITSI >

Helping one of Australia’s largest banking institutions migrate seamlessly to Splunk Cloud >

Unifying Insights with a Splunk ITSI and Observability Cloud Integration >

Mastering Modal Dialog Boxes

Modal Dialog boxes are a great way to enhance the user experience and provide more detailed feedback to users than a default browser popup.

Recently, we had a challenging set of requirements from a customer. They needed a bunch of mandatory fields (easy enough with UI Policies) but there was a twist—they needed:

  • Fields that were mandatory ONLY if the person wanted to send the record for approval
  • Four fields where ONLY one of the four is mandatory (i.e. put data in any one field and all four are satisfied)
  • There had to be at least one record in a related list

Modal dialogs are awesome!

We developed a simple approval dialog box, launched from a UI Action, to cater to these requirements. It’s a good example of how complex requirements can be distilled into a simple solution, as the sketch below shows.
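Here’s a hedged sketch of the client-side half (the field names, messages and UI Action name are hypothetical; the ‘at least one record in a related list’ rule is best enforced server-side in the UI Action’s condition or script):

// Client-side UI Action script (sketch)
function sendForApproval() {
    // Rule: of these four fields, at least one must contain data
    var oneOfFour = ['u_field_a', 'u_field_b', 'u_field_c', 'u_field_d']; // hypothetical field names
    var satisfied = oneOfFour.some(function (f) {
        return g_form.getValue(f) !== '';
    });
    if (!satisfied) {
        g_form.addErrorMessage('At least one of the four fields must be completed.');
        return;
    }

    // Confirm via a modal dialog rather than a default browser popup
    var dialog = new GlideModal('glide_modal_confirm', true, 300);
    dialog.setTitle('Send for approval?');
    dialog.setPreference('body', 'This record will be submitted for approval.');
    dialog.setPreference('onPromptComplete', function () {
        // Re-submit the form so the server-side UI Action logic runs
        gsftSubmit(null, g_form.getFormElement(), 'send_for_approval'); // hypothetical action name
    });
    dialog.render();
}
sendForApproval();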

Have fun and happy coding!

Understanding your Customer Journeys in Salesforce with AppDynamics

The Problem

JDS Australia works with numerous customers who utilise the force.com platform as the primary interface for their end users (internal and external) to execute business-critical services. The flexibility and extensibility of the component-based Lightning framework has allowed businesses to customise the platform to meet their specific requirements.

However, many of these companies struggle to monitor, quantify or pinpoint the impact of Salesforce platform performance on their end users and, ultimately, their business. Furthermore, there is limited capability to provide detailed Salesforce information for root cause analysis (e.g. is the problem in particular Lightning components, the core Salesforce platform, or across multiple pages?).

The Solution

Using AppDynamics Browser Real User Monitoring (RUM) coupled with advanced JavaScript configuration, we have created a solution.

Unlike traditional methods involving logfile or API based monitoring, real user monitoring collects rich metrics from the end users’ perspective. JDS has further integrated additional custom code to identify AJAX requests and inject page names into the stream, providing the business context needed to make sense of the data.
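A hedged sketch of that page-naming configuration (the userEventInfo callbacks come from the AppDynamics JavaScript agent; the Lightning-specific naming logic is purely illustrative):

window['adrum-start-time'] = new Date().getTime();
window['adrum-config'] = {
    userEventInfo: {
        // Name virtual page views after the Lightning route rather than the raw URL
        'VPageView': function (context) {
            var hash = window.location.hash || '';
            return { userPageName: 'Salesforce:' + (hash.split('?')[0] || 'Home') };
        },
        // Tag AJAX requests with a readable name derived from the request URL
        'Ajax': function (context) {
            return { userPageName: 'Salesforce AJAX:' + (context.url || '').split('?')[0] };
        }
    }
};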

Dashboards provide Salesforce Performance at a glance

Additionally, AppDynamics RUM is able to identify and dynamically visualise each step of Customer Journeys as they traverse Salesforce, in near real time. 

Using the collated metrics, these businesses have been able to proactively alert support teams of issues, and to utilise the historic data to analyse customer behaviour and understand how customers are using the platform (for example, expected vs actual user journeys).

AppDynamics RUM captures detailed diagnostic information to help triage issues, including:

  • Single Page Application performance
  • Page component load details
  • AJAX requests
  • Detailed error snapshots
  • Dynamic business transaction baselining (normal vs slow performance)
  • Browser version and device type
  • Geographic location of users
  • Connection method (e.g. browser vs mobile)

AppDynamics RUM can also correlate directly with AppDynamics APM agents to combine the ‘front-end’ and ‘back-end’ of these user sessions where Salesforce traverses additional downstream applications and infrastructure.

Why JDS?

As experts in Application Performance Management (APM) and Observability, JDS have extensive experience in helping our customers determine the root cause of performance issues.

Contact us at [email protected] to discuss how monitoring Salesforce can be used to understand your end users and make informed decisions with quantifiable metrics. 

Implementing Salesforce monitoring in Splunk

The problem

A JDS customer embarked on a project to implement Salesforce to provide their users with a single interface for meeting their customers’ needs.  Their aim was to make the interface easy to use and reduce the time taken to process customer requests.  At the same time, the business had to ensure that the customer data stored in Salesforce was secure, and be able to detect any malicious use.

The Solution

Implementing Splunk with the Splunk Add-on for Salesforce enabled the collection of logs and objects from Salesforce using REST APIs.  This, in turn, enabled proactive alerting and the creation of informative dashboards and reports to satisfy the business’ security requirements.

Scenarios detected:

  • Failed or unusual login attempts (same user tries to login from multiple IP addresses)
  • Large amounts of data extracted from Salesforce
  • Unauthorised changes in Salesforce configuration such as Connected Apps settings or Authentication Provider settings
  • Integration user account activity occurring outside of scheduled job runs
  • Privileged user activity
  • Apex code execution

All of this was achieved by setting up the required data inputs via the Splunk Add-on, creating lookups to enrich the alert content with meaningful information, adding macros for re-usability and ease of administration, and then building alerts so that the operational support teams were notified when the required conditions occurred.
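As a hedged illustration of the first scenario (the sourcetype and field names follow the Splunk Add-on for Salesforce conventions but may differ between add-on versions, so treat this as a sketch):

sourcetype="sfdc:loginhistory" earliest=-24h
| stats dc(SourceIp) as distinct_ips, values(SourceIp) as ips, count by UserId
| where distinct_ips > 3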

Splunk dashboards and reports built on Salesforce data allowed the business to easily view login patterns and analyse EventLog events and Setup Audit Trail changes.  Additionally, Salesforce data ingestion and alert summary dashboards were added to assist the support team to identify issues or delays in data ingestion as well as review the number of alerts being generated over time.

When developing any application that provides access to secure information, it’s important to not only monitor the user experience, but also look at security aspects. Our customer was able to satisfy the security monitoring requirements of the business with the Splunk Add-on for Salesforce and achieved their go-live target date. The configured alerts will keep them informed of any potential security issues, giving them confidence that the platform is secure. The accompanying dashboards provide an intuitive summary of user actions, all backed by an extended data retention policy in Splunk to satisfy regulatory compliance. With Salesforce data now available in Splunk, they are planning additional use cases to not only monitor security, but also gain insights into how the platform is used by employees.

Why choose JDS?

JDS has experience and expertise in bringing Salesforce application data into Splunk. Whether your focus is on security, performance, or custom monitoring, speak to JDS today about how we can convert your Salesforce data into useful insights.

Finding Exoplanets with Splunk

Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices, etc, but there’s no limit to the complexity of data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around another star.

The first exoplanet confirmed around a Sun-like star was discovered in 1995 orbiting the star 51 Pegasi, which makes this an exciting, emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were super-hot Jupiters, enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered at all sizes and out to distances comparable with planets in our own solar system. We have even discovered exomoons!

 

How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

The dip in starlight intensity can be just a fraction of a percent, but it’s enough to signal that a planet is transiting the star.
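The depth of that dip follows from simple geometry: the fraction of starlight blocked is roughly the square of the ratio of the planet’s radius to the star’s, i.e. depth ≈ (R_planet / R_star)². For Jupiter transiting the Sun that works out to about 1%; for an Earth-sized planet it’s roughly 0.01%, which is why such precise photometry is needed.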

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun; he was then ordered to begin a new search for the Great Southern Land we know as Australia. Cook’s observation of the transit of Venus used largely the same technique as NASA’s Hubble, Kepler and TESS space telescopes, but on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may contain life!

How can we use Splunk to find an exoplanet?

Science thrives on open data.

All the raw information captured by Earth-based and space-based telescopes like TESS is publicly available, but there’s a mountain of data to sift through, and it’s difficult to spot needles in this celestial haystack, making this an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, and specifically the PhotoMetric data containing 642 light curves to look for exoplanets. I used wget in Linux to retrieve the raw data as text files, but it is possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

The MAST API supports a range of queries, from cone searches around a sky position to listing and retrieving the data products for a specific observation.

The raw data for a given observation arrives as plain text: a long header of metadata followed by columns of measurements.

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored either using HJD (Heliocentric Julian Dates) or BJD (Barycentric Julian Dates) centering on the Sun (with a difference of only 4 seconds between them).

Some observatories will use MJD which is the Modified Julian Date (being the Julian Date minus 2,400,000.5 which equates to November 17, 1858). Sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - which is expressed as days, with the values beyond the decimal point being the fraction of that day when the observation occurred
  • Normalized Flux - which is the apparent brightness of the star
  • Normalized Flux Uncertainty - capturing any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant it can be ignored).

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC and are represented by numbers in the millions, like 2,455,059.6261813, where the integer is the days elapsed since then and the decimal fraction is the portion of the day. Since 0.00001 of a day is 0.864 seconds, multiplying the fraction by 86,400 gives the seconds elapsed since noon on any given Julian Day. Confused? Well, your computer won’t be, as it loves working in decimals and fractions; although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian date for the start of Epoch on CE 1970 January 1st 00:00:00.0 UT is 2440587.500000
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.12618
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch – floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000))
EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))
EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5)) * 86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"
| rex field=_raw "\s+(?P<exo_HJD>24\d+.\d+)\s+(?P<exo_flux>[-]?\d+.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+.\d+)"
| timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if they’re orbiting on the same plane we’re observing. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath depending on your vantage point). With the next generation of space telescopes, like the James Webb, we’ll be able to see these in greater detail.

 

Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now we’ve brought data into Splunk we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometre Array (SKA), which comes online in 2020, will create more data each day than is produced on the Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude and duration of their transits to identify them individually. Over the course of billions of years, the motion of planets around a star falls into a pattern known as orbital resonance, which is something that can be predicted and tested by Splunk to distinguish between planets, and can even be used to predict undetected planets!

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.

Our team on the case

Our Splunk stories

Glide Variables

ServiceNow uses a special type of super-flexible variable to store information in what appears to be a single field, but is actually a complex storage/management system with a database column type called glide_var. As each record can have a different number of variables stored as key/value pairs, there’s no easy way of dot-walking to the name of a variable within the glide_var, because the names can change from record to record within the same table! You can, however, detect and retrieve variables from a glide_var by treating the GlideRecord field as an object.

In this example, from an automated test framework step, you can see each of the variables and their values from the database glide_var column inputs.

var gr = new GlideRecord('the table you are looking at');
gr.get('sys_id of the record you are looking at');

for (var eachVariable in gr.inputs) {
    gs.info(eachVariable + ' : ' + gr.inputs[eachVariable]);
}

If you run this in a background script you’ll see precisely which variables exist and what their values are.

It doesn’t need to be complicated! Reach out to us and we can help you.