Finding Exoplanets with Splunk

Splunk is a software platform designed to search, analyze and visualize machine-generated data, making sense of what, to most of us, looks like chaos.

Ordinarily, the machine data used by Splunk is gathered from websites, applications, servers, network equipment, sensors, IoT (internet-of-things) devices, etc, but there’s no limit to the complexity of data Splunk can consume.

Splunk specializes in Big Data, so why not use it to search the biggest data of all and find exoplanets?

What is an exoplanet?

An exoplanet is a planet in orbit around another star.

The first exoplanet found orbiting a Sun-like star was confirmed in 1995 around 51 Pegasi, which makes this an exciting, emerging field of astronomy. Since then, Earth-based and space-based telescopes such as Kepler have been used to detect thousands of planets around other stars.

At first, the only planets we found were hot Jupiters: enormous gas giants orbiting close to their host stars. As techniques have been refined, thousands of exoplanets have been discovered at all sizes, and out to distances comparable with the planets in our own solar system. We may even have spotted exomoons!

 

How do you find an exoplanet?

Imagine standing on stage at a rock concert, peering toward the back of the auditorium, staring straight at one of the spotlights. Now, try to figure out when a mosquito flies past that blinding light. In essence, that’s what telescopes like NASA’s TESS (Transiting Exoplanet Survey Satellite) are doing.

A telescope like TESS continuously measures the brightness of thousands of stars, looking for tiny, periodic dips. The dip in starlight intensity can be just a fraction of a percent, but it’s enough to signal that a planet is transiting the star.

Transits have been observed for hundreds of years in one form or another, but only recently has this idea been applied outside our solar system.

Australia has a long history of human exploration, starting some 60,000 years ago. In 1769, after (the then) Lieutenant James Cook sailed to Tahiti to observe the transit of Venus across the face of our closest star, the Sun, he was ordered to begin a new search for the Great Southern Land we now know as Australia. Cook’s observation of the transit of Venus used largely the same technique as NASA’s Hubble, Kepler, and TESS space telescopes, only on a much simpler scale.

Our ability to monitor planetary transits has improved considerably since the 1700s.

NASA’s TESS orbiting telescope can cover an area 400 times as broad as NASA’s Kepler space telescope and is capable of monitoring a wider range of star types than Kepler, so we are on the verge of finding tens of thousands of exoplanets, some of which may contain life!

How can we use Splunk to find an exoplanet?

Science thrives on open data.

All the raw information captured by Earth-based and space-based telescopes like TESS is publicly available, but there’s a mountain of data to sift through, and it’s difficult to spot needles in this celestial haystack. That makes it an ideal problem for Splunk to solve.

While playing with this over Christmas, I used the NASA Exoplanet Archive, and specifically its photometric data containing 642 light curves, to look for exoplanets. I used wget on Linux to retrieve the raw data as text files, but it is also possible to retrieve this data via web services.

MAST, the Mikulski Archive for Space Telescopes, has made available a web API that allows up to 500,000 records to be retrieved at a time using JSON format, making the data even more accessible to Splunk.

Some examples of API queries that can be run against MAST are:
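For instance, a cone search around a star’s coordinates can be issued from a few lines of Python. This is a hedged sketch: the `Mast.Caom.Cone` service name and the `/api/v0/invoke` endpoint are as MAST documented them at the time, and the Gliese 436 coordinates below are approximate.

```python
# Sketch of a MAST API cone search. Assumptions: the Mast.Caom.Cone service
# and the /api/v0/invoke endpoint, per MAST's published API documentation.
import json
import urllib.parse
import urllib.request

MAST_URL = "https://mast.stsci.edu/api/v0/invoke"

def build_cone_search(ra, dec, radius_deg, page_size=500000):
    """Build the JSON request body for a cone search around (ra, dec)."""
    return {
        "service": "Mast.Caom.Cone",
        "params": {"ra": ra, "dec": dec, "radius": radius_deg},
        "format": "json",
        "pagesize": page_size,  # MAST allows up to 500,000 records per page
        "page": 1,
    }

def run_query(request_dict, timeout=30):
    """POST the request to MAST and return the decoded JSON response."""
    data = urllib.parse.urlencode({"request": json.dumps(request_dict)}).encode()
    with urllib.request.urlopen(MAST_URL, data=data, timeout=timeout) as resp:
        return json.loads(resp.read().decode())

# Rough J2000 position of Gliese 436 (illustrative values).
request = build_cone_search(175.55, 26.70, 0.02)
print(request["service"], request["pagesize"])
# To actually run it (network access required):
# result = run_query(request)
```

The JSON response can then be indexed directly by Splunk, or flattened into events first.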

The raw data for a given observation is plain text: a long metadata header followed by columns of numbers.

Information from the various telescopes does differ in format and structure, but it’s all stored in text files that can be interrogated by Splunk.

Values like the name of the star (in this case, Gliese 436) are identified in the header, while dates are stored using either HJD (Heliocentric Julian Dates, centred on the Sun) or BJD (Barycentric Julian Dates, centred on the solar system’s barycentre), with a difference of only about 4 seconds between them.

Some observatories use MJD, the Modified Julian Date: the Julian Date minus 2,400,000.5, which places day zero at midnight on 17 November 1858. It sounds complicated, but MJD is an attempt to simplify date calculations.

Think of HJD, BJD and MJD like UTC but for the entire solar system.

One of the challenges faced in gathering this data is that the column metadata is split over three lines, with the title, the data type and the measurement unit all appearing on separate lines.

The actual data captured by the telescope doesn’t start being displayed until line 138 (and this changes from file to file as various telescopes and observation sets have different amounts of associated metadata).

In this example, our columns are…

  • HJD - which is expressed as days, with the values beyond the decimal point being the fraction of that day when the observation occurred
  • Normalized Flux - which is the apparent brightness of the star
  • Normalized Flux Uncertainty - capturing any potential anomalies detected during the collection process that might cast doubt on the result (so long as this is insignificant it can be ignored).
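Outside Splunk, skipping that variable-length header comes down to recognising which lines are data rows. Here is a hedged sketch: the header lines in `sample` are invented to mimic the layout described above, and the regex assumes HJD values begin with "24", as in the files I worked with.

```python
# Hedged sketch: extract (HJD, flux, uncertainty) rows from a light-curve
# text file, skipping the variable-length metadata header. Assumption: data
# rows are three whitespace-separated numbers and HJD starts with "24".
import re

DATA_ROW = re.compile(r"^\s*(24\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$")

def parse_light_curve(lines):
    """Yield (hjd, flux, uncertainty) tuples, ignoring header lines."""
    for line in lines:
        m = DATA_ROW.match(line)
        if m:
            yield tuple(float(g) for g in m.groups())

# Illustrative file fragment (not a verbatim archive file): keyword line,
# then the three-line column metadata (title, type, unit), then data.
sample = [
    "\\STAR_ID = GJ 436",
    "|      HJD| Normalized_Flux| Flux_Uncertainty|",
    "|   double|          double|           double|",
    "|     days|                |                 |",
    "  2455059.6261813   0.99982   0.00031",
    "  2455059.6270000   0.99910   0.00030",
]
print(list(parse_light_curve(sample)))
```

Because only rows matching the numeric pattern survive, it doesn’t matter whether the data starts on line 138 or anywhere else; this mirrors what the rex extraction does later inside Splunk.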

Heliocentric Julian Dates (HJD) are measured from noon (instead of midnight) on 1 January 4713 BC and are represented by numbers into the millions, like 2,455,059.6261813, where the integer is the days elapsed since then and the decimal fraction is the portion of the day. Since 0.00001 of a day is 0.864 seconds, multiplying the fraction by 86,400 gives the seconds elapsed since noon on any given Julian Day. Confused? Your computer won’t be: it loves working in decimals and fractions, so although this system may seem counterintuitive, it makes date calculations simple math.

We can reverse engineer Epoch dates and regular dates from HJD/BJD, giving Splunk something to work with other than obscure heliocentric dates.

  • As Julian Dates start at noon rather than midnight, all our calculations are shifted by half a day to align with Epoch (Unix time)
  • The Julian date for the start of Epoch on CE 1970 January 1st 00:00:00.0 UT is 2440587.500000
  • Any-Julian-Date-minus-Epoch = 2455059.6261813 - 2440587.5 = 14472.12618
  • Epoch-Day = floor(Any-Julian-Date-minus-Epoch) * milliseconds-in-a-day = 14472 * 86400000 = 1250380800000
  • Epoch-Time = floor((Any-Julian-Date-minus-Epoch – floor(Any-Julian-Date-minus-Epoch)) * milliseconds-in-a-day) = floor(0.1261813 * 86400000) = 10902064
  • Observation-Epoch-Day-Time = Epoch-Day + Epoch-Time = 1250380800000 + 10902064 = 1250391702064
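The bullet-point arithmetic above can be condensed into a small function. A sketch (the constant 2440587.5 is the Julian Date of the Unix epoch, 1970-01-01 00:00 UT):

```python
# Convert an HJD/BJD value to Unix epoch milliseconds, following the
# day/fraction split described above.
import math

MS_PER_DAY = 86400000
JD_UNIX_EPOCH = 2440587.5  # Julian Date of 1970-01-01 00:00:00 UT

def hjd_to_epoch_ms(hjd):
    days = hjd - JD_UNIX_EPOCH          # e.g. 2455059.6261813 -> 14472.1261813
    whole_days = math.floor(days)       # complete days since the epoch
    ms_into_day = math.floor((days - whole_days) * MS_PER_DAY)
    return whole_days * MS_PER_DAY + ms_into_day

print(hjd_to_epoch_ms(2455059.6261813))  # -> 1250391702064
```

This matches the worked example above, and is exactly the calculation the EVAL statements below perform inside Splunk.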

That might seem a little convoluted, but we now have a way of translating astronomical date/times into something Splunk can understand.

I added a bunch of date calculations like this to my props.conf file so dates would appear more naturally within Splunk.

[exoplanets]

SHOULD_LINEMERGE = false

LINE_BREAKER = ([\r\n]+)

EVAL-exo_observation_epoch = ((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5))  *  86400000))

EVAL-exo_observation_date = (strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5))  *  86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N"))

EVAL-_time = strptime((strftime(((FLOOR(exo_HJD - 2440587.5) * 86400000) + FLOOR(((exo_HJD - 2440587.5) - FLOOR(exo_HJD - 2440587.5))  *  86400000)) / 1000,"%d/%m/%Y %H:%M:%S.%3N")),"%d/%m/%Y %H:%M:%S.%3N")

Once date conversions are in place, we can start crafting queries that map the relative flux of a star and allow us to observe exoplanets in another solar system.

Let’s look at a star with the unassuming ID 0300059.

sourcetype=exoplanets host="0300059"

| rex field=_raw "\s+(?P<exo_HJD>24\d+\.\d+)\s+(?P<exo_flux>[-]?\d+\.\d+)\s+(?P<exo_flux_uncertainty>[-]?\d+\.\d+)" | timechart span=1s avg(exo_flux)

And there it is… an exoplanet blotting out a small fraction of starlight as it passes between us and its host star!

What about us?

While curating the Twitter account @RealScientists, Dr. Jessie Christiansen made the point that we only see planets transit stars like this if they’re orbiting on the same plane we’re observing. She also pointed out that “if you were an alien civilization looking at our solar system, and you were lined up just right, every 365 days you would see a (very tiny! 0.01%!!) dip in the brightness that would last for 10 hours or so. That would be Earth!”

There have even been direct observations of planets in orbit around stars, looking down from above (or up from beneath depending on your vantage point). With the next generation of space telescopes, like the James Webb, we’ll be able to see these in greater detail.

 

Image credit: NASA exoplanet exploration

Next steps

From here, the sky’s the limit—quite literally.

Now that we’ve brought the data into Splunk, we can begin to examine trends over time.

Astronomy is BIG DATA in all caps. The Square Kilometre Array (SKA), which comes online in 2020, will create more data each day than is produced on the entire Internet in a year!

Astronomical data is the biggest of the Big Data sets and that poses a problem for scientists. There’s so much data it is impossible to mine it all thoroughly. This has led to the emergence of citizen science, where regular people can contribute to scientific discoveries using tools like Splunk.

Most stars have multiple planets, so some complex math is required to distinguish between them, looking at the frequency, magnitude, and duration of their transits to identify each planet individually. Over billions of years, the motion of planets around a star falls into a pattern known as orbital resonance, which can be predicted and tested in Splunk to distinguish between planets, and even used to predict as-yet-undetected planets!
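The core idea of separating planets by transit period can be shown with a toy phase-folding search. This is a hedged, pure-Python illustration with invented data (2.64-day period, 1% dip), not a Splunk or MLTK feature: fold the light curve over trial periods and pick the period whose folded curve has the deepest, most consistent dip.

```python
# Toy sketch: recover a transit period by phase-folding over trial periods.
# Synthetic data only; real searches use far more robust statistics.
def fold(times, period):
    """Map each timestamp to its phase (0..1) within the trial period."""
    return [t % period / period for t in times]

def dip_score(times, fluxes, period, nbins=20):
    """Mean flux of the faintest phase bin; lower = better-aligned dip."""
    bins = [[] for _ in range(nbins)]
    for phase, f in zip(fold(times, period), fluxes):
        bins[min(int(phase * nbins), nbins - 1)].append(f)
    return min(sum(b) / len(b) for b in bins if b)

def best_period(times, fluxes, candidates):
    """The trial period that concentrates the dip into one phase bin."""
    return min(candidates, key=lambda p: dip_score(times, fluxes, p))

# Synthetic light curve: flux drops 1% whenever time mod 2.64 days < 0.1.
times = [i * 0.01 for i in range(2000)]          # 20 days of observations
fluxes = [0.99 if t % 2.64 < 0.1 else 1.0 for t in times]
print(best_period(times, fluxes, [1.0, 2.0, 2.64, 3.5, 5.0]))
```

With two planets, you would subtract the first planet’s folded signal and repeat the search; that is the intuition behind the more sophisticated multi-planet fitting mentioned above.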

Then there’s the tantalizing possibility of exomoons orbiting exoplanets. These moons would appear as a slight dip in the transit line (similar to what’s seen above at the end of the exoplanet’s transit). But confirming the existence of an exomoon relies on repeated observations, clearly distinguished from the motion of other planets around that star. Once isolated, the transit lines should show a dip in different locations for different transits (revealing how the exomoon is swinging out to the side of the planet and increasing the amount of light being blocked at that point).

Given its strength with modelling data, predictive analytics and machine learning, Splunk is an ideal platform to support the search for exoplanets.

Find out more

If you’d like to learn more about how Splunk can help your organization reach for the stars, contact one of our account managers.

Our team on the case

Document as you go.

Peter Cawdron

Consultant

Length of Time at JDS

5 years

Skills

ServiceNow, Loadrunner, HP BSM, Splunk.

Workplace Passion

I enjoy working with the new AngularJS portal in ServiceNow.


Splunk .conf18 – Splunk Next: 10 Innovations

As part of .conf18 and in the balmy Florida weather surrounded by theme parks, JDS was keen to hear more about what’s coming next from Splunk – namely Splunk Next.

Splunk CEO Doug Merritt announced a lot of new features released with Splunk 7.2 which you can read about in our earlier post (Splunk .conf recap). He also talked about Splunk’s vision of creating products that reduce the barriers to getting the most out of your data. As part of that vision he revealed Splunk Next which comprises a series of innovations that are still in the beta phase.

Being in beta, these features haven’t been finalised yet, but they showcase some of the exciting things Splunk is working towards. Here are the Top 10 innovations that will help you get the most out of your data:

  1. Splunk Developer Cloud – develop data-driven apps in the cloud, using the power of Splunk to provide rich data and analytics.
  2. Splunk Business Flow – an analytics-driven approach to exploring users’ interactions and identifying ways to optimise and troubleshoot. This feature generates a process flow diagram based solely on your index data, showing you what users are doing and where you can optimise the system to make smarter decisions.
  3. Splunk Data Fabric Search – with the addition of an Apache Spark cluster, you can now search over multiple disparate Splunk instances with surprising speed. This federated search will allow you to search trillions of events and metrics across all your Splunk environments.
  4. Splunk Data Stream Processor – a GUI that lets you test your data ingestion in real time without relying on config files. You can mask data and send it to various indexes or even different Splunk instances, all from the GUI.
  5. Splunk Cloud Gateway – a new gateway for the Splunk Mobile App, delivering Splunk to your mobile device securely.
  6. Splunk Mobile – a new mobile interface for Splunk, which shows dashboards in a mobile-friendly format. Plays nicely with the Cloud Gateway.
  7. Splunk Augmented Reality – if you have an AR headset, you can pin glass-table-style KPI metrics onto real-world devices. It’s designed so you can walk around a factory floor and see IoT data metrics from the various sensors installed. Also works with QR codes and your smartphone. Think Terminator vision!
  8. Splunk Natural Language Processor – lets you integrate an AI assistant like Alexa, ask English-language questions, and get English-language responses, all from Splunk, e.g. “Alexa, what was the highest-selling product last month?” It would be a great addition to your organisation’s ChatOps.
  9. Splunk Insights for Web and Mobile Apps – helps your developers and operators improve the quality of experience delivered by your applications.
  10. Splunk TV – an Apple TV app which rotates through Splunk dashboards. You no longer need a full PC running next to your TV display, just Apple TV.

To participate in any of the above betas go here:

https://www.splunk.com/en_us/software/splunk-next.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.

Our team on the case

Work smarter, not harder. (I didn't even come up with that. That's smart.)

Daniel Spavin

Performance Test Lead

Length of Time at JDS

7 years

Skills

IT: HPE Load Runner, HPE Performance Center, HPE SiteScope, HPE BSM, Splunk

Personal: Problem solving, Analytical thinking

Workplace Solutions

I care about quality and helping organisations get the best performance out of their IT projects.

Organisations spend a great deal of time and resources developing IT solutions. You want IT speeding up the process, not holding it up. Ensuring performance is built in means you spend less time fixing your IT solutions, and more time on the problems they solve.

I solve problems in our customers’ solutions, so customers can use their solutions to solve problems.

Splunk .conf18

Splunk’s annual conference took place in Orlando, Florida this year, and JDS was there to soak up sun and the tech on offer.

Three days went by quickly, with exciting announcements (dark mode, anyone?), interesting discussions, and the chance to mingle with customers and Splunkers alike. We also enjoyed meeting up with the US distributors of PowerConnect and spending time with the uberAgent team.

Splunk CEO Doug Merritt kicked off the keynote, announcing a raft of features to Splunk 7.2 along with advancements released in beta – dubbed Splunk Next (but more of that to come, so stay tuned). Here’s a rundown of what’s new to 7.2:

  • SmartStore – some smarts behind using S3 for storage, allowing you to scale your indexer compute and storage separately. Great news if you want to expand your indexers but don’t want the associated costs of SSD storage. SmartStore also gives you access to the impressive durability and availability of S3, simplifying your backup requirements.
  • Metrics Workspace – a new GUI for exploring metrics. You can drag and drop both standard events and metrics to create graphs over time and easily save them directly to dashboards.
  • Dark Mode – as simple as it sounds, with the crowd going wild for this one. You can now have your NOC display dark-themed dashboards at the click of a mouse.
  • Official Docker support – Splunk Enterprise 7.2 now officially supports Docker containers, letting you quickly scale up and down based on user demand.
  • Machine Learning Toolkit 4.0 – now easier to train, test, and validate your machine learning use cases. Includes the announcement of GitHub-based solutions to share with fellow Splunkers.
  • ITSI 4.0 – this latest version includes predictive KPIs, so your glass tables can show the current state and the predicted state 30 minutes into the future. There’s also predictive cause analysis, to drill down and find out what will likely cause issues in the future. Metrics can now also feed into KPIs, allowing for closer integration with Splunk Insights for Infrastructure.
  • ES 5.1.1 – introduces event sequencing to help with investigations, a Use Case Library to help with adoption, and the Investigation Workbench for incident investigation.
  • Health Report – in addition to the monitoring console, the health report shows the health of the platform, including disk, CPU, memory, and Splunk-specific checks. It’s accessible via a new icon next to your login name.
  • Guided Data Onboarding – guides are now available to help you onboard data, like those you can find in Enterprise Security. They include diagrams, high-level steps, and documentation links to help set up and configure your data source.
  • Logs to Metrics – a new GUI feature to help configure and convert logs into metric indexes.
  • Workload Management – prioritise users’ searches based on your own criteria, like a QoS for searching.

 

If you weren’t lucky enough to go in person, or want to catch up on a missed presentation, the sessions are now available online:

https://conf.splunk.com/conf-online.html

Find out more

Interested to know more about these new Splunk capabilities? We’d love to hear from you. Whether it’s ChatOps, driving operational insight with ITSI, or leveraging Machine Learning - our team can take you through new ways of getting the most out of your data.


What if your application was one second faster?

Why one second faster?

Improving your website performance will increase your business. But don’t take our word for it—there is plenty of evidence.

According to Kissmetrics:

  • 25% of consumers will abandon a website that takes more than four seconds to load
  • 47% of consumers expect a webpage to load in two seconds or less
  • 79% of shoppers who are dissatisfied with website performance are less likely to buy from the same site again
  • A one-second delay in page response can result in a 7% reduction in conversions
  • A one-second delay (or three seconds of waiting) decreases customer satisfaction by about 16%

So, what would performing one second faster mean for your web application or website? JDS is now offering a limited time promotion that will allow you to realise the maximum performance of your website or application. Over the course of five days, our experts will work with your team to analyse your web application and accelerate its performance for your customers.

 

What’s included?

  • Your own dedicated performance expert for five days (either on-site or off-site)
  • A technical deep dive of your web application, turning over every rock to understand how it can work faster and harder for your business
  • Best practice tips and techniques straight from the guys in the know
  • Experts fluent in everything from Java and .NET through to SAP and Oracle
  • A presentation and roadmap of the findings and recommendations

Why JDS?

We are Australia’s leading performance test consultancy with 15 years of experience partnering with organisations of every size, from startups to large enterprises and governments. We have a reputation for being a key player in making Australian web applications exceptional. Want to get started? Reach out to a JDS team member, send an email to onesecond@jds.net.au, or call 1300 780 432 to confidentially discuss your web application and how we can help.

We partner with leading technologies

How to maintain versatility throughout your SAP lifecycle

There are many use cases for deploying a tool to monitor your SAP system. Releasing your application between test environments, introducing additional users to your production system, or developing new functionality—all of these introduce an element of risk to your application and environment. Whether you are upgrading to SAP HANA, moving data centres, or expanding your use of ECC modules or mobile interfaces (Fiori), you can help mitigate the risk with the insights SAP PowerConnect for Splunk provides.

Upgrading SAP

Before you begin upgrades to your SAP landscape, you need to verify several prerequisites such as hardware and OS requirements, source release of the SAP system, and background process volumes. There are increased memory, session, and process requirements when performing the upgrade, which need to be managed. The SAP PowerConnect solution provides you with all key information about how your system is responding during the transition, with up-to-date process, database, and system usage information.

Triaging and correlating events or incidents is also easier than ever with PowerConnect, thanks to its ability to store historical information as a time series. You can look back to a specific point in time and see what the health of the system or a specific server was, what the configuration settings were, and so on. This is a particularly useful feature for regression testing.

Supporting application and infrastructure migration

Migration poses risks. It’s critical to mitigate those risks through diligent preparation, whether it’s ensuring your current code works on the new platform or that the underlying infrastructure will be fit for purpose.

For example, when planning a migration from an ABAP-based system to an on-premise SAP HANA landscape, there are several migration strategies you can take, depending on how quickly you want to move and what data you want to bring across. With a greenfield deployment, you start from a clean setup and bring across only what you need. The other end of the spectrum is a one-step upgrade with a database migration option (DMO), where you migrate in-place.

Each option will have its own advantages and drawbacks; however, both benefit from the enhanced visibility that PowerConnect provides throughout the deployment and migration process. As code is deployed and patched, PowerConnect will highlight infrastructure resource utilisation issues, greedy processes, and errors from the NetWeaver layer. PowerConnect can also analyse custom ABAP code and investigate events through ABAP code dumps by ID or user.

Increasing user volumes

Deployments can be rolled out in stages, be it through end users or application functionality. This is an effective way to ease users onto the system and lessen the load on both end-user training and support desk tickets due to confusion. As user volume increases, you may find that people don’t behave like you thought they would—meaning your performance test results may not match up with real-world usage. In this case, PowerConnect provides the correlation between the end-user behaviour and the underlying SAP infrastructure performance. This gives you the confidence that if the system starts to experience increased load, you will know about it before it becomes an issue in production. You can also use PowerConnect to learn the new trends in user activity, and feed that information back into the testing cycle to make sure you’re testing as close to real-world scenarios as possible.

It may not be all bad news: PowerConnect can also highlight unexpected user behaviour in a positive light. You might find that when new users are introduced to the system, they don’t find a feature as popular as you expected, in which case you could turn the feature off to reduce licence usage, or opt to promote it internally. PowerConnect will not only give you visibility into system resource usage, but also into what users are doing on the system to cause that load.

Feedback across the development lifecycle

PowerConnect provides a constant feedback solution with correlation and insights throughout the application delivery lifecycle. Typically, migrations, deployments, and upgrades follow a general lifecycle of planning, deploying, then business as usual, before making way for the next patch or version.

During planning and development, you want insights into user activity and the associated infrastructure performance to understand the growth of users over time.

  • With the data retention abilities of Splunk, PowerConnect can identify trends from the last hour right back to the last year and beyond. These usage trends can help define performance testing benchmarks by providing concurrent user volumes, peak periods, and what transactions the users are spending time on.
  • In the absence of response time SLAs, page load time goals can be defined based on current values from the SAP Web Page Response Times dashboard.
  • With the ability to compare parameters, PowerConnect can help you make sure your test and pre-prod environments have the same configuration as production. When the test team doesn’t have access to run RZ10 to view the parameters, a discrepancy can be easy to miss and cause unnecessary delays.

Once in production, PowerConnect also gives you client-centric and client-side insights.

  • You can view the different versions of SAP GUI that users have installed or see a world map showing the global distribution of users.
  • Splunk can even alert from a SecOps perspective, and notify you if someone logs in from a country outside your user base. You can view a list of audited logins and browse the status of user passwords.
  • The power of Splunk gives you the ability to alert or regularly report on trends in the collected data. You can be informed if multiple logins fail, or when the CPU vs Work processes is too high. Automated scripts can be triggered when searches return results so that, for example, a ServiceNow ticket can be raised along with an email alert.

Even after a feature has completed its lifecycle and is ready to be retired, PowerConnect remains rich with historical data describing usage, issues, and configuration settings in Splunk, even after the raw data has been aggregated or has disappeared from SAP.

Backed by the power of Splunk, and with the wealth of information being collected, the insights provided by PowerConnect will help you effectively manage your SAP system throughout your SAP lifecycle.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key tool for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect" in May.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.

Register in Sydney or Melbourne

Sydney Event—Tuesday 15 May, 12pm

Splunk Office

Level 1, 141 Walker Street

North Sydney NSW 2060


Melbourne Event—Thursday 17 May, 12pm

Splunk Office

Level 16, North Tower, 525 Collins Street

Melbourne VIC 3000


Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case


Focus. Work hard. Stay positive. It always seems impossible until it’s done.

Daniel Tam

Account Manager

Length of Time at JDS

10 years

Skills

I am an experienced Account Manager, with a strong capability for problem-solving and creating the solutions that best match each customer’s unique requirements. I have a demonstrated technical capability across numerous disciplines, including:

  • Performance testing
  • Application performance monitoring (‘APM’)
  • IT Service Management (ITIL 3 certified)
  • CMDB, automation and orchestration

Workplace Passion

I have a strong passion for understanding our customers’ unique requirements and helping to create solutions that best solve both their technical and business needs.

Event: What will drive the next wave of business innovation?

It’s no secret that senior managers and C-level executives are constantly wading through the latest buzzwords and jargon as they try to determine the best strategies for their business. Disruption, digital transformation, robots are taking our jobs, AI, AIOps, DevSecOps… all of the “next big thing” headlines, terms, clickbait articles, and sensationalism paint a distorted picture of what the business technology landscape really is.

Understand the reality amongst the virtuality, and make sense of what technology will drive the next wave of business innovation.

Join Tim Dillon, founder of Tech Research Asia (TRA), for a presentation that blends technology market research trends with examples from Australian businesses already deploying solutions in areas such as cloud computing, intelligent analytics, robotics, artificial intelligence, and “the realities” (mixed, virtual, and augmented). Tim will examine when these innovation technologies will genuinely transform Australian industry sectors, as well as the adoption and deployment plans of your peers. Far from a purely theoretical view, the presentation will provide practical tips and learnings drawn from real-life use cases.

Hosted by JDS Australia and Splunk, this is an event not to be missed by any executive who wants an industry insider view of what’s happening in technology in 2018, and where we’re headed in the future.

When: Tuesday 1 May, 11.45am-2pm (includes welcome drinks and post-event networking)

Where: Hilton Brisbane, 190 Elizabeth St, Brisbane City, QLD 4000

Cost: Complimentary

Agenda

11.45-12.30 Registration, canapes and drinks

12.30-12.35 Opening: Gene Kaalsen, Splunk Practice Manager, JDS Australia

12.35-1.05 Presentation: Tim Dillon

1.05-1.20 Q and A

1.20-1.25 Closing: Amanda Lugton, Enterprise Sales Manager, Splunk

1.25-2.00 Networking, drinks and canapes

By clicking this button, you submit your information to JDS Australia, who will use it to communicate with you about this enquiry and their other services.

Tim Dillon, Founder and Director, Tech Research Asia

Tim is passionate about the application of technology for business benefit. He has been involved in business and technology research and analysis since 1991. In July 2012 he established Tech Research Asia (www.techresearch.asia) to provide bespoke analyst services to vendors in the IT&T sector. From 2007 to late 2012, he held the role of IDC’s Associate Vice President Enterprise Mobility and End-User (Business) research, Asia Pacific. Prior to this he was Current Analysis’ (now Global Data) Director of Global Telecoms Research and European and Asia Pacific Research Director.

For a period of time he also worked with one of Europe’s leading competitive intelligence research houses as research director with a personal focus on the telecoms and IT sectors. He combines more than 20 years of business and technology research with a blend of professional, international experience in Australia, Asia Pacific, and Europe. Of late, his particular areas of interest have centred upon emerging innovation technologies such as AI, virtual and augmented realities, security and data management, and governance. Tim truly delights in presenting, facilitating, and communicating with organisations and audiences discussing how trends and development in technology will shape the future business environment. 

A strong communicator, he has presented to large (1500+) audiences through to small, intimate round table discussions. A high proportion of Tim’s roles have been client focused—leading and delivering consulting projects or presenting at client conferences and events and authoring advisory reports. A regular participant in industry judging panels, Tim also works with event companies in an advisory role to help create strong, relevant technology business driven agendas. He has also authored expert witness documents for cases relating to the Australian telecommunication markets. A Tasmanian by birth, Tim holds a Bachelor of Economics from the University of Tasmania.

Visualise and consolidate your data with SAP PowerConnect for Splunk


SAP systems can be complex. If you have multiple servers and operating systems across different data centres and the cloud, it’s often difficult to know what’s happening under the hood. SAP PowerConnect for Splunk makes it easier than ever to get valuable insight into your SAP system, using its intuitive visual interface and consolidated system data.

Leverage the power of Splunk’s intuitive interface

Traditionally, visualising data using SAP has been a challenge. Inside SAP the data is aggregated, so detail is not available after a few days or weeks—assuming historical records are even being kept. The data that is available is often displayed in tabular format, which needs to be manipulated to provide any meaningful analysis. Visualisations within SAP are limited, and inconsistent across platforms.

A key benefit of SAP PowerConnect is its highly visual interface, which allows your IT operators and executives to see what’s going on in your SAP instance at a glance. Once the data is ingested, you have the full visualisation power of Splunk at your disposal. You can also combine other data from Splunk to create correlations and see visual trends between different systems.

The PowerConnect solution provides more than 50 pre-built dashboards showing you the status of everything from individual metrics to global system trends. You also have the option to collect and display any custom metrics via a function module that you develop.

Whether you want to visualise HANA DB performance or global work processes, you can visually track your data to clearly see trends, spikes, and issues over time.

To illustrate the improved visualisation PowerConnect has over native SAP, compare the following:

This screen shows standard performance metrics from within SAP:

SAP Data PowerConnect

SAP PowerConnect for Splunk converts this tabular data and presents it on a highly visual dashboard:

PowerConnect for Splunk

Bring all your data into one place

By using PowerConnect to consolidate your SAP system data within Splunk, you will discover a wealth of information. Detailed SAP data relating to specific servers, processes, and users will be presented in a range of targeted dashboards. You can see live performance metrics, or look back on demand at highly granular trends from past months or years.

With your SAP data available in Splunk, you can:

  • Configure custom alerting based on known triggers, or a combination of metrics from multiple servers or processes
  • Be alerted when you hit a certain user load, or when a custom module encounters an error
  • Schedule custom reports to provide a summary of performance from high-level trends right down to low-level CPU metrics
  • Monitor and regularly report on key performance indicators, visualising historical trends over time, including important events during the year
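
As an illustrative sketch of the first two points, a Splunk alert can be driven by a scheduled search over the ingested SAP data. The index, sourcetype, and threshold below are hypothetical; your PowerConnect deployment will define its own names:

index=sap sourcetype=sap:abap_dumps
| stats count AS dump_count BY host
| where dump_count > 10

Saved as an alert and scheduled to run every few minutes, a search like this would notify the relevant team whenever any SAP host exceeds the chosen dump threshold.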

Often different teams within an enterprise will each have a tool of choice to monitor their part of the landscape. This may let the team work efficiently, but can hamper knowledge sharing across the organisation. This is especially true with complex tools, where unintended misuse can cause issues, so access is often held back. PowerConnect opens up access to your SAP data by making it available to everyone from your basis team to your executives, and removes the silos.

You can further enrich your SAP PowerConnect data by combining it with data from non-SAP or third-party applications within your organisation. For example, if your SAP system makes calls to a web service that you manage, you can use Splunk to consume available data to help triage during issues, or complete the picture of what your users are doing on the system. This provides an even deeper level of correlation between applications and business services.
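
For example, a single search can chart SAP response times alongside the web service's own logs to expose correlated slowdowns. The index, sourcetype, and field names here are purely illustrative:

(index=sap sourcetype=sap:st03) OR (index=web sourcetype=access_combined)
| timechart span=15m avg(response_time) BY index

If a spike in web service latency lines up with a spike in SAP response times, you have an immediate lead for triage.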

Leveraging the power of Splunk and SAP PowerConnect, you will have a consolidated view of your enterprise operational data, with open and accessible information displayed in a highly visual interface.

Want to find out more?

Find out more about SAP PowerConnect for Splunk and how it can be a key enabler for your business in 2018 by attending our event, "Splunkify your SAP data with PowerConnect." JDS and Splunk are co-hosting three events across Australia. Light canapés and beverages will be served at each.

Choose the most convenient location and date for you, and register below. We look forward to seeing you there.

Register in one of three locations

Brisbane Event—Wednesday 28 March, 5pm

Tattersall’s Club
215 Queen Street
Brisbane City QLD 4000

 


Sydney Event—Tuesday 15 May, 12pm

Splunk Office
Level 1, 141 Walker Street
North Sydney NSW 2060

 


Melbourne Event—Thursday 17 May, 12pm

Splunk Office
Level 16, North Tower, 525 Collins Street
Melbourne VIC 3000

 


Find out more

To find out more about how JDS can help you implement PowerConnect, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case

Work smarter, not harder. (I didn't even come up with that. That's smart.)

Daniel Spavin

Performance Test Lead

Length of Time at JDS

7 years

Skills

IT: HPE Load Runner, HPE Performance Center, HPE SiteScope, HPE BSM, Splunk

Personal: Problem solving, Analytical thinking

Workplace Solutions

I care about quality and helping organisations get the best performance out of their IT projects.

Organisations spend a great deal of time and resources developing IT solutions. You want IT speeding up the process, not holding it up. Ensuring performance is built in means you spend less time fixing your IT solutions, and more time on the problems they solve.

I solve problems in our customers’ solutions, so customers can use their solutions to solve problems.

Focus. Work hard. Stay positive. It always seems impossible until it’s done.

Daniel Tam

Account Manager

Length of Time at JDS

10 years

Skills

I am an experienced Account Manager, with a strong capability for problem-solving and creating the solutions that best match each customer’s unique requirements. I have a demonstrated technical capability across numerous disciplines, including:

  • Performance testing
  • Application performance monitoring (‘APM’)
  • IT Service Management (ITIL 3 certified)
  • CMDB, automation and orchestration

Workplace Passion

I have a strong passion for understanding our customers’ unique requirements and helping to create solutions that best solve both their technical and business needs.

Our Splunk stories

Posted by JDS Admin in Blog, Splunk
Get more from your SAP data with PowerConnect


SAP PowerConnect for Splunk is the only SAP-certified SAP to Splunk connector, “powered by SAP NetWeaver.” This solution runs inside SAP and extracts machine data, security events, and logs from SAP and ingests the information into Splunk in real time. As an SAP partner and the sole Australian implementation partner for SAP PowerConnect for Splunk, JDS Australia can help you see what’s happening inside your SAP system, proactively report on trends, alert on incidents, and even enable you to predict what will happen in the future.

This screen shows standard performance metrics from within SAP:

SAP Data PowerConnect

SAP PowerConnect for Splunk converts this tabular data and presents it on a highly visual dashboard:

PowerConnect for Splunk

Find out more about SAP PowerConnect for Splunk and how it can be a key enabler for your business in 2018 by attending our event, “Splunkify your SAP data with PowerConnect.” JDS and Splunk are co-hosting events in Sydney and Melbourne in May 2018. Light canapés and beverages will be served. Choose the most convenient location for you and register below. We look forward to seeing you there.

What is AIOps?


Gartner has coined another buzzword to describe the next evolution of ITOM solutions. AIOps uses the power of machine learning and big data to provide pattern discovery and predictive analysis for IT Ops.

What is the need for AIOps?

Organisations undergoing digital transformation are facing challenges that they wouldn’t have faced even ten years ago. Digital transformation represents a change in organisational structure, processes, and abilities, all driven by technology. As technology changes, organisations need to change with it.

This change comes with challenges. The old ITOps solutions now need to manage microservices, public and private APIs, and Internet-of-Things devices.

As consumers, IT managers are used to personalised movie recommendations from Netflix, or preemptive traffic warnings from Google. However, their IT management systems typically lack this sort of smarts—reverting to traffic light dashboards.

There is an opportunity in the market to combine big data and machine learning with IT operations.

Gartner has coined this concept AIOps: Artificial Intelligence for IT Ops.

“AIOps platforms utilize big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.” – Colin Fletcher, Gartner


Current State – Gartner Report

Gartner coined the term AIOps in 2016, although they originally called it Algorithmic IT Operations. They don’t yet produce a Magic Quadrant for AIOps, but one is likely coming.

Gartner has produced a report which summarises both what AIOps is hoping to solve, and which vendors are providing solutions.

Eleven core capabilities are identified, with only four vendors able to deliver all eleven: HPE, IBM, ITRS, and Moogsoft.

How does Splunk do AIOps?

Splunk is well positioned to be a leader in the AIOps field. Their AIOps solution is outlined on their website. Splunk AIOps relies heavily on the Machine Learning Toolkit, which provides Splunk with about 30 different machine learning algorithms.

Splunk provides an enterprise machine learning and big data platform which will help AIOps managers:

  • Get answers and insights for everyone: Through the Splunk search language, users can analyse past and present patterns of IT system and service performance, and predict future behaviour.
  • Find and solve problems faster: Detect patterns to identify indicators of incidents and reduce irrelevant alerts.
  • Automate incident response and resolution: Splunk can automate manual tasks, which are triggered based on predictive analytics.
  • Predict future outcomes: Forecast IT costs and learn from historical analysis. Better predict points of failure to proactively improve the operational environment.
  • Continually learn to make more informed decisions: Detect outliers, adapt thresholds, alert on anomalous patterns, and improve learning over time.
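
As a small illustration of the forecasting point, core SPL already includes a predict command, and the Machine Learning Toolkit extends this with many more algorithms. A hypothetical example, assuming CPU metrics are stored in an index named itops:

index=itops sourcetype=cpu_metrics
| timechart span=1h avg(cpu_pct) AS cpu
| predict cpu AS forecast future_timespan=24

This projects the hourly average CPU load 24 intervals into the future, with confidence bounds that can feed dashboards or alerts.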

Current offerings like ITSI and Enterprise Security also implement machine learning, taking advantage of anomaly detection and predictive algorithms.

As the complexity of IT systems increases, so too will the need to analyse and interpret the large amount of data generated. Humans have been doing a good job to date, but there will come a point where the complexity is too great. Organisations that can complement their IT Ops with machine learning will have a strategic advantage over those that rely on people alone.

Conclusion

To find out more about how JDS can help you implement AIOps, contact our team today on 1300 780 432, or email contactus@jds.net.au.


Posted by JDS Admin in Blog, Splunk
Using Splunk to look for Spectre and Meltdown security breaches


Meltdown and Spectre are two security vulnerabilities that are currently impacting millions of businesses all over the world. Since the news broke about the flaw in Intel processor chips that opened the door to once-secure information, companies have been bulking up their system security and implementing patches to prevent a breach.

Want to make sure your system is protected from the recent outbreak of Spectre and Meltdown? One of our Splunk Architects, Andy Erskine, explains one of the ways JDS can leverage Splunk Enterprise Security to check that your environment has been successfully patched.

What are Spectre and Meltdown and what do I need to do?

According to the researchers who discovered the vulnerabilities, Spectre “breaks the isolation between different applications”, which allows attackers to expose data that was previously considered secure. Meltdown “breaks the most fundamental isolation between user applications and the operating system”.

Neither type of attack requires software vulnerabilities to be carried out. Labelled “side-channel attacks”, they are not tied to any particular operating system, as they use side channels to read information from memory locations that should be isolated.

The best way to lower the risk of your business’s sensitive information being hacked is to apply the newly created software patches as soon as possible.

Identifying affected systems

Operating system vendors are forgoing their regular patch release cycles and publishing out-of-band operating system patches to address this issue.

Tools such as Nessus/Tenable, Qualys, and Tripwire IP360 regularly scan environments for vulnerabilities such as these, and can identify affected systems by checking for the newly released patches.

Each plugin created for the Spectre and Meltdown vulnerabilities will be marked with at least one of the following CVEs:

Spectre:

CVE-2017-5753: bounds check bypass

CVE-2017-5715: branch target injection

Meltdown:

CVE-2017-5754: rogue data cache load

To analyse whether your environment has been successfully patched, you would want to ingest data from these traditional vulnerability management tools and present the data in Splunk Enterprise Security.

Most of these tools have a Splunk app that brings the data in and maps it to the Common Information Model. From there, you can use searches like the one below to identify the specific CVEs associated with Spectre and Meltdown.

Once the data is coming into Splunk, we can create a search to discover any vulnerable instances within your environment, proactively notify on them, and make them a priority for patching.

Here is an example search that customers using Splunk Enterprise Security can use to identify vulnerable endpoints:

tag=vulnerability (cve="CVE-2017-5753" OR cve="CVE-2017-5715" OR cve="CVE-2017-5754")
| table src cve pluginName first_found last_found last_fixed
| dedup src
| fillnull value=NOT_FIXED last_fixed
| search last_fixed=NOT_FIXED
| stats count as total

Example Dashboard Mock-Up

JDS consultants are experts in IT security and proud partners with Splunk. If you are looking for advice from the experts to implement or enhance Splunk Enterprise Security or any other Splunk solution, get in touch with us today.

Conclusion

To find out more about how JDS can help you with your security needs, contact our team today on 1300 780 432, or email contactus@jds.net.au.

Our team on the case

Work hard, Play hard.

Andy Erskine

Consultant

Length of Time at JDS

2.5 years

Skills

  • The CA Suite of tools, notably CA APM, CA Spectrum, CA Performance Manager
  • Splunk

Workplace Passion

Learning new applications and applying them to today’s problems.


Posted by JDS Admin in Splunk, Tech Tips
How synthetic monitoring will improve application performance for a large bank


JDS is currently working with several businesses across Australia to implement our custom synthetic monitoring solution, Active Robot Monitoring—powered by Splunk. ARM is a simple and effective way of maintaining the highest quality customer experience with minimal cost. While other synthetic monitoring solutions operate on a price-per-transaction model, ARM allows you to conduct as many transactions as you want under the umbrella of your existing Splunk investment. We recently developed a Splunk ARM solution for one of the largest banks in Australia and are in the process of implementing it. Find out more about the problem presented, our proposed solution, and the expected results below.


The problem

A large Australian bank (‘the Bank’) needs to properly monitor the end-to-end activity of its core systems and applications, to ensure that they are available and performing as expected at all times. Downtime or poor performance, even for only a few minutes, could potentially result in significant loss of revenue and reputational damage. While unscheduled downtime or performance degradation will inevitably occur at some point, the Bank wants to be notified immediately of any performance issues. They also want to identify the root cause of the problem easily, resolve the issue, and restore expected performance and availability as quickly as possible. To achieve this, the Bank approached JDS for a solution to monitor, help triage, and highlight error conditions and abnormal performance.

The solution

JDS proposed implementing the JDS Active Robot Monitoring (ARM) Splunk application. ARM is a JDS-developed Splunk application which combines scripts built with a variety of tools (e.g. Selenium) with custom-built Splunk dashboards. In this case, Selenium is used to emulate actual users interacting with the web application. These interactions or transactions will be used to determine whether the application is available, whether a critical function of the application is working properly, and how the application is performing. All that information will be recorded in Splunk and used for analysis.

Availability and performance metrics will be displayed in dashboards, which fulfil several purposes—namely providing management with a summary view of application status, and support personnel with the information needed to identify the root cause of problems efficiently. In this case, Selenium was chosen as it provides complete customisation not available in other synthetic monitoring offerings, and when coupled with Splunk’s analytical and presentation capability, provides the best solution to address the Bank’s problem.
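
Once the Selenium transactions are logging their results into Splunk, dashboards like these can be driven by ordinary searches. A sketch, using hypothetical index, sourcetype, and field names:

index=arm sourcetype=arm:transaction app="internet_banking"
| timechart span=5m avg(response_time) AS avg_response count(eval(status="FAIL")) AS failed_steps

A timechart like this gives management an at-a-glance availability view, while support staff can drill into the underlying events for root cause analysis.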

The expected results

With the implementation of the JDS ARM application at the Bank, the availability and performance of their core applications is expected to improve and remain at a higher standard. Downtime, if it occurs, will be quickly rectified, as support personnel will be alerted immediately and have access to all the vital data required to perform a root cause analysis of the problem quickly. Management will have a better understanding of the health of the applications and will be able to assign valuable resources more effectively.

What can ARM do for your business?

Throughout the month of November 2017, JDS is open to registrations for a free on-site workshop at your business. We will discuss Active Robot Monitoring and how it could benefit your organisation specifically. To register for this exclusive opportunity, please enter your information below and one of our account executives will contact you to set up a time to meet at your location.

By clicking this button, you submit your information to JDS Australia, who will use it to communicate with you about this request and their other services.

Our team on the case

Always be learning.

Mingxen Chuah

Consultant

Length of Time at JDS

2+ years

Skills

Automation, monitoring, software development

Workplace Solutions

Splunk, HP Operations Orchestration, CA APM

Commas save lives.

Amy Clarke

Marketing Communications Manager

Length of Time at JDS

Since July 2017

Skills

Writing, communications, marketing, design, developing and maintaining a brand, social media, sales.

Workplace Solutions

Words matter, so make sure you get them right!

Workplace Passion

Helping a company develop its voice and present that to their clients with pride.

Posted by JDS Admin in Case Study, Splunk
Splunk ARM on-site workshop


Workshop registration form

Splunk ARM Workshop

By clicking this button, you submit your information to JDS Australia, who will use it to communicate with you about this request and their other services.

What is Active Robot Monitoring?

ARM is a custom-built solution developed by JDS Australia—the best synthetic monitoring solution on the market. Unlike other synthetic monitoring solutions, ARM is flexible and can be tailored to the specific needs of your business. Leveraging the power of Splunk, ARM enables synthetic performance monitoring for websites, mobile, cloud-based, on-premise, and SaaS apps. This means that if you are already a Splunk customer, you can handle thousands of transactions in Splunk ARM with no further licensing costs.

Learn how you can ensure a seamless experience for your customers without relying on real users to tell you when a problem occurs by registering for an on-site workshop of Splunk ARM in your office. Our account executives will give you an overview of what Splunk ARM is capable of, what it looks like in action, and how it can benefit your business.
