Your organisation deserves a good dashboard (and here’s why)

Cars cost a lot of money, and when a driver gets behind the wheel, they want to know that every component is working correctly. It can mean the difference between life and death—not to mention getting to your destination on time! For this reason, vehicle dashboards are painstakingly designed to be simple yet functional, so that virtually anyone can understand them at a glance.

In a similar vein, consider how much investment went into building your organisation and its IT infrastructure, not to mention ongoing maintenance. The cost of system failure can be make-or-break for your business: missed destinations and missed deadlines. In some sectors, such as health or search and rescue, it can even lead to injury or loss of life. With the consequences of lost visibility in mind, take a look at your organisation's dashboards (if they exist!). Ask yourself—are they as easy to understand as the dashboard of a car?

Most organisations would reply 'no' to that question. All too often, dashboards exist because the organisation's monitoring solution provided one out-of-the-box. That's fine if your intended audience is all technically inclined, and understands what it means when there is a 'memory bottleneck' or the 'committed memory in use is too high'. These alerts, however, might mean nothing to upper management or the executive team, who are directly responsible for approving your team's budget. The information needs to be translated so that it is accessible to all your key decision-makers. So what are the first steps?

Here are three initial items to consider:

  1. Context is everything! Without context, your executive team may not understand the impact of an under-resourced ESX server that's beginning to fail. If, however, your dashboard were to show that the ESX server happens to host the core income stream systems for the organisation, you may have their attention (and funding).
  2. Visualise the data! Approximately 60% of the world's population are said to be visual thinkers, so your dashboard should be designed visually. Find a way to visualise your data. Show the relationships and dependencies between systems. Oh, and "death to pie charts!".
  3. Invest the time and effort! Find your creative spark, and brainstorm as a team. A well-designed dashboard will pay ongoing dividends with every incident managed or business case written. Make sure you allot time to validate your work against SLAs.

If you need help with dashboard development or design, give JDS a call on 1300 780 432 and speak with one of our friendly consultants.

Our team on the case

Never burn a bridge; humility may require you to take a step backward one day.

Warren Saunders

Consultant

Length of Time at JDS

6 years

Workplace Solutions

All things Event Management (HPE Operations Manager i, HPE Business Service Management, HPE Business Process Monitoring, HPE SiteScope, and custom integrations for everything in between).

Workplace Passion

  • Making complex issues easy to understand through competent technical writing, presentation delivery and creative design.
  • Finding solutions to problems which were previously placed in the “too hard” basket.

Ensure IT works.

Chris Younger

Delivery Manager

Length of Time at JDS

10 years

Skills

HP Certified Instructor, HP Business Service Management, HP LoadRunner, Splunk Sales Engineer, Certified in ServiceNow.

Workplace Passion

I am a big advocate for the value provided by Synthetic Business Process monitoring. When I first saw it 10 years ago in HP Business Availability Center, I was impressed, and I am still touting its benefits after all this time. Time and time again, I have seen it provide invaluable visibility into the user experience. These days it can be achieved using trendy newer tools such as Splunk for a fraction of the price it previously cost.

Work hard, Play hard.

Andy Erskine

Consultant

Length of Time at JDS

2.5 years

Skills

  • The CA Suite of tools, notably CA APM, CA Spectrum, CA Performance Manager
  • Splunk

Workplace Passion

Learning new applications and applying them to today’s problems.
Posted by wsaunders in Monitor, Tech Tips
Australia’s new mandatory security notifications

The majority of Australian organisations will soon be required to report major data security breaches. But what does this mean, and how can businesses avoid associated risks?

Several years ago, JDS received a fax. This was unusual for two reasons: firstly, it was a fax in the 21st century; secondly, it was an authorisation for payment of 60 million dollars from a large market fund. The fax was from a broker, who was merely confirming 'our' bank account details before sending through the transfer—if JDS were in the business of heists, it would have been a matter of changing a digit or two, then faxing the form back for payment.

As you can tell by the fact JDS haven't converted downtown Melbourne into a tropical beach, no such skullduggery transpired: instead, JDS MD John Bearsley called the broker and explained that he might have the wrong fax number on file. The broker was a bit shocked, to say the least. But what about the client? Did they ever find out?

Under Australia's new mandatory data security notification laws, applicable from 22 February 2018, the broker would have been forced to notify the client and the Office of the Australian Information Commissioner (OAIC) of the information breach. This is because, through a simple mix-up, we gained access to personal and private information about the fax's intended recipient, and the breach could have had serious consequences. Under the new requirements, data security breaches are to be dealt with as follows:

  1. Contain the breach and assess
  2. Evaluate the risks for individuals associated with the breach
  3. Consider whether there is need for notification
  4. Review and take action to prevent further breaches

The difference between this new scheme and any internal risk or incident management procedure lies in the role of compulsory reporting. If there is real risk of serious harm, then the individuals involved, and potentially the police as well as the OAIC, must be notified. This notification is to include the scope of the breach, and information regarding containment of the breach and action taken to prevent further breaches.

So what constitutes 'serious harm'? This relates to the type of information, information sensitivity, whether the information is protected, whether the information can be used in combination with other information to cause harm, the attributes of the person or body who now hold the information, and the nature of the harm. It ties into existing Australian privacy and information security legislation, and has particular relevance for organisations that hold databases of information, particularly personal or sensitive information, about their customers or users. Consider the following IT security-related disasters that have come to light, noting that a number are based in the US, where compulsory reporting is already in effect:

Bangladesh Bank

A group of internationally-based hackers attempted to steal nearly US$1 billion from Bangladesh Bank after identifying some security vulnerabilities. They compromised the bank’s network, and used the credentials they gained to authorise bank transfers to the tune of US$951 million. Similar attacks have been seen at the Banco del Austro in Ecuador (US$12 million stolen) and the Tien Phong Bank in Vietnam (unsuccessful).

Result

US$101 million of transfers were successfully completed by the thieves; US$63 million was never recovered.

Indiana University

The names, addresses, and Social Security Numbers of a large number of Indiana University students and graduates were stored on an unprotected site. The lack of protection meant that several data mining applications not only accessed but also downloaded all the data files.

Result

Students and credit reporting agencies had to be notified; ongoing risk for financial fraud and identity theft, and associated liability.

Anthem

Anthem suffered a cyber attack in late 2014, with information accessed potentially including names, home addresses, email addresses, employment information, birth dates, and income data. The FBI investigation found that the attacks were conducted by international parties who were curious about the American healthcare system. Almost all of Anthem’s product lines were impacted.

Result

Anthem had to pay US$115 million to settle a class action litigation suit as a result of the breach. They also provided up to four years of credit monitoring and identity protection services to affected customers.

Philippine Commission on Elections (COMELEC)

Weaknesses in COMELEC’s network and data security meant hackers were able to access the full database of all registered voters in the Philippines. The database contained personal details, many of them stored in plain text, including fingerprints, passport numbers and expiry dates, and potentially voting behaviour.

Result

The data could be used for extortion, phishing, or blackmailing purposes, and related hacks may lead to election manipulation.

Tesco Bank

Tesco Bank had monitoring and security mechanisms in place. However, Tesco Bank data, such as credit card verification details, was accessible to its parent company Tesco, which does not appear to have been as secure. Security is only as strong as the weakest link in the chain, and in this instance, money was stolen and customers defrauded.

Result

Customers defrauded to the tune of 2.5 million pounds. The bank had to pay associated costs, and manage associated brand damage.

Yahoo

Yahoo’s security was breached twice, in 2013 (one billion accounts) and 2014 (500 million accounts stolen by a state-sponsored actor). Information included user names, telephone numbers, birth dates, and encrypted passwords.

Result

Yahoo’s sale price to Verizon was reduced by some US$350 million as a result of the hacks.

The above breaches cover a wide scope of industries—from health to insurance, government, and education. They have led to wide-ranging financial and reputational damage.

It would be naive to think that similar data breaches don't take place in Australia, though at the moment, it is not compulsory to report them. In 2015–2016, 107 organisations voluntarily notified the OAIC of breaches, and we are likely to see a rise in this number once the new legislation kicks in.

What does this mean for your organisation?

If your organisation deals with sensitive or personal information, including data such as emails, passwords, addresses, birth dates, health records, education records, passport numbers, ID numbers, travel information etc., then you need to prepare for the upcoming legislation. Part of this will be ensuring you have the correct policies, procedures, and training in place—and the other part will be making sure your environment is protected. The security of your IT infrastructure has always been, and will continue to be, vital: but now, there is an increased risk to your organisation, financially and particularly reputationally, if you do not ensure your environment is as secure as possible before mandatory reporting comes in. Test and assess your infrastructure and applications now, rather than down the line following a reportable incident.  

For advice or to book an assessment, call our friendly JDS consultants today.

Posted by Laura Skillen in News, Secure
The Splunk Gardener

The Splunk wizards at JDS are a talented bunch, dedicated to finding solutions—including in unexpected places. So when Sydney-based consultant Michael Clayfield suffered the tragedy of some dead plants in his garden, he did what our team do best: ensure it works (or ‘lives’, in this case). Using Splunk’s flexible yet powerful capabilities, he implemented monitoring, automation, and custom reporting on his herb garden, to ensure that tragedy didn’t strike twice.

My herb garden consists of three roughly 30cm x 40cm pots, each containing a single plant—rosemary, basil, and chilli. The garden is located outside our upstairs window and receives mostly full sunlight. While that’s good for the plants, it makes it harder to keep them properly watered, particularly during the summer months. After losing my basil and chilli bush over Christmas break, I decided to automate the watering of my three pots, to minimise the chance of losing any more plants. So I went away and designed an auto-watering setup, using soil moisture sensors, relays, pumps, and an Arduino—an open-source electronic platform—to tie it all together.

Testing the setup by transferring water from one bottle to another.

I placed soil moisture sensors in the basil and the chilli pots—given how hardy the rosemary was, I figured I could just hook it up to be watered whenever the basil in the pot next to it was watered. I connected the pumps to the relays, and rigged up some hosing to connect the pumps with their water source (a 10L container) and the pots. When the moisture level of a pot got below a certain level, the Arduino would turn the equivalent pump on and water it for a few seconds. This setup worked well—the plants were still alive—except that I had no visibility over what was going on. All I could see was that the water level in the tank was decreasing. It was essential that the tank always had water in it, otherwise I'd ruin my pumps by pumping air.

To address this problem, I added a float switch to the tank, so the system could stop pumping air if I forgot to refill it. Using a WiFi adapter, I connected the Arduino to my home WiFi. Now that the Arduino was connected to the internet, I figured I should send the data into Splunk. That way I'd be able to set up an alert notifying me when the tank’s water level was low. I'd also be able to track each plant’s moisture levels.

The setup deployed: the water tank is on the left; the yellow cables coming from the tank are for the float switch; and the plastic container houses the pumps and the Arduino, with the red/blue/black wires going to the sensors planted in the soil of the middle (basil) and right (chilli) pots. Power is supplied via the two black cables, which venture back inside the house to a phone charger.

Using the Arduino’s WiFi library, it’s easy to send data to a TCP port. This means that all I needed to do to start collecting data in Splunk was to set up a TCP data input. Pretty quickly I had sensor data from both my chilli and basil plants, along with the tank’s water status. Given how simple it was, I decided to add a few other sensors to the Arduino: temperature, humidity, and light level. With all this information nicely ingested into Splunk, I went about creating a dashboard to display the health of my now over-engineered garden.

The overview dashboard for my garden. The top left and centre show current temperature and humidity, including trend, while the top right shows the current light reading. The bottom left and centre show current moisture reading and the last time each plant was watered. The final panel in the bottom right gives the status of the tank's water level.

With this data coming in, I was able to easily understand what was going on with my plants:

  1. I can easily see the effect watering has on my plants, via the moisture levels (lower numbers = more moisture). I generally aim to maintain the moisture level between 300 and 410. Over 410 and the soil starts getting quite dry, while putting the moisture probe in a glass of water reads 220—so it’s probably best to keep it well above that.
  2. My basil was much thirstier than my chilli bush, requiring about 50–75% more water.
  3. It can get quite hot in the sun on our windowsill. One fortnight in February recorded nine 37+ degree days, with the temperature hitting 47 degrees twice during that period.
  4. During the height of summer, the tank typically holds 7–10 days’ worth of water.
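The calibration in point 1 boils down to a small decision rule. A sketch of that logic (the function names are mine; the 300–410 band and the 220 in-water reading come from the observations above):

```javascript
// Readings rise as the soil dries: ~220 with the probe in a glass of water,
// comfortable between 300 and 410, and dry above 410.
const WET_LIMIT = 300;
const DRY_LIMIT = 410;

// Fire the pump only once the soil reads drier than the comfortable band.
function needsWater(reading) {
  return reading > DRY_LIMIT;
}

// True while the reading sits inside the comfortable band.
function inComfortBand(reading) {
  return reading >= WET_LIMIT && reading <= DRY_LIMIT;
}
```

On each polling cycle the Arduino only needs `needsWater` to decide whether to switch the relay; `inComfortBand` is handy for colouring a dashboard panel.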

Having this data in Splunk also alerts me to when the system isn't working properly. On one occasion in February, I noticed that my dashboard was consistently displaying that the basil pot had been watered within the last 15 minutes. After a few minutes looking at the data, I was able to figure out what was going on.

Using the above graph from my garden’s Splunk dashboard, I could see that my setup had correctly identified that the basil pot needed to be watered and had watered it—but I wasn't seeing the expected change in the basil’s moisture level. So the next time the system checked the moisture level, it saw that the plant needed to be watered, watered it again, and the cycle continued. When I physically checked the system, I could see that the Arduino was correctly setting the relay and turning the pump on, but no water was flowing. After further investigation, I discovered that the pump had died. Once I had replaced the faulty pump, everything returned to normal.
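This failure mode, watering repeatedly with no change in moisture, is easy to detect automatically. Below is a sketch of such a check, which could back a Splunk alert or run on the Arduino itself (the event shape is assumed for illustration):

```javascript
// events: oldest-to-newest readings of { moisture, watered }, where
// watered is true when the pump ran in that interval. Lower moisture
// readings mean wetter soil, so if the last n intervals all watered the
// pot and the reading has not fallen, no water is actually flowing.
function pumpLooksDead(events, n) {
  if (events.length < n + 1) return false;
  const recent = events.slice(-(n + 1));
  const keptWatering = recent.slice(1).every(function (e) { return e.watered; });
  const noImprovement = recent[recent.length - 1].moisture >= recent[0].moisture;
  return keptWatering && noImprovement;
}
```

With n set to three or four, a healthy pump drops the reading within a cycle or two and the check stays quiet, while a dead pump trips it quickly.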

Since my initial design, I have upgraded the system a few times. It now joins a number of other Arduinos I have around the house, sending data via cheap radio transmitters to a central Arduino that then forwards the data on to Splunk. Aside from the pump dying, the garden system has been functioning well for the past six months, providing me with data that I will use to continue making the system a bit smarter about how and when it waters my plants.

I've also 3D printed a nice case in UV-resistant plastic, so my gardening system no longer has to live in an old lunchbox.

Just Splunk it.

Michael Clayfield

Splunk Consultant

Length of Time at JDS

2.5 years

Skills

Splunk, HP BSM, 3D Printing.

Workplace Passion

Improving monitoring visibility and saving people’s time.

Posted by Laura Skillen in Case Study, Monitor, Splunk
ServiceNow—The latest and greatest at Knowledge17

ServiceNow’s Knowledge17 event has now come to a close, with many exciting new features and developments announced. This year’s was the biggest Knowledge event to date, with 15,000 delegates in attendance—the first event, in 2004, had just 85! The theme for 2017 was ‘Enterprise at Lightspeed’, and this idea underpins the year’s featured products.

New ServiceNow CEO, John Donahoe, was the conference’s first keynote speaker. He brings leadership and vision from his time at eBay and Bain, and his speech outlined how experience in enterprise transformation is informing his vision for ServiceNow’s continuing growth. Donahoe emphasised his role as a servant of customers, staff and partners, with his first 100 days spent on a customer listening tour.

ServiceNow have committed to continuing to provide customers with more capabilities out-of-the-box, better visibility over the product roadmap, improved training for new products, and a better user experience.

Event guests included customers, such as Ashley Haynes-Gaspar from GE, who described their experience of business transformation delivered on a ServiceNow platform.  Ms Haynes-Gaspar shared the insights she has gained from her experiences as an advocate for women in IT, and entertained the crowd in her exchange with executive Dave Schneider, referring to him as a great ‘man-bassador’.

GE plans to consolidate 90 systems to a single platform, as well as enhancing self-service through use of Service Portal on top of Customer Service Management, which Haynes-Gaspar demonstrated for the audience. The solution featured GE service agents as the end users, with tabbed views, a Service 360 view, and a responsive solution that provides suggestions on how to resolve cases. A 10% increase in self-service will deliver an anticipated $10m in benefits.

Knowledge17 focused on the idea that automation will drive growth and productivity for organisations, and highlighted new offerings in key areas.

HR Onboarding

One of the most requested capabilities in ServiceNow has been the ability to deliver HR approval and provisioning workflows out-of-the-box. ServiceNow have now responded with an application introducing a new framework that enables onboarding, offboarding, and change activities to be driven by configuration.

Security Response

ServiceNow are recognising the increasing importance of security to the enterprise, and have identified security response as a key area requiring automation. The growth of IoT and the increasing prevalence of attacks means staff do not have the capacity to respond to security incidents and vulnerabilities without the assistance of automated remediation. ServiceNow Security Operations extends existing SIEM capabilities to provide triage and automated response. It also leverages the policy and compliance capabilities found in GRC to monitor and proactively manage risks in IT.

The Consumer Experience

A fundamental principle underlying ‘Enterprise at Lightspeed’ is that enterprise users are entitled to an experience like that associated with consumer services in our lives outside of work. Central to delivering this type of experience is the Service Portal, which links the intuitiveness and flexibility of the modern web service UI to the power of the ServiceNow platform.

Performance Improvement

Donahoe acknowledged that customers are demanding better performance from ServiceNow. He outlined performance improvements expected from the upcoming release of Jakarta, and committed to continuing to provide improvements in future releases.

Software Asset Management

While ServiceNow has included hardware asset management for several years, IT operations professionals have been screaming for the addition of Software Asset Management (SAM) capabilities. ServiceNow has stepped up, and their new SAM module provides automation for managing software inventories. It features native integration with systems of record such as SCCM, as well as the ability to sync with sources on an ongoing basis. It is a significant improvement for those of us that have been managing true-ups in Excel, because ServiceNow is able to provide a central, automatically-synced record. Where SAM truly shines is in the ability to build intelligent business rules that model your actual licence entitlements, and proactively manage allocations. IT Ops can make real savings, not only by lessening labour efforts, but by reducing software licensing costs.

Intelligent Automation

Intelligent automation of IT operations promises to provide greater visibility and proactive management of IT. JDS continues to follow this trend closely, as we have been bringing this capacity to customers for several years via a range of products in the industry. ServiceNow brings a new approach, with a ‘Guitar Hero’-style user interface that maps events onto timelines. The potential is there to provide more meaningful insights for IT, enabling staff to more efficiently manage incidents, problems, and changes in their environment. ServiceNow uses machine learning to provide automated responses to issues based on event and response histories. ServiceNow CTO Dave Wright demonstrated how the approach is capable of raising incidents for future issues that have not yet occurred. Exciting as this is, we will want to take care with this feature in customer environments, to be sure we are not generating unnecessary noise.

Benchmarking

Jakarta brings the ability for IT managers to benchmark performance of their operation against peers in industry groups. This will be valuable for organisations that apply a continual improvement approach to their IT operations, and which are mature enough to benefit from a data-driven approach.

Over the next few weeks, JDS will continue to bring you insights into our experiences and learnings at Knowledge17, and what lies ahead for customers—so stay tuned!

Posted by Matthew Stubbs in News, Products, ServiceNow
Service Portal Simplicity

The introduction of the Service Portal, using AngularJS and Bootstrap, has given ServiceNow considerable flexibility, allowing customers to develop portals to suit their own specific needs.

While attribution depends on whether you subscribe to Voltaire, Winston Churchill, or Uncle Ben from Spiderman, “With great power comes great responsibility.” This is especially true when it comes to the Service Portal: customers should tread carefully and use its flexibility wisely.

A good example arose during a recent customer engagement, where a requirement emerged to give some self-service portal users the ability to see all the incidents within their company. This particular customer provides services across a range of smaller companies, and wanted key personnel in those companies to see all their incidents (without being able to see those belonging to other companies, and without being able to update or change others’ incidents).

The temptation was to build a custom widget from scratch using AngularJS, but simplicity won the day. Instead of coding, testing, and debugging a custom widget, JDS reused an existing widget, wrapping it with a simple security check, and reduced the amount of code required to implement this requirement down to one line of HTML and two lines of server-side code.

The JDS approach was to instantiate a simple list widget on-the-fly, but only if the customer’s security requirements were met.

Normally, portal pages are designed using Service Portal’s drag-and-drop page designer, visible when you navigate to portal configuration. In this case, we’re embedding a widget within a widget.

We’ll create a new widget called secure-list that dynamically adds the simple-list-widget only if needed when a user visits the page.

Let’s look at the code—all three lines of it:

HTML

<div><sp-widget widget="data.listWidget"></sp-widget></div>

By dynamically creating a widget only if it meets certain requirements, we can control when this particular list widget is built.

Server-Side Code

(function() {
    if (gs.getUser().isMemberOf('View all incidents')) {
        data.listWidget = $sp.getWidget('widget-simple-list', {
            table: 'incident',
            display_field: 'number',
            secondary_fields: 'short_description,category,priority',
            glyph: 'user',
            filter: 'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',
            sp_page: 'ticket',
            color: 'primary',
            size: 'medium'
        });
    }
})();

This code will only execute if the current user is a member of the group View all incidents, but the magic occurs in the $sp.getWidget, as this is where ServiceNow builds a widget on-the-fly. The challenge is really, ‘where can we find all these options?’

How do you know what options are needed for any particular widget, given those available change from one widget to another? The answer is surprisingly simple, and can be applied to any widget.

When viewing the service portal, administrators can use ctrl-right-click to examine any widgets on the page. In this case, we’ll examine the way the simple list widget is used by selecting instance options.

Immediately, we can see there are a lot of useful values that could be set, but how do we know which are critical, and what additional options there are?

How do we know what we should reference in our code?

Is “Maximum entries” called max, or max_entry, or max_entries? Also, are there any other configuration options that might be useful to us?

By looking at the design of the widget itself in the ServiceNow platform (by navigating to Service Portal > Widgets), we can scroll down and see both the mandatory and optional configuration values that can be set per widget, along with the correct names to use within our script.

Looking at the actual widget we want to build on-the-fly, we can see both required fields and a bunch of optional schema fields. All we need to do to make our widget work is supply these as they relate to our requirements. Simple.

{
    table: 'incident',
    display_field: 'number',
    secondary_fields: 'short_description,category,priority',
    glyph: 'user',
    filter: 'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',
    sp_page: 'ticket',
    color: 'primary',
    size: 'medium'
}

Also, notice the filter in our widget settings, as that limits results shown according to the user’s company…

opened_by.company=javascript:gs.getUser().getCompanyID()

This could be replaced with a dynamic filter to allow even more flexibility, should requirements change over time, allowing the filter to be modified without changing any code.
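One refinement worth considering: because a widget instantiated this way bypasses the page designer, a missing mandatory option tends to surface only as an empty widget at runtime. A small sanity check before calling $sp.getWidget can catch that early. The helper below is a sketch, and its required-field list is illustrative; take the real list from the widget's schema fields:

```javascript
// Return the names of any required fields that are absent or empty in the
// options object about to be passed to $sp.getWidget.
function missingOptions(options, requiredFields) {
  return requiredFields.filter(function (field) {
    return options[field] === undefined ||
           options[field] === null ||
           options[field] === '';
  });
}

var options = {
  table: 'incident',
  display_field: 'number',
  secondary_fields: 'short_description,category,priority'
};

// missingOptions(options, ['table', 'display_field', 'sp_page'])
// returns ['sp_page'], flagging the field still to be supplied.
```

Logging the result of this check (rather than silently skipping the widget) makes misconfiguration obvious the first time the page loads.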

The moral of the story is keep things simple and…

Don't reinvent the wheel


Document as you go.

Peter Cawdron

Consultant

Length of Time at JDS

5 years

Skills

ServiceNow, LoadRunner, HP BSM, Splunk.

Workplace Passion

I enjoy working with the new AngularJS portal in ServiceNow.

Posted by Peter Cawdron in ServiceNow, Tech Tips
IT security is broken—but help is on the horizon.

2017 may well be remembered as the year we lost our software innocence. This past weekend, the ransomware ‘WannaCry’, built on an NSA/CIA-related leak, caused massive data loss and disrupted hospital services and other institutions worldwide. There have also been recent, highly-publicised allegations that hacking operations have affected email servers, databases, banks—and even elections.

As IT users, we all have a responsibility to protect the data we collect, store, and use. It is no longer acceptable to ignore information security in an attempt to reduce costs, or because we are unaware of the threats posed by poor security. Yet as the complexity of software has exploded, particularly with the move to mobile and cloud applications, few technical experts, or even groups of experts, understand all the functions and vulnerabilities of their software. This creates a situation brimming with opportunities for ill-intentioned actors to gain access to sensitive information, such as client or patient personal details, confidential business data, or the programs underlying critical business applications. The trend is not slowing, and it means individuals and organisations need to take immediate steps to secure their data.

The best time to plant a tree is twenty years ago. The second-best time is now.

Make sure you secure your environment now.

Fortunately, there are changes on the horizon, and things you can do to secure your data today. Firstly, the reality of the threat means the Australian government is investing heavily in improving our national monitoring and intelligence defences, and businesses are starting to follow suit. Secondly, the government has (finally!) passed legislation that requires mandatory reporting of data breaches in Australian companies and institutions. This means that from February 2018, issues must be reported, and can no longer be kept out of the public eye—this is good for the safety of Australian businesses and customers, and something for which organisations must prepare as soon as possible. Preparation entails active penetration testing, code and configuration reviews, and security event monitoring—simply deploying a firewall is not sufficient to protect an organisation.

Finally, trusted companies like JDS Australia have the tools, skills, and capabilities to help secure your environment. We can analyse your software to find weaknesses, work with your development and project teams to bake healthy security practices into your software from the start, and provide the tools and services that will enable you to easily monitor the security posture of your entire organisation, thwarting the ‘bad guys’ wherever they are. Vitally, we strive to communicate potential problems in an easily understood manner, allowing our customers to make informed business decisions.

For a confidential discussion about how we can help, call JDS Australia on 1300 780 432.

Posted by Nick Wilton in Secure, Test, 0 comments
Using Splunk and Active Robot Monitoring to resolve website issues


Recently, one of JDS’ clients reached out for assistance, as they were experiencing inconsistent website performance. They had just moved to a new platform, and were receiving alerts about unexpectedly slow response times, as well as intermittent logon errors. They were concerned that, were the reports accurate, this would have an adverse impact on customer retention, and potentially reduce their ability to attract new customers. When manual verification couldn’t reproduce the issues, they called in one of JDS’ sleuths to try to locate and fix the problem—if one existed at all.

The Plot Thickens

The client’s existing active robot monitoring solution using the HPE Business Process Monitor (BPM) suite showed that there were sporadic difficulties in loading pages on the new platform and in logging in, but the client was unable to replicate the issue manually. If there was an issue, where exactly did it lie?

Commencing the Investigation

The client had deployed Splunk and it was ingesting logs from the application in question—but its features were not being utilised to investigate the issue.

JDS consultant Danesen Narayanen entered the fray, using Splunk to analyse the data received and quickly understand the issue the client was experiencing. He confirmed that the existing monitoring solution was reporting the problem accurately, and that the issue had not been affecting the client’s website prior to the re-platform.

Using the data collected by HPE BPM as a starting point, Danesen was able to drill down and compare what was happening with the current system on the new platform to what had been happening on the old one. He quickly made several discoveries:

1. There appeared to be some kind of server error.

Since the re-platform, there had been a spike in a particular server error. Our JDS consultant reviewed data from the previous year to see whether the error had occurred before. He noted that there had been similar issues previously, and validated them against BPM to determine that those past errors had not had a pronounced effect—the spike in server errors seemed to be a symptom, rather than a cause.

Database deadlocks were spiking.

It was apparent that the error had happened before.

2. There seemed to be an issue with user-end response time.

Next, our consultant used Splunk to look at response time by IP address over time, to see whether a particular location was being affected—was the problem at the server end, or the user end? He identified one IP address with a very high response time. What’s more, this was a public IP address, rather than one internal to the client. It seemed there was an end-user problem—but whose was the IP address that was causing BPM to report an issue?
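As a sketch of the kind of analysis involved—the field names, IP addresses, and threshold below are illustrative, not the client's actual data—grouping response times by IP and flagging outliers might look like this in Python:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (ip, response_time_seconds) records standing in for
# Splunk-indexed web-access logs; the values are illustrative only.
records = [
    ("10.0.0.5", 0.4), ("10.0.0.5", 0.6),
    ("10.0.0.7", 0.5), ("10.0.0.7", 0.3),
    ("203.0.113.9", 42.0), ("203.0.113.9", 55.0),  # an abnormal public IP
]

def abnormal_ips(records, threshold=5.0):
    """Return the IPs whose mean response time exceeds `threshold` seconds."""
    by_ip = defaultdict(list)
    for ip, rt in records:
        by_ip[ip].append(rt)
    return sorted(ip for ip, times in by_ip.items() if mean(times) > threshold)

print(abnormal_ips(records))  # → ['203.0.113.9']
```

In practice the equivalent aggregation would be done inside Splunk itself, with the raw logs never leaving the platform.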

Daily response time for all IPs (left axis), and for the abnormal IP (right axis). All times are in seconds.

Tracking Down the Mystery IP Address

At this point our consultant called in another JDS staff member to track down who owned the problematic IP address. As it turned out, the IP address belonged to the client, and was being used by a security tool running vulnerability checks on the website. After the re-platform, the tool had gone rogue: rather than running for its scheduled half hour, it continued to open new web sessions throughout the day for several days.

The Resolution

Now that the culprit had been identified, the team were quickly able to log in to the security tool and turn it off, and the problem disappeared. Performance and availability returned to expected levels, BPM was no longer reporting issues, and the client’s website was running smoothly once more. Thanks to the combination of Splunk’s power, HPE’s active monitoring tools, and JDS’ analytical and diagnostic experience, resolution was achieved in under a day.

Our Team on the Case

Technology is only one part of IT.

Shane Andrewartha

Consultant

Length of Time at JDS

Since early 2016

Skills

OMi, HP Operations Manager/Agents, SiteScope, xMatters, ITIL, Unix, Coding.

Workplace Passion

I enjoy working with monitoring systems. It is satisfying to solve technical issues that would otherwise keep people up at night.

Posted by Laura Skillen in Case Study, Financial Services, Splunk, 0 comments
The Australian Defence Reserves Support Council Awards JDS


Lt. Samuel Abdelsayed receives the ‘Employer Support Award – Medium Business’ on behalf of JDS.

JDS has again been recognised by the Australian Defence Reserves Support Council, an organisation which assists Reservists both within Australia and overseas.

This is the second time JDS has received an ‘Employer Support Award’. Staff member Lieutenant Samuel Abdelsayed nominated JDS for supporting his Reserve activities and enabling him to attend all training activities and courses.

JDS Managing Director John Bearsley says that Sam’s service in the defence forces is to be highly commended, highlighting the role of service in developing professionalism and problem-solving skills.

Thanks go to both the Defence Reserves Support Council, and to Sam for his demonstration of leadership and commitment.


Posted by Laura Skillen in News, 0 comments
JDS is now a CAUDIT Splunk Provider


Splunk Enterprise provides universities with a fast, easy and resilient way to collect, analyse and secure the streams of machine data generated by their IT systems and infrastructure.  JDS, as one of Australia’s leading Splunk experts, has a tradition of excellence in ensuring higher education institutions have solutions that maximise the performance and availability of campus-critical IT systems and infrastructure.

The CAUDIT Splunk offering provides Council of Australian University Directors of Information Technology (CAUDIT) member universities with the opportunity to buy on-premises Splunk Enterprise on a discounted, three-year basis. In acknowledgement of JDS’ expertise and dedication to client solutions, Splunk Inc. has elevated JDS to a provider of this sector-specific offering, meaning we are now better placed than ever to help the higher education sector reach its data collection and analysis goals.

What does this mean for organisations?

Not-for-profit higher education institutions that are members of CAUDIT can now use JDS to access discounted prices for on-premises deployments of Splunk Enterprise. JDS is able to leverage its expertise in Splunk and in customised solutions built on the platform, combined with its insight into the higher education sector, to ensure that organisations have the Splunk solution that meets their specific needs.

 

Secure organisational applications and data, gain visibility over service performance, and ensure your organisation has the information to inform better decision-making.  JDS and Splunk are here to help.

 

You can learn more about JDS’ custom Splunk solutions here: Active Robot Monitoring with Splunk.
Contact one of our Australia-based consultants today on 1300 780 432.

Posted by Laura Skillen in Higher Education, News, Splunk, 0 comments
What’s new in ServiceNow for 2017?


ServiceNow has released its latest version, Istanbul, which introduces a game-changing innovation for the platform—an Automated Test Framework for regression testing.

The Automated Test Framework will drastically simplify future upgrades, as customers will be able to test releases quickly and efficiently—and with a high level of confidence—before upgrading. This will allow customers to stay in sync with the latest releases from ServiceNow without fear of broken functionality adversely impacting their business processes.

What is the Automated Test Framework?

The Automated Test Framework is a module within ServiceNow that allows customers to define their own tests against any other ServiceNow module, including bespoke modules developed in-house or by third-parties such as JDS.

Tests can be reused in test suites to provide comprehensive coverage of a variety of business processes. In this example, a test has been developed to verify that critical incidents are properly identified in the upgraded version. Notice how each step contains a detailed description of what the test will do in plain English.

Be sure to always include a validation step!

Opening a form, setting values, and successfully submitting that form is not enough to ensure a test is successful. The key to successful regression testing is a point of validation—confirmation that the test has produced the expected outcome; in this case, that the priority of the incident has been set to critical.
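The same principle applies outside ServiceNow. As a hypothetical illustration in Python—the `Incident` class and its priority rule are invented for this sketch, not ServiceNow's actual logic—note how the final assertion is the validation step; without it, the test would 'pass' merely by submitting:

```python
# Hypothetical stand-in for a ServiceNow incident form; the priority
# rule below is invented for this sketch, not ServiceNow's real logic.
class Incident:
    def __init__(self):
        self.impact = None
        self.urgency = None
        self.priority = None

    def submit(self):
        # Highest impact + highest urgency => critical priority (1).
        self.priority = 1 if (self.impact == 1 and self.urgency == 1) else 3

incident = Incident()
incident.impact = 1            # step 1: set field values on the 'form'
incident.urgency = 1
incident.submit()              # step 2: submit the 'form'
assert incident.priority == 1  # step 3: the validation step
```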

When defining test steps, ServiceNow will walk you through a wizard that will show you all the possible automated steps that can be undertaken. Note that ServiceNow will automatically insert this next step after the last step, unless you specify otherwise.

The automated framework can open new or existing records and undertake any action an end-user would normally complete.

Also, note how it is possible to run automated regression tests directly against the server and not just through a regular ServiceNow form.

Be careful when defining server tests: ideally, automated regression tests should be run through a form, mimicking a transaction undertaken by an actual user. This ensures UI policies and actions are all taken into account in the test results. Tests conducted directly against the server will include business rules, but not UI scripts/actions, which may lead to inconsistent results.

Server tests are ideal for testing integration with third-party systems via web services.

Test results include screenshots so you can see precisely what occurred during your test run. Also, notice that the test has been run against a particular browser engine, and the overall time for the entire process has been captured.


The individual runtime of each step can be eyeballed from the start times, although it is possible to have this automatically calculated if need be.
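As a rough illustration of that calculation—the timestamps below are made up, and real test results may format times differently—each step's runtime is simply the gap between consecutive start times; note that the final step's runtime cannot be derived this way:

```python
from datetime import datetime

# Made-up step start times as they might appear in a test result.
starts = ["10:00:00", "10:00:03", "10:00:04", "10:00:10"]

def step_durations(start_times):
    """Each step's runtime is the next step's start minus its own start;
    the final step's runtime cannot be derived from start times alone."""
    ts = [datetime.strptime(t, "%H:%M:%S") for t in start_times]
    return [(later - earlier).total_seconds() for earlier, later in zip(ts, ts[1:])]

print(step_durations(starts))  # → [3.0, 1.0, 6.0]
```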

Please note, whether you are creating a new record or modifying an existing record, the results of your automated regression test will NOT become part of that particular table’s records. For example, if you insert a new incident, you will see screenshots of the new incident complete with a new incident number, but if you go to the incident table, it won’t be present. If you update an existing incident, the update will be visible in the test results, but the actual record in the table will not change.

When it comes to best practices, JDS recommends structuring your tests carefully to ensure your coverage is comprehensive and represents an end-to-end business transaction, taking into account the impact on workflows.

If you have any questions about Automated Test Frameworks in ServiceNow, please contact JDS.

 

Posted by Peter Cawdron in ServiceNow, Tech Tips, 0 comments
JDS Win AppDynamics Partner Award


JDS Win AppDynamics 2016 Emerging Markets Partner of the Year Award

We are delighted to announce that JDS has received the AppDynamics 2016 Emerging Markets Partner of the Year Award, presented at AppSphere 2016 in Las Vegas on 15 November. As a team, we are very proud of this award, as it is the result of a lot of hard work by many talented people at JDS.

John Bearsley, Managing Director of JDS, said: “AppDynamics has stormed the market in the past few years as one of the leading next-generation Application Performance Management tools. We have seen first-hand how it can transform the monitoring landscape for our customers, providing them with great visibility into the end-user experience and the performance and availability of their business-critical applications. AppDynamics is a vendor we are proud to support, and we look forward to helping many more customers derive its significant benefits in the years to come.”

To see how JDS can help ensure it works with AppDynamics, check out our AppDynamics page.


Posted by Joseph Banks in AppDynamics, News, 0 comments
Learning Analytics


CAUDIT Analysis Part 3

View Part 1 Here
View Part 2 Here

Since 2006, the Council of Australian University Directors of Information Technology (CAUDIT) has undertaken an annual survey of higher education institutions in Australia and New Zealand to determine the Top Ten issues affecting their usage of Information Technology (IT).

To help our customers get the most out of this report, JDS has analysed the results and is offering a series of insights into how best to address these pressing issues and associated challenges.

Issue 8: Learning Analytics


“Supporting improved student progress through establishing & utilising learning analytics.”

Every interaction from students, staff and researchers, through any of an institution’s systems, is a data point of interest. Together, these points of interest represent a wealth of information that can be used to improve services and experiences institution-wide.

Numerous studies show the importance of basing decisions on quantifiable measurements, which allow you to target, change and measure outcomes. For example, measuring student performance beyond the standard results-oriented structure can allow an institution to intervene earlier in a student’s progress. One study showed that a university was able to intervene as early as two weeks into a semester (Sclater, 2016), identifying students at high risk of dropping out and helping them adjust their programs.
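To make the idea concrete, here is a deliberately simplified sketch in Python of such an early-warning rule; the engagement fields and cut-off are assumptions for illustration, not any study's actual model:

```python
# Deliberately simplified early-warning rule; the engagement fields and
# cut-off are assumptions for illustration, not a real institutional model.
students = [
    {"id": "s1", "logins_week1": 9, "logins_week2": 8},
    {"id": "s2", "logins_week1": 1, "logins_week2": 0},
]

def at_risk(students, min_avg_logins=3):
    """Flag students whose average weekly logins fall below the cut-off."""
    return [s["id"] for s in students
            if (s["logins_week1"] + s["logins_week2"]) / 2 < min_avg_logins]

print(at_risk(students))  # → ['s2']
```

A real learning analytics platform would draw on far richer signals (submissions, attendance, LMS activity) and a validated statistical model, but the pattern—measure early, flag, intervene—is the same.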

These areas may be outside the typical ICT purview, but they are becoming increasingly ICT-driven. ICT tends to own the analytics platform, and should be responsible for ensuring it meets an institution’s requirements.

CAUDIT Ranking Trend: 2014 – | 2015 #17 | 2016 #8
 

Challenges and Considerations

Challenge: Manage increased data inflow from a vast array of sources across campuses.
Students and teachers can bring a multitude of devices onto campus, including laptops, tablets, smartphones and more. They can also access institutional facilities, applications, systems, and data whether at university, at work, or at home. These are just a few examples of institutional data points; multiplying devices across locations shows how the number of data points is increasing exponentially.

Each data point, when analysed in the right manner, holds valuable information—information that determines how an institution should respond and/or intervene to ensure it is providing the right solution to the matter at hand. To gain this value, an institution must first understand how to manage its many and varied data inputs.

JDS has the experience in building and implementing platforms that can provide a universal solution – a solution that can collect, index and analyse any machine data at massive scale, without limitation. Institutions can then search through this data in real time, correlate and analyse with the benefit of machine learning to better understand institutional trends and ultimately make better informed decisions.

Challenge: Cross-collaboration across institutional stakeholders (e.g. students, teachers, admin)
An institution such as a university has a wide variety of internal and external stakeholders, all with the need to share information with one another. This information also often originates from a large variety of sources.
An institution such as a university has a wide variety of internal and external stakeholders, all with the need to share information between one another. This information also often originates from a large variety of sources.

The challenge here is making data from students, teachers, researchers and administration staff accessible to all in a controlled, normalised manner. These stakeholders will also ask different questions of the same data, each with their own requirements. ICT does not necessarily have—or need—a deep understanding of the content of this data, but it can provide a single platform for collaboration. A common platform can address these issues, allowing cross-collaboration between stakeholders while improving an institution’s products and services.

With the help of JDS, ICT can provide such a platform, making it easy to generate dashboards, reports and alerts. JDS can enable the institution not only to ask various questions of the same data, but also to create visualisations and periodic reports to help interpret and use the information gleaned.

JDS has implemented such solutions across various Australian organisations, unleashing the powerful data collection capabilities of a variety of industry-leading monitoring tools. We also have our own proprietary solution, JDash, purpose-built to help organisations collaborate to understand service impact levels, increase visibility into IT environments, enable efficient decision-making, and provide cross-silo data access and visualisations. JDash is fully customisable to various institutional needs and requirements.

Challenge: Using analytics for early intervention with student difficulties.
In a recent article in The Australian, the attrition rate at some universities was reported to be above 20%. Numerous factors contribute to this, many of which can be mitigated through early intervention. Institutions can make better-informed decisions if they have the right data at the right time.

The CAUDIT report highlighted the need to identify student performance early. This allows an institution to intervene by either advising a student of a different direction or providing the right resources to handle their difficulties.

In order to provide this level of insight into student drop-out rates and enable early intervention, institutions need to build and manage a learning analytics platform—one that collects, measures, analyses, and reports data on the progress of learners within context.

JDS can assist institutions with building a centralised, scalable, and secure learning analytics platform. This platform can ingest the growing availability of big datasets and digital signals from students interacting with numerous services. From the centralised platform, institutions can interpret the information to make decisions on intervention and ultimately reduce attrition rates of students.

About JDS

about-jds
With extensive experience across the higher education sector, JDS assists institutions of all sizes to streamline operations, boost performance and gain competitive advantage in an environment of tight funding and increasing student expectations. With more than 10 years’ experience in optimising IT service performance and availability, JDS is the partner of choice for trusted IT solutions and services across many leading higher education institutions.


Posted by Joseph Banks in Higher Education, News, Tech Tips, 0 comments