
Integrating Splunk ITSI and Observability Cloud for Unified Insights

The Splunk Observability Cloud suite (O11y) delivers powerful real-time infrastructure and application monitoring capabilities, while Splunk IT Service Intelligence (ITSI) enables holistic and fully customisable service modelling and impact analysis. When these two technologies are integrated, they bridge the gap between infrastructure performance and the overall health of your business services.

Making Splunk Core Aware of O11y

A fundamental aspect of integrating ITSI and O11y is making observability metrics available to Splunk Core, and in turn, to Splunk ITSI and IT Essentials Work. For this you’ll need the Splunk Infrastructure Monitoring (SIM) Add-on.

This is a Splunk-built add-on, available on Splunkbase.
While the name points to the SIM portion of the O11y suite, the Splunk Infrastructure Monitoring Add-on facilitates access to all O11y metrics, including APM, RUM and Synthetic Monitoring metrics.
NOTE: Only O11y metric data can be made available to Splunk Core – not the traces and spans from which these metric results and metadata originate.

SIM Add-on Integration Options

The add-on offers two integration options:
1. Enable Splunk Core to Query O11y Metric Stores
The Splunk Infrastructure Monitoring Add-on introduces a new SPL command called “sim”, which allows you to specify a SignalFlow program for querying observability metrics in an SPL search. The SignalFlow program runs on the remote O11y instance, and the returned metrics can then be processed by the remainder of the SPL search, as in the sketch below.
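
For illustration, a minimal search of this shape (the “flow” mode and “query” argument shown here follow the add-on’s documented usage, but check your version’s documentation; the metric, dimension and returned field names are placeholders) might look like:

    | sim flow query="data('cpu.utilization').mean(by=['host']).publish()"
    | stats avg(value) AS avg_cpu BY host

Everything after the sim command is ordinary SPL processing of the returned metric results.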

2. Ingest O11y Metrics into Splunk Indexes
The add-on also contains modular inputs which can be used to index O11y metrics in Splunk Core indexes. You configure each modular input with a SignalFlow program that runs periodically to query the desired O11y metric summaries and index the results in Splunk Core.

NOTE: Ensure that the “stash” sourcetype is always used for the data collected by these modular inputs (as in their default state) so that the collected metrics will not count toward Splunk licence charges. A hypothetical configuration sketch follows.
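
As a hypothetical sketch only (the real stanza and parameter names are defined by the add-on’s inputs.conf specification and setup UI), a modular input embodying the points above might look like:

    # Illustrative only: consult the SIM add-on documentation for the
    # actual stanza and parameter names.
    [sim_modular_input://hourly_cpu_summary]
    signalflow_program = data('cpu.utilization').mean(by=['host']).publish()
    interval = 3600
    index = sim_metrics
    sourcetype = stash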

Where to Install the SIM Add-on

Depending on which integration options are required, the add-on will need to be installed on at least one of the following Splunk Core node types:

Search Heads:
Required on any Search Heads where the “sim” command will be used in SPL searches to query O11y metrics. In particular, the add-on is required on Splunk ITSI instances utilising the “sim” command in KPI searches.

Indexers:
Required on any Indexer node/cluster where target metric store indexes are created for ingesting O11y metrics via the SIM add-on modular inputs. The add-on creates an index called “sim_metrics” which should be used as the default target for O11y metrics, as it will not count toward Splunk licence charges (and remember to specify the “stash” sourcetype in the modular inputs, as noted above).

Forwarders:
Required on any Heavy Forwarder node which will be running the SIM add-on modular inputs to query O11y metrics.

Which Integration Option Is Best?

While it is not possible to give a “one size fits all” answer, consider the following:

The “sim” command is lightning-fast
This is because the metric store of O11y is lightning-fast by design: the O11y platform is capable of storing and retrieving massive volumes of highly granular data in real time, so performance is rarely a consideration when writing SPL searches using the “sim” command.

The Modular Inputs Duplicate Predetermined Metric Summaries
With the add-on’s modular inputs, you decide ahead of time which O11y metric data you’d like to summarise and index in Splunk Core, and at what intervals. While this will only be a subset of the data already stored in O11y, it is still duplication which might not be necessary in a given use case. More to the point, searching the summarised data indexed in Splunk Core lacks the flexibility of “sim” searches, which query metrics directly from O11y and can be changed on the fly without ever needing to update any modular inputs or re-ingest any data.

Querying O11y directly with the “sim” command is often the more desirable option. However, in some scenarios it may be necessary to index O11y metrics in Splunk Core, e.g. if security policies prevent certain Splunk Core users from having direct access to O11y.
TIP: Use the O11y plot editor to create and test SignalFlow programs, which can then be copied into “sim” commands in Splunk Core searches and ITSI KPIs, as in the example below.
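
For example, a SignalFlow program built and validated in the plot editor (the APM metric and dimension names below are assumptions and will differ per environment) drops straight into a KPI base search:

    | sim flow query="data('service.request.count', filter=filter('sf_environment', 'prod')).sum(by=['sf_service']).publish()"

The text inside query="…" is the SignalFlow program itself, so it can be refined in O11y and re-pasted without touching the rest of the search.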

Enriching ITSI with O11y Knowledge

The sky’s the limit when modelling systems in ITSI, and for large or complex service models you’ll want to leverage templates and pre-built components instead of re-inventing the wheel.
Content Packs are the mechanism in ITSI for bundling pre-built components, and for O11y content in particular there is the Content Pack for Splunk Observability Cloud.

The Content Pack bundles a set of valuable ITSI knowledge objects which can be leveraged for managing and visualising O11y data, including:
> Services and KPIs
> Service Templates and KPI Base Searches
> Glass Tables and a Service Analyser
> Entity Types and Entity Import Jobs

As with any ITSI content pack, many of the above components may not be directly usable for a given use case; they may instead serve as examples or initial templates for the custom content you will be creating.
At the very least, the below entity import jobs from the content pack are invaluable for effortlessly bringing all O11y-discovered objects into the ITSI entity database:
> ITSI Import Objects – Get_OS_Hosts
> ITSI Import Objects – Get_RUM_*
> ITSI Import Objects – Get_SIM_AWS_*
> ITSI Import Objects – Get_SIM_Azure_*
> ITSI Import Objects – Get_SIM_GCP_*
> ITSI Import Objects – SSM_get_entities_*
> ITSI Import Objects – Splunk-APM Application Entity Search

Whatever the situation, it is in your best interest to install the Content Pack for Splunk Observability Cloud in ITSI when integrating with the O11y suite.

Installing the O11y Content Pack

The latest O11y Content Pack requires the following two add-ons to be installed in the Splunk Core environment first:
> Splunk Infrastructure Monitoring Add-on – The Splunk-built add-on described earlier in this document
> Splunk Synthetic Monitoring Add-on – A SplunkWorks-built add-on (not formally released by Splunk)

Also, if the Content Pack for Splunk Infrastructure Monitoring was previously installed in ITSI, then there are additional migration steps to perform before installing the O11y content pack, documented in the Splunk topic:
> Migrate from the Content Pack for Splunk Infrastructure Monitoring to the Content Pack for Splunk Observability Cloud

After the above items are addressed, the method for installing the Content Pack in ITSI is the same as with any other content pack, i.e. via Configuration > Data Integrations > Content Library.
TIP: When installing the content pack, consider using the option of adding a prefix to the names of imported content such as services, service templates and KPI base searches. That way they can easily be identified as examples to copy from. This is less important for items like the entity import jobs (and prefixing those may leave you maintaining separate imports for the differently named objects).

Unified Alerting with O11y and ITSI

In an environment armed with ITSI, an ideal strategy is to consolidate alert management with ITSI as the central point for processing alerts originating from any Splunk sources, such as O11y, as well as from external systems. ITSI’s advanced analytics can be leveraged to implement intelligent alert logic, and the alert actions can interface with Splunk On-Call for escalation management.

The Content Pack for ITSI Monitoring and Alerting is required in ITSI for integrating O11y and ITSI alerting. It comes with correlation searches and aggregation policies that are utilised in the integration procedure (as noted in the High Level Implementation Plan further below).
Installing this Content Pack requires additional version-dependent actions, as well as an update to the “Itsi_kpi_attributes” lookup. Please follow the installation instructions below:
Installing and Configuring the Content Pack for ITSI Monitoring and Alerting

Universal Alerting

Splunk have defined the Universal Alerting Field Normalisation Standard in ITSI, for which pre-built correlation searches are provided in the Monitoring and Alerting Content Pack. Normalising alerts to adhere to this schema ensures that alerts from any source can be processed in a common fashion using the pre-built content.
The schema defines many fields, most of which are optional; the following four are mandatory for any alert to comply (a normalisation sketch follows the list):
> src: the target of the alert, e.g. host, device, service etc.
> signature: a string which uniquely identifies the type of alert
> vendor_severity: the original vendor-specific severity/health/status string
> severity_id: normalised severity
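
As a sketch of what such normalisation can look like in SPL (the index, sourcetype and right-hand field names are assumptions about how the raw O11y alerts were ingested; the severity mapping is illustrative and should be aligned with your ITSI severity levels):

    index=o11y_alerts sourcetype=o11y:alerts
    | eval src = coalesce(host, "unknown"),
           signature = detector . ": " . rule,
           vendor_severity = severity,
           severity_id = case(severity=="Critical", 6, severity=="Major", 5,
                              severity=="Minor", 4, severity=="Warning", 3,
                              severity=="Info", 2, true(), 1)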

High Level Implementation Plan

  1. Configure O11y to send alerts to Splunk Enterprise or Cloud Platform:
    This requires creating an alert index in Splunk Core and an HTTP Event Collector (HEC) endpoint. Then, in O11y, configure a new “Webhook” integration to send alerts to the HEC endpoint (a test example follows this list).
  2. Normalise O11y alerts to conform to the ITSI Universal Alerting schema
  3. Configure “Universal Correlation Search – o11y” to create notable events:
    This correlation search is shipped with the ITSI Monitoring and Alerting content pack
  4. Configure the “Episodes by Application/SRC o11y” notable event aggregation policy (NEAP):
    Also shipped with the ITSI Monitoring and Alerting content pack
  5. Configure ITSI correlation searches for monitoring aggregated episodes:
    The below 2 searches, also from the content pack:
    “Episode Monitoring – Set Episode to Highest Alarm Severity o11y”
    “Episode Monitoring – Trigger OnCall Incident”
  6. Integrate Splunk On-Call with ITSI:
    This requires installation of the Splunk On-Call (VictorOps) add-on in Splunk Core, and configuring it with the details of your Splunk On-Call account
  7. Configure action rules in the ITSI NEAP from step 4 for Splunk On-Call Integration
  8. Configure Splunk On-Call with appropriate escalation policies
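
Before wiring up the O11y webhook in step 1, the HEC endpoint can be verified manually. A test event along the following lines (the hostname, token, index name and payload fields are all placeholders) confirms that alerts will land in the alert index:

    curl https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"index": "o11y_alerts", "sourcetype": "o11y:alerts", "event": {"detector": "High CPU", "severity": "Critical", "host": "web-01"}}'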

Full implementation details are documented on the Splunk Lantern site: Managing the lifecycle of an alert from detection to remediation

Next Steps

Now you have the playbook to integrate the Splunk Observability Cloud suite with Splunk ITSI. 
JDS excels in delivering tailored solutions for our customers where we integrate their O11y suite with Splunk ITSI, optimising alert management and reducing Mean Time to Resolution (MTTR).
Reach out if you would like help or advice in improving your observability and troubleshooting efficiency with Splunk Observability Cloud and Splunk ITSI.


Read a recent JDS Customer Success Story here.

Anatomy of a New Relic FSO Dashboard

Full stack observability (FSO) is of paramount importance in today’s digital landscape due to the complexity of applications and infrastructure.  Modern business systems can involve distributed architectures, microservices, and containers, necessitating a holistic view to properly understand how all components interact.  The rise of cloud-native environments and DevOps practices also underscores the need for full stack observability to support resource optimisation, collaboration, and continuous delivery.

The New Relic FSO Dashboard offers a powerful and customisable platform that empowers developers and operations teams with real-time insights and comprehensive data visualisation.

So what does the New Relic FSO dashboard actually look like?

The ‘Business Overview’ dashboard displays important business metrics such as Average Order Values by Day, Purchase Volume, Revenue Trend, Sales Funnel, and Abandoned Cart Rate.

In the second tab, the IT Platform Overview dashboard provides detailed information about application and infrastructure performance and status, including availability, daily site visitors, slow pages, most-used features, and mobile device types accessing the platform. This allows for quick identification of issues and prompt corrective actions.

The third tab, the Cloud Migration Dashboard, helps track the progress of the migration journey and provides insight into how the migration is being executed. Percent of On-premises Hosts vs AWS Hosts, Average CPU Usage by AWS vs On-premises Host Location, Total Number of Hosts by Application, and Average Response Time by Host Location are displayed to show before and after application/host response times, allowing you to determine the extent of improvement in response time after migration. This dashboard enables developers and IT operations teams to monitor the progress of migration in real time.

The User Metrics Dashboard provides an overall view of user behaviour to optimise applications and enhance user experience. It includes metrics such as visitor count, app usage, page visits, average response time, page loading time, and visitor countries/cities.

The Data Analytics Dashboard helps track database performance and identify and resolve performance issues, while the Application Metrics Dashboard provides real-time information on application performance, including response time, error rate, and throughput, allowing for proactive optimisation.

Additionally, there is a dashboard that helps manage AWS costs. Many companies experience difficulties in cost management when migrating to the cloud. In the AWS Budget Overview dashboard, you can examine the total cost divided into actual costs, forecasted costs, and limits. You can also view important metrics such as Pre-production Budget, Total Cost Trends, Budget Trending, and Estimated changes per service. This allows you to track cloud costs in real time and quickly identify areas that require cost optimisation. Furthermore, it provides valuable assistance in planning future budgeting related to the cloud environment.

As a trusted Gold partner of New Relic, JDS has a team of experts who can not only implement the New Relic FSO dashboard, but also elevate your dashboard experience and unleash its full potential. From designing visually appealing layouts to fine-tuning performance metrics and alerts, JDS can maximise the value of New Relic’s powerful observability platform. Your applications and users will thank you for it!

The Power of FSO with New Relic

In the sophisticated landscape of today’s IT environments, the journey from telemetry data to business information demonstrates the complexity involved in achieving full-stack observability. Businesses in the throes of digital transformation have found the intricacy of their IT infrastructure compounding. Incorporating modern technologies such as cloud-based systems, microservices, containers, and serverless architectures, along with DevOps and Site Reliability Engineering (SRE) practices, is changing the shape of our technological future.

Observability surpasses traditional monitoring by providing a more comprehensive understanding of ‘why and how’ an issue occurred. This holistic approach integrates telemetry data—logs, events, metrics, and traces—and presents it on a unified dashboard, unveiling hidden issues and promoting a profound understanding of their root causes. In doing so, observability departs from traditional IT performance management, enabling an all-inclusive approach to managing cost, resource allocation, and customer experience, and addressing the challenges of fragmented data management.

Why is “observability” gaining momentum?

Statistics indicate a significant shift towards multi-cloud usage. This, along with the growing expectation for superior digital experiences, has put enormous strain on IT teams. The challenge lies in delivering innovative features and services at breakneck speeds, while ensuring the seamless operation of existing systems.

JDS sees a lot of over-provisioned cloud-based platforms, as was typical with on-premises configurations. Historically, the main driver for over-provisioning was to cater for spikes; however, modern cloud platforms can handle variations in load far more effectively. The trick to optimising cloud platforms is to measure and monitor usage, and New Relic is a highly efficient tool to help you get the most from your cloud investment.

Conventional IT performance management strategies have reached their limits, driving the transition towards “observability”. In New Relic’s 2022 Observability Forecast Report, 75% of the 1,614 global respondents confirmed their top-level management’s endorsement of observability technologies. Furthermore, a robust 78% consider observability to be crucial in realising significant business objectives.

One major factor driving the observability market is the availability of new standards and technologies like OpenTelemetry (OTel) and extended Berkeley Packet Filter (eBPF). Even with the increase in funding and new technology, data growth is still the main driver behind the explosion of the observability market in recent years. Data is expected to grow at a 35% CAGR through 2025.

The New Relic FSO platform offers observability solutions that harness the power of cloud and AI technologies, providing comprehensive functionalities such as telemetry data collection, analysis, visualisation, anomaly detection, and alerts. For those inclined towards open-source observability tools, options such as Prometheus, Grafana, Zabbix, and Jaeger are available.

Why is full-stack observability so important?

Observability isn’t just pivotal for developers, engineers, and IT teams; it’s also crucial for improving customer experience. Real-time monitoring and issue detection are vital for businesses to prevent downtime and ensure optimal performance, avoiding negative impacts on revenue and customer sentiment.

Data-driven decision-making becomes feasible through the vast amount of insight delivered from full stack observability. Organisations can analyse performance metrics, error rates and user behaviour patterns to make informed decisions that improve their products, services, and infrastructure. Additionally, full stack observability can play a crucial role in security monitoring and compliance adherence, with the ability to detect threats and vulnerabilities while meeting regulatory requirements.  

In conclusion, as the digital panorama continues to evolve, the concept of observability has emerged as a key tool in the arsenal of modern businesses. By bridging the technological chasm, it enables a unified, bird’s-eye view of telemetry and business data across disparate teams, thereby amplifying operational efficiency and enriching the digital user experience. 
As we tread deeper into the realm of artificial intelligence, the significance of AI-powered operations (AIOps) is becoming increasingly apparent. This surge in importance is exemplified by the proliferation of observability services now facilitating AIOps environments. Overall, full stack observability empowers businesses to deliver reliable services, remain agile, and drive competitiveness in the dynamic digital realm.

Our favourite announcements from Splunk .conf23

Following an incredible week in Vegas for Splunk .conf23, the JDS team is excited to see all the new and upcoming features for the Splunk platform including AI, Observability, Security and IoT.

Here is a recap of some of our favourite announcements from Splunk .conf23:

Splunk Enterprise 9.1

A new Splunk version was released a week prior to Splunk .conf23, which included some welcome features across the board, the main ones being:

  • Improved ingest action to AWS S3
  • New Federated Search modes
  • New features for Dashboard Studio

Searching logs directly in S3 – without having to ingest them into Splunk – is a widely anticipated feature that, according to Splunk Docs, should be generally available very soon. With customers often struggling to balance their licensing for ingestion and retention, this feature will allow customers to keep low-value or old data in S3 while still being able to search it.

Splunk AI Assistant

The newly announced AI Assistant will not only help users find data within the Splunk platform, but will also generate SPL to search and report on it. The AI Assistant app is currently in preview, but customers can sign up to download the app at https://pre-release.splunk.com/preview/aiassist

Splunk Cloud

Splunk and Microsoft have formed a strategic partnership to bring Splunk Cloud to customers that are leveraging Azure as their cloud platform of choice, supplementing Splunk’s existing offerings with AWS and GCP.    

As a result of this partnership, Splunk and Microsoft have committed to developing more “out-of-the-box” integration capabilities. In addition, customers will now be able to spend Azure credits to buy Splunk Core, Enterprise Security and ITSI in their customer-managed environments. This is expected to be rolled out globally over the next year.

Splunk AIOps

Splunk announced the release of the Splunk App for Anomaly Detection. Anomaly Detection is already included in the existing Machine Learning Toolkit (MLTK) app, but this new app has a guided wizard which will make setting up Anomaly Detection easier for users who don’t have a background in Machine Learning (ML).

The Deep Learning Toolkit has also received an update (5.1) and a rename to the “Splunk App for Data Science and Deep Learning”. It now includes a “Neural Network Designer Assistant”, once again improving the accessibility of ML to those without an ML background.

One other small ML improvement is in ITSI’s Adaptive Threshold feature. Adaptive Thresholds, which dynamically create thresholds based on historical data, can now be configured to ignore anomalies. For example, a KPI spike resulting from a recent P1 incident will be excluded from the threshold calculation, resulting in more accurate thresholds.

Security

TwinWave, which Splunk acquired in November 2022, has been integrated into the Splunk portfolio and renamed Splunk Attack Analyzer. It boasts tight integration with Splunk SOAR, so customers can automate the detonation of suspicious URLs and files in unattributable environments and subsequently feed the results back into the SOAR platform.

Enterprise Security Content Update (ESCU) 4.6 has also been released, including 6 new ML detections written by the Splunk Threat Research Team to protect against the latest threats that are being observed in the wild.

Observability 

ITSI 4.17.0 was released at the beginning of June, focusing more on improving the platform than on adding new features. These improvements include:

  • Saved Searches within content packs are disabled by default.
  • A new entity clean-up command which removes searches that are no longer creating or updating entities. 
  • New dashboards to troubleshoot entity discovery issues.
  • KPI sparklines have been updated so they no longer have the “spiky” up-and-down visual on small time ranges – this was a common complaint among ITSI customers.
  • Custom dashboards for viewing episodes – each episode can now show a custom SimpleXML or Dashboard Studio dashboard, so customers can customise what is shown inside the Episode Review page. https://docs.splunk.com/Documentation/ITSI/latest/EA/EpisodeInfo#Add_an_episode_dashboard

Another welcome announcement was the introduction of Unified Identity, which enables users to log into Splunk Observability Cloud with SSO using their Splunk Cloud Platform credentials.

Splunk Edge Hub

Splunk formally announced Edge Hub at .conf, though we’ve already heard of a few organisations trying them out. Its purpose is to combat the “data deluge” by filtering and aggregating data before it leaves the local network via the internet or an internal WAN, but it’s also capable of collecting data from various environmental sensors (temperature, noise levels, etc.) out-of-the-box. Better yet, you can see these stats directly on the built-in screen. We look forward to seeing how customers use these devices in their environments.

Splunk Edge Processor

Splunk has also added some important features to the Edge Processor product. Customers can now export their data to Splunk using the Splunk HTTP Event Collector (HEC), which is easier for customers to manage. In addition, the long-awaited SPL2 has also been added to Edge Processor, which is interesting because it’s yet to reach many other products (i.e. Splunk Core). SPL2 extends SPL with many more commands that will make it easier for customers to parse and manipulate their data in Edge Processor before it gets sent into Splunk.

It’s an exciting time for Splunk users, and JDS is pumped to be at the forefront of these latest advancements. 

One Platform. Full Stack. In Context

JDS has a proud history of working with industry-leading tools and ensuring they provide value for your business. We are excited to share that one of our major partners, Cisco, has announced their much-anticipated Full-Stack Observability (FSO) Platform at CiscoLive Las Vegas this month. We have been looking forward to the launch of the FSO platform, which will help us unlock much greater value in observability data. This will benefit our clients by allowing them to bring in a wider variety of data across the app and infrastructure stack, enriched with business context and activity data, so they can ensure their tech is optimised for maximum business performance.
https://www.cisco.com/c/en_ca/solutions/full-stack-observability.html?socialshare=lightbox-fso-video

Most of our clients are involved in some level of digital transformation – be it moving to cloud-native or SaaS stacks, simplifying customer experiences with digital apps, or streamlining business processes with smart tech. This has typically meant a lot more moving parts and every time something isn’t right, a new needle-in-the-haystack challenge is presented. Being able to observe a customer’s journey and experience, including all of the technical and business elements involved, pinpoint problems or identify high-value optimisations, is critical for operational success. 

Businesses need the ability to get fast answers to questions like “where is slowness occurring”, “how can we optimise resource usage” or “where can we improve conversion”. The Cisco FSO Platform provides a ubiquitous and context-rich data platform, with flexible query tools and packaged solutions, to ensure IT is working at its best.

An Overview of the Cisco FSO Platform

The Cisco FSO Platform was designed from the ground up to provide end-to-end visibility across complex, hybrid and multi-cloud environments. It delivers an extensible, entity-based data model that provides the flexibility to ingest any observability data with business context. By leveraging OpenTelemetry and harnessing the power of Metrics, Events, Logs, and Traces (MELT) to seamlessly collect and analyse data generated by any source, the FSO Platform is a versatile and comprehensive solution to capture observability data across an enterprise.

Right out of the gate there are features for application visibility, security insights, resource and cost optimisation, plus partner-led tools for financial visibility and capacity planning. 


Cloud Native Application Observability

One of the standout features of the Cisco FSO Platform is its Cloud Native Application Observability capability. This feature provides deep visibility into cloud-native environments, allowing organisations to monitor and troubleshoot their applications with ease. By providing insights into digital experiences, ensuring performance aligns with end-user expectations, prioritising actions, and reducing risks, it gives businesses valuable insight into the performance and behaviour of their applications, allowing customers to identify and resolve issues before they impact users.

The Verdict

The Cisco FSO Platform is an innovative solution that offers an impressive suite of features that enable businesses to enhance digital experiences, mitigate risks, and drive operational efficiency.  

The Platform represents a significant milestone in Cisco’s FSO strategy, and shows their commitment to providing a comprehensive observability solution for clients. While other observability platforms can ingest data at scale, they face challenges in understanding and building a view of services. Cisco’s approach was to build a solution with an entity model at its core, which can be tailored to overcome these limitations. This is crucial when modern applications span cloud, on-premises, microservices, SaaS, and serverless technologies, yet you still need to understand your customers’ digital journey and experience as they interact with your business.

We will be keeping a keen eye on developments and look forward to sharing our experiences as we work with our customers to rationalise their observability strategies, harnessing the unique capabilities of the FSO Platform.

Part Three: ServiceNow Hyperautomation – Process Mining

ServiceNow, a leading platform for process mining and automation, recently launched its Utah release with several new features and enhancements, including multidimensional mining.

Part Three of this blog series will explore how you can analyse and improve your IT service management (ITSM) workflows using process mining and automation.

What is Process Mining?

Process mining is a technique that applies data science to discover, validate and improve workflows based on event logs from information systems. It uses artificial intelligence and machine learning to automatically extract process models from your ServiceNow data and visualise them in an interactive dashboard. You can then use the dashboard to identify bottlenecks, inefficiencies, deviations and best practices in your ITSM processes, and take action to optimise them with automation solutions.

How can ServiceNow Process Mining help you to enhance your organisation’s incident management process?

ServiceNow Process Mining can help your organisation enhance your incident management process by providing valuable insights into how your process is actually performed, how it deviates from the expected or desired behaviour, and how it can be improved.

Process Overview shows you the key metrics and statistics of your process, such as the number of cases, the average duration, the throughput time or the compliance rate. You can also filter and segment your data by various criteria, such as category, urgency, assignment group or resolution code.

Process Map provides your organisation with a graphical representation of your process model, which is automatically generated using your data. You can see the flow of cases from start to end, the frequency and duration of each activity, and the variants and deviations of your process. You can also drill down into specific cases or activities to see more details.

Process Analysis presents you the results of various analyses that are performed on your organisation’s process model, such as bottleneck analysis, root cause analysis, conformance analysis and best practice analysis. You can use these to identify and understand the causes and effects of problems and opportunities in your process.

Using these three capabilities, you can gain a comprehensive understanding of your organisation’s incident management process and discover ways to optimise it through elimination of unnecessary steps or automation of repetitive tasks.

Final Thought

As an Elite ServiceNow Partner, JDS can help you navigate the complex and fast-changing world of hyper-automation. We have the expertise, experience and passion to support your organisation’s digital transformation journey. Whether you need to automate your business processes, optimise your IT operations, or leverage artificial intelligence and machine learning, we can guide you every step of the way.

Benefits of migrating to Atlassian Cloud now

There has never been a better time to execute a migration to Atlassian Cloud. 

Recent announcements at Atlassian’s Team23 conference included new products (Jira Product Discovery, Beacon), new features (Atlassian Intelligence (AI), Slack/Teams integration, Digital agents), and impressive improvements (Data residency location choice, Portal customization, Workflow and toolchain templates), demonstrating the vast range of opportunities and useful features/products that Atlassian’s cloud platform offers to elevate your service solutions.

Atlassian’s tools continue to grow more effective, efficient, and powerful, leading to improvement in the methods and practices they support.  Dominant technologies like ChatGPT have proven how integrating new technologies into your business can help streamline activities and improve your day-to-day operations to keep your company competitive in the market space. Taking this into consideration, organisations should understand that adopting these tools at the outset is well worth the investment of time and resources to adapt & upgrade internal tooling, methodologies and policies.

Given that cloud migrations take 3-9 months on average, it is important to commence the move off your server instance into the cloud as soon as you can. Atlassian’s Server End of Support (EOS) date arrives in February 2024, after which server customers won’t receive any future products, developments, or improvements to their site. Executing a migration now will ensure the business impact is kept minimal and change management has low complexity, while guaranteeing you are able to continue utilising everything Atlassian has to offer.

JDS recommends you carry out the migration before the gap between server and cloud becomes too vast. It will ensure your business receives upgrades and improvements that will keep it adapting to new market climates. Migrating now will also give you greater time to implement change management and adopt new and improved practices, ultimately adding value to your business operations. Migrating before the EOS date will also allow you to integrate the new system into your business while the look and feel of Atlassian Cloud products remain similar to their on-premises server offerings. This will mean that your staff and customers will easily adapt to the new environment without requiring extensive time and resource-consuming activities like upskilling and training.

Atlassian has shown that it has big plans around its product roadmap for the future. Adopting change now will minimise business impact while allowing you to keep your business at the forefront of advancement and technology.  It will also give you a headstart on establishing best practices and refining processes as Atlassian products continue to develop and evolve into the future.

Part Two: ServiceNow Hyperautomation – Proactive Automation Resilience

What is Automation Resilience?

In today’s fast-paced digital landscape, automation is becoming increasingly vital for businesses. However, as with any technology, it’s important to be prepared for unexpected disruptions and failures. This is where automation resilience comes in.

Automation resilience is the ability of an organisation’s automation solutions to withstand unexpected interruptions and recover quickly from failures. Without automation resilience, any downtime or disruptions can lead to significant financial losses and negatively impact the customer experience.

ServiceNow understands the importance of automation resilience, which is why it has developed capabilities to enable proactive monitoring and management of automation solutions. The Configuration Management Database (CMDB) is a crucial tool that allows your organisation to track automation solutions and their dependencies in the same way you track other facets of your tech stack.

With ServiceNow’s automation resilience capabilities and CMDB, businesses can operate their automation solutions with confidence, knowing that they are prepared for any unexpected disruptions. There is no need for downtime or interruptions to negatively impact your business.

Establishing Proactive Automation Resilience with ServiceNow

Disruptions are inevitable in the fast-paced, digitalised business environment, which is why it’s crucial to have the right tools in place to anticipate, prevent, respond, and adapt to them. ServiceNow’s hyperautomation technologies and capabilities enable organisations to do just that, by leveraging a multitude of technologies.

Artificial intelligence (AI) and data insights are examples of such technologies, enabling ServiceNow’s hyperautomation to assist your business with predicting issues, reducing user impact, and streamlining resolutions. It also involves automating the right processes in ServiceNow to improve efficiency, productivity, and customer satisfaction. With ServiceNow, your organisation can serve customers more efficiently, deliver products faster, and protect its workforce more effectively.

ServiceNow’s Automation Center allows your organisation to assess every aspect of relevant change management details, not just the execution details of your RPA bots. By viewing a graphic depiction of a bot’s process and infrastructure dependencies, organisations can identify potential disruptions such as maintenance windows or unexpected configuration errors.

Moreover, ServiceNow’s AI-powered root cause analysis and automated remediation actions enable your business to effectively reduce mean time to resolution (MTTR). This means that your data can be analysed from various sources, such as logs, metrics, traces, and events, and identify the source of problems in your IT operations. It can also trigger automated workflows and actions to fix issues faster and prevent them from recurring. By doing so, ServiceNow helps your organisation to improve service quality and performance, while shortening the time it takes to resolve incidents.

ServiceNow’s innovative solution empowers your business to gain a competitive advantage by providing exceptional customer experiences and satisfaction. By using insights gained from service quality data, your organisation can proactively communicate with your customers, addressing pain points before they become a problem. With the ability to map out the entire customer journey, your business can tailor your customer service strategy to meet their needs. With ServiceNow’s hyperautomation, you don’t have to wait for disruptions to occur.


Watch: Establish Proactive Automation Resilience | ServiceNow

Four Exciting Takeaways from Cisco Live 2023

2023 is off to an exciting start for JDS, as Australia’s leading Cisco Full Stack Observability partner. There were a host of new product features and announcements which came out of Cisco Live in Amsterdam last month, all focused on fast and secure customer experiences with data-driven insights and action. 

Here are four takeaways from the event that we’re particularly excited about:

New release for on-premises platform

AppDynamics have released v23.2 for the on-premises controller and platform components, bringing a raft of security and compatibility updates, and parity for APM & EUM agent functionality with the SaaS platform. Some of our clients have strict requirements around data protection, and AppDynamics on-premises allows them to gain the same deep level of observability into their apps while keeping data within their network.

Two-way integration between AppDynamics and ThousandEyes (TE) and TE Open Telemetry support

None of us want to have to scramble to multiple different tools when degradation occurs just to figure out where in the stack something is broken. It adds unnecessary time in determining the root cause and increases MTTR. 

Cisco has made significant improvements by adding support for OpenTelemetry into ThousandEyes, and an out-of-the-box integration to bring detailed network level metrics into AppDynamics dashboards. This allows ThousandEyes to be aware of the application context and show application health status from AppDynamics, alongside network tests. 

This advancement will help to understand much faster whether the network or internet is impacting your application health, and which parts of the application and business are affected when it does. 

Brand new Full Stack Observability data platform

Observing modern application stacks means taking and analysing A LOT of measurements, and the process needs to be fast and flexible so insights can be gained and actions taken without onerous number crunching. A data platform that can ingest large volumes of ubiquitous data with fast analytics is critical to this, and Cisco’s new Full Stack Observability platform delivers on these requirements.

Highlights of the new platform, slated for release in April 2023, are: 

  • Much more than just a data lake. Correlate data with business and IT context to understand impact and importance.
  • Bring in any MELT data via Open Telemetry. If you can describe it with OTLP, it can be ingested. 
  • Make use of AppDynamics Cloud, built on the FSO platform, to visualise and analyse data, use third party UIs like Grafana, or build your own for your use cases.

Secure the stack with new integrations and intelligence in Cisco Secure App

Secure App leverages the AppDynamics agents to identify vulnerabilities, detect attacks and block them in real time – cutting off new threats immediately, and giving you valuable time to rectify your code. 

This capability has now been expanded with new integrations and intelligence within a range of Cisco Security tools. 

  • Kenna & Talos – leveraging a vast feed of threat and exploit data feeds and known vulnerabilities to assess your risk levels in the context of your app and business
  • Panoptica – providing assessment of 3rd party APIs to understand your exposure to your downstream dependencies such as SaaS services and partner apps. 
  • AppDynamics Business Transaction awareness – understand the business context of threats and which aspects of your applications are at risk.

Atlassian Cloud Migration: The Clock Is Ticking

End of Server Sales and Support

In late 2020, Atlassian announced their end-of-life roadmap for Server edition sales and support in a push towards migrating to Atlassian Cloud (see below). Three years on, we find ourselves hitting the third milestone of this plan, and no longer have the ability to purchase new apps for Atlassian Server. The final end-of-support date of February 2024 is now fast approaching.

Image by Atlassian: Atlassian Server end of life (sale/support) information

It has never been more important for users of Atlassian Server to start planning their move to Atlassian Cloud as these migrations can prove to be more complex and involved than originally anticipated.

JDS, a Gold Atlassian Solution Partner, has assisted a multitude of customers with their move to Atlassian Cloud, including migrations with large-scale data, multiple instances, alternative approaches (where the traditional JCMA tool was not viable), and products beyond Jira and Confluence (such as Bitbucket and Zephyr).

Why Cloud?

Some users may have reservations about making the move to a new platform, especially those who have not experienced what cloud has to offer; however, there are a number of benefits to be acknowledged. Moving to cloud means that Atlassian takes on the hosting and management responsibilities, resulting in a reduction in overhead costs relating to Atlassian products. Atlassian Cloud also offers a wide range of new and extended functionality built into its out-of-the-box solutions, and there is an expansive and valuable marketplace of add-ons.

What to do next…

With this imminent end-of-support date, and the knowledge that migrating to cloud may take upwards of 6 months, it is no surprise that Atlassian and Atlassian partners are recommending cloud migrations for all Atlassian on-premises products be carried out as soon as possible, especially while Atlassian is offering reduced pricing and extended cloud trials.

Atlassian supplies you with the JCMA tooling which can be implemented by your internal IT team, however there are often pitfalls and complications that can arise during migration that can be difficult to navigate without in-depth knowledge.

In addition to this, given the current climate of data security, it is extremely important that all necessary steps are taken to minimise the risk of data loss and exposure of cyber vulnerabilities. Therefore, it’s highly recommended that an Atlassian Solutions partner is engaged in the migration project to ensure a seamless implementation and successful outcome.

If you haven’t yet moved to Atlassian Cloud, JDS is here to assist as a local Atlassian Gold Solutions Partner with invaluable depth of expertise and experience.

Part One: ServiceNow Hyperautomation – Process Optimisation

Improvement Initiatives

As businesses grow, IT processes become increasingly complex and difficult to manage. This is where Process Optimisation comes in, providing a way to evaluate and enhance the performance of your IT processes using data from your systems. In other words, it’s a method of making sure that your IT processes are running as smoothly and efficiently as possible.

One of the key components of Process Optimisation is ‘Improvement Initiatives’, also known as ‘Continuous Improvement’. This approach involves using data-driven insights and best practices to identify opportunities for enhancing the effectiveness and efficiency of your IT service management processes. By continually refining and improving these processes, you can ensure that your organisation is operating at peak performance, and delivering the best possible outcomes for your customers.

Whether you’re looking to get started quickly with API-driven processes or planning for the future with Robotic Process Automation (RPA), the ServiceNow platform can help you achieve your goals. If you’re already using ServiceNow, the leading platform for digital workflows, Process Optimisation and Improvement Initiatives are built right in.
Read on to learn more about ‘Improvement Initiatives’ and explore RPA and API in more detail.

Understanding APIs

In the world of software development, an Application Programming Interface (API) is an essential building block that enables different applications to communicate with each other. APIs are like the ‘bridges’ that connect different software components, allowing them to share data and functionality seamlessly. Without APIs, software applications would have to be built from scratch every time, making the development process much more time-consuming and resource-intensive.

An API typically consists of a set of rules and protocols that govern how software applications should interact with each other. These rules and protocols define the methods that can be used to retrieve, update, or delete data, as well as the format of the data that is exchanged. APIs enable developers to create complex systems that can be easily integrated with other applications.

APIs are used in a variety of settings, from web and mobile applications to enterprise software systems. Many popular web services, such as Google Maps, Twitter, and Facebook, provide APIs that developers can use to access their data and functionality. In addition, many enterprise software systems, such as customer relationship management (CRM) software and enterprise resource planning (ERP) systems, offer APIs that enable developers to integrate their applications with these systems.

APIs are critical components of modern software development, enabling applications to communicate and share data with one another. By providing a standardised way to interact with software systems, APIs simplify the development process and make it easier to create powerful, integrated software solutions.

Understanding RPA

Robotic Process Automation (RPA) is a revolutionary technology that promises to transform the way we work. Essentially, RPA involves using software robots (or bots) to automate repetitive, rule-based tasks that are typically performed by humans. This means that employees can be freed up to focus on higher-value tasks that require creativity, critical thinking, and problem-solving skills.

RPA tools are designed to mimic the actions of a human worker. For example, they can log into applications, copy and paste data between systems, and enter information into forms. By automating these tasks, RPA can help organisations to improve efficiency, reduce costs, and increase accuracy.

One of the key benefits of RPA is that it can be used to automate a wide range of processes across different departments and industries. For example, RPA can be used to automate invoice processing in finance, customer service inquiries in retail, and claims processing in insurance.

RPA is an exciting technology that has the potential to revolutionise the way we work. As businesses look to stay competitive in an increasingly digital world, RPA is set to play a key role in driving productivity, reducing costs, and improving customer satisfaction.

API vs RPA: Understanding the Difference

In today’s world of increasing digitalisation, businesses are always looking for ways to improve efficiency and reduce costs. Two technologies that are often mentioned in this context are RPA and API. Both can help businesses automate processes, but they have different approaches and capabilities. It is a common misconception in the IT industry that RPA is only used when an API is not available; in fact, RPA and API each have their own strengths and limitations.

The key difference between RPA and API is their approach to automation. RPA is focused on automating specific tasks or processes, while API is focused on enabling different systems to work together. RPA is typically used to automate repetitive and manual tasks that are prone to errors, while API is used to integrate systems and data sources and enable real-time communication and data exchange.

RPA and API are two very different technologies that can help businesses automate processes and improve efficiency. Depending on the needs of your business, one or both of these technologies may be useful in achieving your automation goals.

Unify your Hyperautomation Landscape with ServiceNow (Automation Center)

ServiceNow Automation Center is a cutting-edge platform that offers a centralised solution for managing and executing Hyperautomation strategies. By utilising powerful features such as workplace, dashboard, executive dashboard and RPA vendor integration, businesses can streamline their automation landscape, making it easier than ever before to implement a comprehensive automation strategy that can improve efficiency and reduce costs.

One of the key benefits of ServiceNow Automation Center is the ability to integrate disparate automation solutions across different third-party vendors. This can help to maximise the business impact of automation initiatives, as well as consolidate automation opportunities across the entire enterprise. With ServiceNow Automation Center, businesses can manage the entire automation lifecycle from intake through to execution, providing a holistic view of automation activity across the organisation.

In addition, ServiceNow Automation Center provides a powerful visualisation tool, allowing businesses to view benchmarks for automation business goals and activity in one centralised location. This feature makes it easier to track the progress of automation initiatives and make data-driven decisions to optimise their impact.

ServiceNow Automation Center also provides a comprehensive solution for monitoring and managing robotic process automation (RPA) jobs in CMDB. This means that businesses can keep automation activities active, with full visibility of their status and performance.

ServiceNow Automation Center is a powerful platform that enables businesses to achieve their automation goals through centralised management, seamless integration, and comprehensive automation monitoring and reporting.

Stay tuned for Part Two: Proactive Automation Resilience


Watch: Unify Your Hyperautomation Landscape | ServiceNow

Introduction to ServiceNow Hyperautomation

Imagine this…

You’re at work, and suddenly an application you’re using stops working. You’re in a rush and can’t afford to spend hours on the phone with technical support. But what if there was a way to get your problem solved quickly and efficiently, without ever having to speak to a human being?

The ServiceNow Virtual Agent is an AI-driven conversational chatbot that is equipped to efficiently tackle your IT issues. With just a few clicks, you can explain your problem and get instant assistance. Using a combination of ServiceNow Automation Center and Flow Designer, the ServiceNow Virtual Agent will quickly identify the submitted query and proactively initiate a conversation for resolution, without the need for human intervention.

In our initial scenario, the ServiceNow Virtual Agent would have promptly diagnosed the problem and triggered the ServiceNow Automation Center bots to restart the application service through the use of Robotic Process Automation. Simultaneously, a ServiceNow incident ticket is created through the ServiceNow Flow Designer, allowing your IT department to track the issue and ensure it doesn’t happen again. All of this happens in a matter of minutes, ensuring that you can get back to work without any further interruptions, and with minimal human interference.

This is a practical example of Hyperautomation in action.

Automation vs Hyperautomation

Automation has become a buzzword in the world of technology and business, but what exactly does it mean? Simply put, automation refers to the use of technology to perform tasks or processes that would typically be performed manually by humans. This can include everything from using software bots to handle repetitive tasks, to using machine learning algorithms to make decisions based on data.

However, automation is no longer limited to simple, repetitive tasks. With the emergence of advanced technologies such as RPA (Robotic Process Automation), AI (Artificial Intelligence), and ML (Machine Learning), we have entered a new era of automation known as Hyperautomation. Hyperautomation involves using these advanced technologies to automate more complex and sophisticated tasks, such as decision-making, data analysis, and even creative work.

The goal of Hyperautomation is to create a fully automated end-to-end workflow that can deliver business value by improving efficiency, reducing errors, and increasing productivity. By leveraging the power of automation, organisations can streamline their operations, reduce costs, and free up their employees to focus on more strategic tasks.

ServiceNow Hyperautomation

ServiceNow offers a platform that combines several automation technologies, including Robotic Process Automation (RPA) Hub, Process Automation Designer (PAD), Automation Center (AC) and Integration Hub (IH), enabling organisations to automate complex, end-to-end processes.

With ServiceNow Hyperautomation, organisations can improve their overall processes, efficiency and productivity. This is achieved through the use of workflows, bots, and other automation tools that can automate everything from simple, repetitive tasks to more complex decision-making and data analysis processes. ServiceNow Hyperautomation has become an increasingly popular solution for businesses looking to stay competitive in today’s fast-paced, digital world.

Up Next…ServiceNow Hyperautomation: Part One will look at Process Optimisation, API vs RPA

Read Next…Part One: Process Optimisation

Watch: Hyperautomation and Low-Code | Knowledge 2022