Monitoring Atlassian Suite with AppDynamics

Millions of IT professionals use JIRA, Confluence, and Bitbucket daily as the backbone of their software lifecycle. These tools are critical to getting anything done in thousands of organisations. If you’re reading this, it’s safe to guess that’s the case for your organisation too!

Application monitoring is crucial when your business relies on good application performance. Just knowing that your application is running isn’t enough; you need assurance that it’s performing optimally. This is what a JDS client that runs in-house instances of JIRA, Confluence, and Bitbucket recently found out.

This client, a major Australian bank, started to notice slowness with JIRA, but the standard infrastructure monitoring they were using was not providing enough insight to allow them to determine the root cause.

JDS was able to instrument their Atlassian products with AppDynamics APM agents to gain insights into the performance of the applications. After deployment of the Java Agents to the applications, AppDynamics automatically populated the topology map below, known as a Flow Map. This Flow Map shows the interactions for each application, accompanied by overall application and Business Transaction health, and metrics like load, response time, and errors.

After some investigation, we found that the root cause of the JIRA slowness was a set of Memcached backends. Once we had determined the root cause and resolved the issue, operational dashboards were created to help the Operations team monitor the Atlassian application suite. Below is a screenshot of a subsection of the dashboard showing Database Response Times, Cache Response Times, and Garbage Collection information.

An overview dashboard was also created to assist with monitoring across the suite. The Dashboard has been split out to show Slow, Very Slow, and Error percentages along with Average Response Times and Call Volumes for each application. Drilldowns were also added to take the user directly to the respective application Flow Map. Using these dashboards, they can, at a glance, check the overall application health for the Atlassian products. This has helped them improve the quality of service and user experience.

The bank’s JIRA users now suffer from far fewer slowdowns, particularly during morning peaks when many hurried story updates are taking place in time for stand-ups! The DevOps team is also able to get a heads-up from AppDynamics when slowness starts to occur, rather than when performance has fallen off a cliff.

So if you’re looking for more effective ways to monitor your Atlassian products, give our AppDynamics team a call. We can develop and implement a customised solution for your business to help ensure your applications run smoothly and at peak performance.

Our team on the case

Death to pie charts.

Hamish Goodwin

Consultant, AppDynamics Team Lead

Length of Time at JDS

4 years, 10 months, 12 days


Figuring out and resolving complicated technical problems. Showing people how to get more out of their technology. Teaching people about the way things really work.

Workplace Solutions

I help our clients to understand and improve the way their customers are really experiencing their apps.


Whether you think you can, or think you can't—you're right.

Bradley Burge


Length of Time at JDS

Since June 2017


Software Development (HTML, CSS, Javascript, Java, SQL)


Workplace Passion

Learning new skills while tackling technical problems.

Our AppDynamics stories

Posted by Amy Clarke in AppDynamics, Tech Tips
Five ways your business may benefit from expert IT support

If you already have a stellar in-house technical team, why should you look for expert IT support? There are many advantages to seeking external advice and assistance when it comes to your IT infrastructure and management. Here are five ways your business may benefit from expert IT support:

1. Bring a niche capability to your existing IT team

Perhaps you need a specialist capability for only a few months, or a couple of days a week. Whether it’s for a short-term period or on a part-time basis, bringing a niche capability or skillset into your team can have immediate and lasting benefits for your organisation. Engaging expert IT support allows you to add new skills and knowledge to your team without needing to go through an extensive recruitment process.

2. Ease the burden of IT maintenance so you can focus on business services

Meeting the best practice standards for IT management and system monitoring is a time-consuming, exhaustive process. On a given day, the IT team within your business may be tasked with:

  • Product Support
  • Licensing
  • Onboarding/Offboarding
  • Product upgrades
  • EOL Support
  • Small Enhancements
  • Product administration

With the assistance of external support, these tasks can be taken care of, allowing your internal team to have more time to be innovative and develop new solutions for your business.

3. Get a neutral perspective on your IT

Corporate IT teams are constantly under the pump, meeting the day-to-day needs of their business while also working on new projects to enhance their services. At times, it can be difficult to see the forest for the trees, and getting a neutral, external perspective from an expert can make all the difference. A fresh pair of eyes on your IT infrastructure and processes can bring new ideas to the table that will improve efficiency, cut down on unnecessary processes, and nurture creativity within your team.

4. Increase the visibility of your IT services

One of the most critical benefits expert IT support can provide is a significant increase in visibility of all your systems and applications. By increasing the executive’s visibility into all your IT services, you can immediately identify when and where things are going wrong, and fix them quickly before they impact your users. You also have access to highly available and capable support consultants to assist with any issues, should they arise.

5. Enhance your service capability

Ultimately, whether you hire long-term external support or temporary managed services, the benefits of bringing a fresh perspective, new skills, and better visibility into your team have a lasting impact. By using the resources and unique expertise of external support, you can enhance your business’s service capability and provide a better experience for your consumers.

JDS provides expert IT support that achieves all of these goals. Based in Melbourne, with a fully local Australian staff, our support team provides ongoing support for several major businesses throughout the country, including long-term on-site support. Our consultants resolve most support calls without needing to escalate to a vendor, but they are also able to raise tickets with any of our premier partners on your behalf and ensure our customers are given top priority.

We provide several tiers of support and can customise based on the needs of your business. Get in touch with us today using the contact form below, or call us on 1300 780 432 to discuss how our expert IT support team can benefit you.

Contact Form


Contact Details

Phone (Australia): 1300 780 432

Phone (International): +61 (0)3 9639 3288

ABN: 88 471 933 631

GPO Box 4777
Melbourne VIC 3001

Our team on the case

Commas save lives.

Amy Clarke

Marketing Communications Manager

Length of Time at JDS

Since July 2017


Writing, communications, marketing, design, developing and maintaining a brand, social media, sales.

Workplace Solutions

Words matter, so make sure you get them right!

Workplace Passion

Helping a company develop its voice and present that to their clients with pride.

Recent success stories

Posted by Amy Clarke in AppDynamics, Tech Tips
Why businesses should do real-user and synthetic monitoring

In successful contemporary businesses, both synthetic and real-user monitoring play key roles in providing a more comprehensive and detailed understanding of user behaviour.

Synthetic monitoring simulates business transactions against production applications at set intervals, providing consistent, predictable measurements used to understand application performance trends and baselines. Real-user monitoring, on the other hand, measures performance and availability when real users are accessing the application. Both real-user and synthetic monitoring are critical aspects of IT to ensure a high-quality customer experience.

What is real-user monitoring?

Real-user monitoring (RUM) is the process of recording the interactions of users with a business, usually through a website or mobile application. RUM allows you to measure the true experience of all of your users, at the times and places where they are using the business processes and applications. You can then gauge the business impact of performance issues and outages, and isolate user trends in detail. For mobile applications, you can also capture the real native mobile user experience, including operating system-specific data related to end-user actions.

For example, you could use the data collected from RUM to identify how users access your website or application. If more users are using tablets or mobiles instead of desktops, this may factor into long-term business strategy when it comes to how your monitored system is designed and what functionality you might want to make available in the future.

What is synthetic monitoring?

Also known as active robot monitoring, synthetic monitoring overcomes the need for real users to report errors or interruption in services. It does so by employing scripts that emulate the steps taken by real users engaged with your business services, automatically and according to your preferred schedule.

Immediate reporting means you have instantaneous visibility over what systems are causing issues and where. Synthetic monitoring provides you with consistent and predictable measurement of real-time performance and availability, without having to rely on feedback from users.
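At its core, a synthetic check is just a scripted transaction run on a schedule, timed, and classified. The sketch below illustrates that idea in Python; the callable standing in for the scripted user journey and the SLA threshold are assumptions for illustration, not part of any particular monitoring product.

```python
import time

def run_synthetic_check(scripted_steps, sla_ms=2000):
    """Run one scripted transaction and classify the result against an SLA.

    `scripted_steps` is any callable performing the emulated user journey
    (a real tool would drive a browser or an API); `sla_ms` is an
    illustrative threshold. A scheduler would call this at set intervals.
    """
    start = time.monotonic()
    try:
        scripted_steps()
        ok = True
    except Exception:
        ok = False          # a failed step means the service is unavailable
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"available": ok,
            "elapsed_ms": elapsed_ms,
            "within_sla": ok and elapsed_ms <= sla_ms}
```

Because each run uses the same steps on the same schedule, the resulting measurements are consistent and comparable over time, which is what makes them suitable for baselines and SLA reporting.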

How do they work together?

Synthetic monitoring and real-user monitoring have different benefits and limitations, but they are highly complementary when used together.

While real-user monitoring gives you a broad understanding of how your users behave when using your website or application, synthetic monitoring allows you to test variables ahead of time to ensure the functionality works before it impacts your user experience.

With real-user monitoring, you can identify trends and potential weak areas of your website or application. By implementing synthetic monitoring, you can pinpoint and respond to errors quickly, lowering the instances of negative user experience.

A great example of when your business could benefit from both synthetic and real-user monitoring is when creating a report for your advisory board or key stakeholders. Using synthetic monitoring, you can create benchmarks and generate reports on your performance against SLAs. Then, with the data from real-user monitoring at your fingertips, you can show trends in your business and how you plan to respond to them.

How can JDS help?

With more than a decade of experience in IT consulting and solutions, JDS is highly experienced in all facets of real-user and synthetic monitoring. We have developed a unique active robot monitoring capability within Splunk, which you can read more about here. For more information on RUM, check out our real-user monitoring webpage.

If you are looking to enhance your business’s monitoring capabilities and improve your customer experience, JDS can develop a custom solution to suit your unique needs. Contact us today to find out more.


Tech tips from JDS

Posted by Amy Clarke in AppDynamics, Tech Tips
The Splunk Gardener

The Splunk wizards at JDS are a talented bunch, dedicated to finding solutions—including in unexpected places. So when Sydney-based consultant Michael Clayfield suffered the tragedy of some dead plants in his garden, he did what our team do best: ensure it works (or ‘lives’, in this case). Using Splunk’s flexible yet powerful capabilities, he implemented monitoring, automation, and custom reporting on his herb garden, to ensure that tragedy didn’t strike twice.

My herb garden consists of three roughly 30cm x 40cm pots, each containing a single plant—rosemary, basil, and chilli. The garden is located outside our upstairs window and receives mostly full sunlight. While that’s good for the plants, it makes it harder to keep them properly watered, particularly during the summer months. After losing my basil and chilli bush over Christmas break, I decided to automate the watering of my three pots, to minimise the chance of losing any more plants. So I went away and designed an auto-watering setup, using soil moisture sensors, relays, pumps, and an Arduino—an open-source electronic platform—to tie it all together.

Testing the setup by transferring water from one bottle to another.

I placed soil moisture sensors in the basil and the chilli pots—given how hardy the rosemary was, I figured I could just hook it up to be watered whenever the basil in the pot next to it was watered. I connected the pumps to the relays, and rigged up some hosing to connect the pumps with their water source (a 10L container) and the pots. When the moisture level of a pot got below a certain level, the Arduino would turn on the corresponding pump and water it for a few seconds. This setup worked well—the plants were still alive—except that I had no visibility over what was going on. All I could see was that the water level in the tank was decreasing. It was essential that the tank always had water in it, otherwise I'd ruin my pumps by pumping air.
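One pass of that watering logic can be sketched as follows. The real system runs C on an Arduino; this is a Python illustration, and the callables standing in for the sensor and relay I/O, the threshold, and the pump duration are all assumed values.

```python
import time

def check_and_water(read_moisture, pump_on, pump_off,
                    dry_threshold=410, pump_seconds=3):
    """One pass of the watering check for a single pot.

    `read_moisture`, `pump_on` and `pump_off` stand in for the Arduino's
    sensor and relay I/O. For these sensors, higher readings mean drier
    soil; both default values here are illustrative. Returns True if the
    pot was watered on this pass.
    """
    if read_moisture() <= dry_threshold:
        return False              # soil is moist enough, do nothing
    pump_on()
    time.sleep(pump_seconds)      # "water it for a few seconds"
    pump_off()
    return True
```

A loop calling this for each pot, with a delay between passes, is essentially all the controller has to do.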

To address this problem, I added a float switch to the tank so the system could stop pumping if I forgot to fill up the tank, and avoid pumping air. Using a WiFi adapter, I connected the Arduino to my home WiFi. Now that the Arduino was connected to the internet, I figured I should send the data into Splunk. That way I'd be able to set up an alert notifying me when the tank’s water level was low. I'd also be able to track each plant’s moisture levels.

The setup deployed: the water tank is on the left; the yellow cables coming from the tank are for the float switch; and the plastic container houses the pumps and the Arduino, with the red/blue/black wires going to the sensors planted in the soil of the middle (basil) and right (chilli) pots. Power is supplied via the two black cables, which venture back inside the house to a phone charger.

Using the Arduino’s WiFi library, it’s easy to send data to a TCP port. This means that all I needed to do to start collecting data in Splunk was to set up a TCP data input. Pretty quickly I had sensor data from both my chilli and basil plants, along with the tank’s water status. Given how simple it was, I decided to add a few other sensors to the Arduino: temperature, humidity, and light level. With all this information nicely ingested into Splunk, I went about creating a dashboard to display the health of my now over-engineered garden.
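Feeding a Splunk TCP data input needs nothing more than one line of text per event. A minimal Python sketch of that path is below; the host, port, and field names are assumptions to be matched to your own TCP data input, and key=value formatting is used because Splunk's search-time field discovery extracts such pairs automatically.

```python
import socket
import time

def format_event(sensor, value):
    # key=value pairs with a timestamp; the field names are illustrative
    return "time=%d sensor=%s value=%s" % (int(time.time()), sensor, value)

def send_to_splunk(line, host="splunk.example.com", port=5140):
    """Ship one event line to a Splunk TCP data input (host/port assumed)."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall((line + "\n").encode("utf-8"))
```

For example, `send_to_splunk(format_event("basil", 402))` would land one moisture reading in the index backing the TCP input.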

The overview dashboard for my garden. The top left and centre show current temperature and humidity, including trend, while the top right shows the current light reading. The bottom left and centre show current moisture reading and the last time each plant was watered. The final panel in the bottom right gives the status of the tank's water level.

With this data coming in, I was able to easily understand what was going on with my plants:

  1. I can easily see the effect watering has on my plants, via the moisture levels (lower numbers = more moisture). I generally aim to maintain the moisture level between 300 and 410. Over 410 and the soil starts getting quite dry, while putting the moisture probe in a glass of water reads 220—so it’s probably best to keep it well above that.
  2. My basil was much thirstier than my chilli bush, requiring about 50–75% more water.
  3. It can get quite hot in the sun on our windowsill. One fortnight in February recorded nine 37+ degree days, with the temperature hitting 47 degrees twice during that period.
  4. During the height of summer, the tank typically holds 7–10 days’ worth of water.

Having this data in Splunk also alerts me when the system isn't working properly. On one occasion in February, I noticed that my dashboard was consistently displaying that the basil pot had been watered within the last 15 minutes. After a few minutes looking at the data, I was able to figure out what was going on.

Using the above graph from my garden’s Splunk dashboard, I could see that my setup had correctly identified that the basil pot needed to be watered and had watered it—but I wasn't seeing the expected change in the basil’s moisture level. So the next time the system checked the moisture level, it saw that the plant needed to be watered, watered it again, and the cycle continued. When I physically checked the system, I could see that the Arduino was correctly setting the relay and turning the pump on, but no water was flowing. After further investigation, I discovered that the pump had died. Once I had replaced the faulty pump, everything returned to normal.

Since my initial design, I have upgraded the system a few times. It now joins a number of other Arduinos I have around the house, sending data via cheap radio transmitters to a central Arduino that then forwards the data on to Splunk. Aside from the pump dying, the garden system has been functioning well for the past six months, providing me with data that I will use to continue making the system a bit smarter about how and when it waters my plants.

I've also 3D printed a nice case in UV-resistant plastic, so my gardening system no longer has to live in an old lunchbox.

Our team on the case

Just Splunk it.

Michael Clayfield

Splunk Consultant

Length of Time at JDS

2.5 years


Splunk, HP BSM, 3D Printing.

Workplace Passion

Improving monitoring visibility and saving people’s time.

Posted by JDS Admin in Blog, Case Study, Monitor, Splunk
Improving the management and efficiency of QTP execution

(co-authored with Huan Nguyen)

If you are looking to run your regression suite faster, this article shows you some tips and tools you can use to gain better control of and visibility into your HP QuickTest Pro (QTP) test execution. Unfortunately, HP Quality Center (QC) has several limitations that do not address some of the challenges we faced as automation testers. This Tech Tip will show you how to work around these limitations.

QC has a feature that allows automated QTP test scripts to be run on multiple hosts concurrently, similar to the illustration below. The QC and QTP versions we will be discussing are based on version 10.

QC does a good job of instructing QTP to execute specific scripts on specific workstations. However, by default it does not address the issues below:

  1. How can I run a test using only a subset of the data?
  2. How do I know which data set is being used by each test on each workstation at the moment?
  3. How can I quickly distribute a single data set to run on various hosts?

The above-mentioned issues are most apparent when you are doing data creation with QC and QTP, but they are also applicable to regression testing.

1. Selecting unique test data by specifying test row

To address the data selection issue, we need a framework that allows you to choose the test data that you want to use. In our implementation, we first added a new user field for test instances in QC, called TestRow. Additionally, we customised the workflow to pass the TestRow data to QTP’s test input parameter TestArgs. Refer to the QC Admin Guide on how to pass data from QC to QTP by customising the workflow.

Once this has been set up, it is as simple as using the code below to extract the TestRow data in QTP.

Dim strTestArgs
strTestArgs = TestArgs("TestRow")

In our setup, TestRow accepts the following values:

  • 120 – run row 120 only.
  • 1;3;5 – run rows 1, 3 and 5.
  • 1-5; – run rows 1 to 5.
  • 1;3;6-10 – run rows 1, 3 and 6 to 10.

By allowing users to specify which row(s) to run, they can flexibly choose the unique data that they have prepared. Each row represents a record.
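The TestRow values above follow a simple grammar that the framework must expand into actual row numbers before iterating. The expansion is sketched here in Python for readability; a QTP framework would implement the same logic in VBScript.

```python
def expand_test_row(spec):
    """Expand a TestRow value such as "1;3;6-10" into a sorted row list."""
    rows = set()
    for part in spec.split(";"):
        part = part.strip()
        if not part:
            continue              # tolerate a trailing ';' as in "1-5;"
        if "-" in part:
            lo, hi = part.split("-", 1)
            rows.update(range(int(lo), int(hi) + 1))
        else:
            rows.add(int(part))   # single row, e.g. "120"
    return sorted(rows)
```

Using a set means overlapping specs such as "1-5;3" still yield each row exactly once.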

2. Monitoring the test runs on individual hosts

Now that we have automated scripts running with different test rows on various hosts, it can be time-consuming to log onto all the hosts to verify that the automated scripts are running as they should. Additionally, application and system resource constraints may prevent us from logging in to each system or application to view the test run status. For example, it is usually not possible to RDP to the same machine concurrently, and some enterprise software may restrict users to a single session. With limited resources, you might also be borrowing machines from colleagues who want to keep their credentials confidential.

We can save time and reduce our reliance on others to check the status by implementing a test run monitoring tool. Information that may be useful to display includes:

  • Host name
  • User name
  • Current data row
  • Last updated time
  • Percentage to completion

To track the test run, this tool reads from a file periodically to grab the test run status. This file is updated by the automated scripts via the test framework. Below is a screenshot of what we have implemented.
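The monitoring tool itself can be very small. Below is a Python sketch of the status-file parsing step; the CSV layout and field names are assumptions for illustration, to be matched to whatever format your framework actually writes.

```python
import csv
import io

def read_run_status(status_text):
    """Summarise one status snapshot written by the test framework.

    Assumes one CSV record per host with the columns shown in the test
    below; percentage complete is derived from the row range and the
    current row.
    """
    statuses = []
    for rec in csv.DictReader(io.StringIO(status_text)):
        first, last = int(rec["first_row"]), int(rec["last_row"])
        current = int(rec["current_row"])
        done = current - first + 1
        total = last - first + 1
        statuses.append({
            "host": rec["host"],
            "user": rec["user"],
            "current_row": current,
            "last_updated": rec["last_updated"],
            "pct_complete": round(100.0 * done / total, 1),
        })
    return statuses
```

Polling this file every minute or so gives the at-a-glance view in the screenshot without logging on to any host.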

With better visibility into how the tests are performing, we can quickly step in to fix any issues when tests fail, or quickly kick off new tests when resources become available.

3. Test data distribution

What if you want to distribute the test data over several hosts to expedite test execution? In line with what we have discussed about TestRow, we can divide the data sets across different hosts. For example:

It may be easy to configure each host to run an evenly distributed load when the load is divisible by the number of hosts. However, if you have, for example, 678 rows of test data to run, and the test rows start from 17, it can become a bit cumbersome to divide the load evenly. Hence, we wrote an Excel macro to aid us in distributing the test data. There are two parts to this macro: the first generates data ranges for each host; the second re-executes failed iterations (very useful for data generation tasks).
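Generating the per-host data ranges amounts to splitting an inclusive row range into near-even contiguous chunks. The macro does this in Excel; the arithmetic itself can be sketched as:

```python
def split_rows(first, last, hosts):
    """Split rows first..last (inclusive) into `hosts` near-even ranges.

    Any remainder is spread one extra row at a time across the leading
    hosts, so range sizes differ by at most one row.
    """
    total = last - first + 1
    base, extra = divmod(total, hosts)
    ranges, start = [], first
    for i in range(hosts):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For test rows 17 to 678 on 8 hosts, this yields six ranges of 83 rows followed by two of 82, covering every row exactly once.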

Configure new test ranges

Observe the numbers we entered in the green cells below. We divided the load evenly for test rows 17 to 678 across 8 hosts. Clicking the “New Range” button generates the configuration for us.

Configure to re-run failed iterations

On top of generating the test run configurations, the macro can also go through test run results to pick up all the failed runs. This allows us to re-run all the failed iterations. First, a snippet of the results:

In this example, we had 3630 Purchase Orders (POs) that we wanted to create, testing the functionality of PO creation. Of these runs, 220 failed. Picking up every single failed test scenario and constructing the test run configuration manually can be painful. Based on the configuration in yellow, the macro picks up the test row and status when you click the “Update From Result” button and generates the configuration automatically. This is illustrated below.

The file can be downloaded via the link Test_Configuration_Generator.

Depending on the QC and QTP versions and framework implementation you have, the suggestions above may or may not apply to your specific environment. However, if you are facing the same challenges we did, we have found that the design and tools described here increased our effectiveness in managing the execution of our tests. Hopefully, you will find them useful in your setup as well.

JDS are experts in managing and driving efficiency in large Quality Center deployments. Contact us to discuss how we can help you!

Posted by JDS Admin in Tech Tips
Monitoring Active Directory accounts with HP BAC

Lately we’ve had an annoying problem with an Active Directory (AD) account used for our HP Business Process Monitor (BPM) scripts getting locked at random times. Because it’s an intermittent problem, it’s hard to track down where the request is coming from.

I wasn’t being alerted straight away about login failures because of how slowly AD replication works at the site I was on. The account would keep working for most BPMs for up to a day after the original failure, and alerts wouldn’t go out until it was too late to check the domain controller logs for the original lock.

One of the Active Directory sysadmins sent me a Microsoft program called lockoutstatus.exe. This tool queries the domain controllers and reports on whether the account is locked out and for how long it has been locked out. Unfortunately, this only lets you check the problem reactively instead of proactively. So I thought that maybe we could monitor the lockout status by using a BPM, recorded in the LDAP recording protocol.

Recording lockoutstatus.exe showed many hits to other DCs as well as additional search functions, but we’re just interested in the bind (mldap_logon_ex function) and the search query (mldap_search_ex function). The “SaveAsParam=True” argument in the mldap_search_ex function saves the LDAP directory entries as parameters; the one we’re interested in is {mldap_attribute_lockoutTime_1}.

This attribute is the amount in seconds since the account was locked, so if it’s not 0, we can fail the transaction and have BAC alert on it. We had some spare transactions in our BAC prod environment while we worked on the problem, but this could also be deployed as a SiteScope VuGen script if you have a standalone SiteScope server and are short on BPM licenses.

Here is the code I used:

You will need to put these lines into your globals.h file if you’re creating a script from scratch (this will be done automatically if you do a record using LDAP protocol script):

#include "mic_mldap.h"
MLDAP mldap1;

Put this code into your Action.c or main block and modify the lr_save_string parameters to suit your environment:

      int Locktime;

      lr_start_transaction("LDAP Login and search");

      lr_save_string("myaccount", "LDAPUser");  // AD account that's authorised to search AD
      lr_save_string(lr_decrypt("4fc861406e270d5297cb2c4097f8"), "LDAPPass"); // Password for account
      lr_save_string("", "DCmachine"); // FQDN of the domain controller
      lr_save_string("lockoutaccount", "SearchUser");  // Account that's being monitored for lockout
      lr_save_string("", "SearchUserDomain");  // Domain for the account that's being monitored for lockout

      mldap1 = 0;

      // Logon to Active Directory or LDAP
      mldap_logon_ex(&mldap1, "LdapLogon",
                     "URL=ldap://{LDAPUser}:{LDAPPass}@{DCmachine}:389",
                     LAST);

      // Execute search; SaveAsParam=True saves each returned attribute
      // as a parameter, e.g. {mldap_attribute_lockoutTime_1}
      mldap_search_ex(&mldap1, "LdapSearch",
                      "Base=CN={SearchUser},OU=Service Accounts,OU=Security Principles,DC={SearchUserDomain},DC", // complete the DN components to suit your domain
                      "Scope=Base",
                      "SaveAsParam=True",
                      LAST);

      lr_end_transaction("LDAP Login and search", LR_AUTO);
      lr_start_transaction("Account not locked");

      Locktime = atoi(lr_eval_string("{mldap_attribute_lockoutTime_1}"));

      if (Locktime != 0) {
            lr_output_message("\n\n Account is locked out \n\n");
            lr_fail_trans_with_error("Account locked out for %s seconds",
                                     lr_eval_string("{mldap_attribute_lockoutTime_1}"));
      }

      lr_end_transaction("Account not locked", LR_AUTO);

      /* you can put this part outside of the transaction block to save yourself a transaction
         because we don't care too much if it doesn't logoff gracefully */
      mldap_logoff_ex(&mldap1);

      return 0;
You can gather a large amount of useful information from Active Directory using the LDAP protocol. Some other possible applications for the LDAP protocol in VuGen are:

  • monitor accounts which need to have their passwords changed a few weeks beforehand
  • monitor password resets of sensitive accounts
  • generate reports from Active Directory


Posted by JDS Admin in Tech Tips


You can’t manage what you can’t see—monitoring provides visibility into how your IT is running. JDS enables monitoring of critical IT systems, from infrastructure layers right through to the end-user. We provide comprehensive insights into your IT system's health and how to fix it when things go wrong.

Why monitor your IT system?

Reduce downtime
Alerting, reporting, trending, and capacity planning reduce downtimes

User experience
Understand the end-user experience and how issues may impact users before they have to report them

Infrastructure health
Monitoring your applications, servers, network devices, and databases reduces time to identify and fix issues

Event correlation
Bringing all alerts into one location allows correlations and root causes to be determined quickly

Automation
Repeated issues with known resolutions can be automated to free up your operations team for more complex tasks

Monitoring support from JDS

Application performance management
In an increasingly digital era, APM suites are becoming critical for a reliable and risk-free enterprise IT landscape. Ensure application performance and user satisfaction by proactively monitoring applications.

Operations analytics
By harnessing machine intelligence, learning algorithms, and the input of subject matter experts, operations analytics can help your IT operations team identify a root cause in minutes.

Real-user monitoring
Measure the true experience of all of your users, all the time and in all locations. Determine how well a business service is being delivered to customers with end-user experience monitoring.

Network monitoring
With the resiliency of today’s IP networks, fault and availability management is not enough. Protect your organisation from poor network performance that can render organisational services unavailable.

Event management and correlation
Consolidate IT event management activities into a single pane of glass that reduces duplication of effort, allows quick identification of the causes of IT incidents, and decreases the time it takes to rectify IT issues.

Infrastructure monitoring
Reliably deliver performance across today’s complex, multi-technology network infrastructure with infrastructure management solutions.

Why choose JDS?

JDS is a leading IT solutions provider and systems integrator, with expertise across industry-leading tools such as ServiceNow, Splunk, AppDynamics, Micro Focus, SAP, PagerDuty, and more. JDS provides local, skilled, and responsive services to support IT projects and operations.

Bringing together expert services, the latest technology, and best practices, we achieve improved IT outcomes for businesses. We do this by giving independent advice, providing training and ongoing support, and implementing IT testing, monitoring, and management solutions.

Monitoring success stories

Posted by JDS Admin in AppDynamics, Tech Tips
An introduction to SiteScope EMS Topology

Say you want to populate your CMDB with all the servers at your company. A discovery tool is the obvious answer, but what if you don’t have DDM (previously known as MAM)? SiteScope has a new feature that allows you to populate your CMDB using EMS integration topology.

Screenshots of the presentation are provided here, but you may also download the PowerPoint slides from this presentation.

Posted by JDS Admin in Tech Tips
Using the BAC JMX Console

This is a short presentation on the BAC JMX Console, a feature that allows you to directly call methods exposed by the underlying BAC API.

Screenshots of the presentation are provided here, but you may also download the PowerPoint slides from this presentation.

Posted by JDS Admin in Tech Tips