Category: Tech Tips

Fast-track ServiceNow upgrades with the Automated Test Framework (ATF)

Why automate?

ServiceNow provides two releases a year, delivering new features to consistently support best-practice processes. ServiceNow has flagged that it will move towards “N-1” upgrades in 2019, meaning every customer will need to accept each release in the future. To keep up, ServiceNow customers should consider automated testing, which provides regression testing that is executed consistently for each release and reduces the time and effort needed per upgrade.

Effective test automation on ServiceNow is provided by the Automated Test Framework (ATF), which is included in the ServiceNow platform for all applications and enables no-code and low-code users to create automated tests with ease. Aided by complementary processes and methodology, ATF reduces upgrade pain by cutting back on manual tests, reducing business impact, and accelerating development efficiency.

What is the Automated Test Framework?

Testing frameworks can be viewed as a set of guidelines used for creating and designing test cases. Test automation frameworks take this a step further by using a range of automation tools designed to make the practice of quality assurance more manageable for testing teams.

ServiceNow’s ATF application combines the above into an easy-to-use, easy-to-implement solution built specifically for ServiceNow applications and functionality. Think of ATF as a tool to streamline your upgrade and QA processes by building tests that check whether software or configuration changes have ‘broken’ any existing functionality. It also means developers are no longer required to resort to invasive activities like code refactoring to generate new test cases.

Overview of test creation in ServiceNow ATF

ATF’s unique features include:

  • Codeless setup of test steps and templates (reusable across multiple tests)
  • End-to-end testing of the Service Catalog, including submitting catalog forms and request fulfilment
  • Module testing, e.g. ITSM (Incident, Problem, and Change Management)
  • Testing forms in the Service Portal (included in the London release)
  • Batching with test suites, e.g. executing multiple tests in a defined order, triggered on a schedule
  • Custom step configurations and unit testing using the Jasmine test framework
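
On that last point, scripted ATF steps support standard Jasmine syntax. As a minimal sketch of the shape such a test takes (the calculatePriority function under test is hypothetical, purely for illustration):

    // Jasmine-style unit test, as used by ATF's scripted test steps.
    // calculatePriority() is an illustrative function, not part of ATF.
    describe('Incident priority rules', function() {
        it('assigns P1 when impact and urgency are both high', function() {
            expect(calculatePriority(1, 1)).toBe(1);
        });
        it('assigns P4 when impact and urgency are both low', function() {
            expect(calculatePriority(3, 3)).toBe(4);
        });
    });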
       

Using the codeless test step configuration in ATF to set a Variable value on a Service Catalog form

Simplify your upgrades

For most software upgrades, testing is a burdensome and complicated process, with poorly defined test practices often leading to compromised software quality and potentially, failed projects. Without automation, teams are forced to manually test upgrades - a costly, time-consuming and often ineffective exercise. In most cases, this can be attributed to a lack of testing resources and missing test processes, leading to inconsistent execution.

To address these issues, ATF introduces structure and enforces consistency across the different tests applied to common ServiceNow use cases. Test consistency is important, as it establishes a baseline of instance stability (including known issues), meaning defects caused by upgrades can be more reliably identified. ATF also allows full automation of test suites, where the run order can be configured for tests that depend on each other.

The following illustrates a simple test dependency created in ATF:

An example ‘hierarchical’ test structure implemented within ATF

A common issue resolved by automated testing is the impact on Agile development cycles. Traditionally, developers would run unit tests by hand, based on steps also written by the developer, potentially leading to missed steps and incomplete scenarios. ATF can be used to run automated unit tests between each build cycle, saving time through early detection and remediation of software issues along with shortened development cycles. The fewer issues present before the upgrade, the fewer issues introduced during the upgrade.

A common requirement of post-upgrade activities is to execute a full regression test against the platform. This means accounting for the countless business rules, client scripts, UI policies, and web forms that may have been affected. Rather than factoring all of these scenarios into lengthy UAT/regression cycles, ATF can reduce the load on the business by running those common and repetitive test cases involving multiple user interactions and data entry.

The example below shows how a common testing use case, validating field ‘state’ on a Service Portal form, is applied using the test step module:

Validating visible & mandatory states of variables with ATF against a Record Producer in Service Portal

Unfortunately, not everything can be automated with ATF out of the box; there are gaps around complex UI components, portal widgets, and custom development work. It is also important to note that custom functionality adds overhead to the framework implementation, requiring specialised scripting knowledge and use of the ‘Step Configurations’ module to create custom test steps.

When configured properly, it’s possible to automate between 40% and 60% of test cases with ATF, depending on environment complexity and timeframes. The benefits are most visible during regression testing post-upgrade, and during unit testing on development projects.

In summary, implementing ATF is a great way to deliver value to ServiceNow upgrade projects and to enable development teams to be more agile. Assessing the scope and depth of testing using an agreed methodology, rather than taking an ‘automate everything’ approach, is a great way to determine what is required and to achieve ‘buy-in’ from others.

JDS is here to help

Recognised as experts in the local Test Automation space for over 12 years, JDS’ specialist team has applied this experience to a proven framework for ServiceNow ATF implementation.

We have developed a Rapid Automated Testing tool which can use your existing data to take up to 80% of the work out of building your automated tests. Contact us today to find out how we can build test cases based on real data and automate the development of your testing suite in a fraction of the time that manual test creation would require.

We can help you get started with ServiceNow ATF. In just a few days, a ServiceNow ATF “Kick Start” engagement will provide you with the detail you need to scope and plan ATF on your platform.

Conclusion

Want to know more? Email [email protected] or call 1300 780 432 to reach our team.


Is DevPerfOps a thing?

New technology terms are constantly being coined. One of our lead consultants answers the question: Is DevPerfOps a thing?


Hopefully it’s no surprise that traditional performance testing has struggled to adapt to the new paradigm of DevOps, or even Agile. The problems can be boiled down to a few key factors:

  1. Performance testing takes time—time to script, and time to execute. A typical performance test engagement is 4-6 weeks, during which time the rest of the project team has made significant progress. 
  2. Introducing performance testing to a CI/CD pipeline is really challenging. Tests typically run for an hour or more and can hold up the pipeline, and they are extremely sensitive to application changes as the scripts operate at the protocol level (typically HTTP requests). 
  3. Performance testing often requires a full-sized, production-like infrastructure environment. These aren’t cheap and are normally unavailable during early development, making “early performance testing” more of an aspirational idea, rather than something that is always practical.

All the software vendors will tell you that DevOps with performance testing is easy, but none of them have solved the above problems. For the most part, they have simply added a CI/CD hook without even attempting to challenge the concept of performance testing in DevOps at all.

A new world

So what would redefining the concept of performance testing look like? Allow me to introduce DevPerfOps. We would define the four main activities of a DevPerfOps engineer as:

  1. Tactical Load Tests
  2. Microbenchmarking
  3. APM and Code Reviews
  4. Container Workloads

Let’s start at the top.

Tactical Load Testing

This can be broadly defined as running performance tests within the limitations of a CI/CD pipeline. Is it perfect? No. Is it better than doing nothing or struggling with a full end-to-end test? Absolutely, yes!

Tactical Load Testing has the following objectives:

  • Leverage CI toolkits like Jenkins for execution and results collection.
  • Use lightweight toolsets for execution (JMeter, Taurus, Gatling, LoadRunner, etc. are all perfectly good choices).
  • Configure performance tests to run for short durations. 15-20 minutes is the ideal balance. This can be achieved by minimising the run logic, reducing think times, and focusing on only testing individual business processes.
  • Expect to run your performance tests on a scaled-down environment, or a single server node. Think about what this will do to your workload profile.
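
To make the first three points concrete, here is a minimal sketch of a Taurus configuration for a short, CI-friendly load test (the endpoint, load levels, and pass/fail threshold are illustrative assumptions, not recommendations):

    # Short tactical load test, suitable for a CI pipeline stage
    execution:
    - concurrency: 10          # modest load for a scaled-down environment
      ramp-up: 2m
      hold-for: 15m            # short duration, per the guidance above
      scenario: checkout

    scenarios:
      checkout:
        requests:
        - https://test.example.com/checkout   # hypothetical endpoint

    reporting:
    - module: passfail
      criteria:
      - avg-rt>500ms for 30s, stop as failed  # fail the build on sustained slowness

Run from Jenkins with something like `bzt tactical-test.yml`, and the pass/fail module gives the pipeline a clean exit code to act on.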

Microbenchmarking

These are mini unit or integration tests designed to ensure that a component operates within a performance benchmark and that deviations are detected.

Microbenchmarking can target things like core application code, SQL queries, and Web Service integrations. There is an enormous amount of potential scope here; for example, you can write a microbenchmark test for a login, a report execution, or a data transformation. Anything goes if it makes sense.
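
As a minimal sketch of the idea (the transformRecords module is hypothetical, and the 200 ms budget is purely illustrative), a microbenchmark can be as simple as a script that fails its build step when a component drifts past its agreed benchmark:

    // microbenchmark.js: fail the build if the component exceeds its budget.
    const assert = require('assert');
    const { performance } = require('perf_hooks');
    const { transformRecords } = require('./transform'); // hypothetical module under test

    const BUDGET_MS = 200; // agreed per-component benchmark (illustrative)
    const sample = Array.from({ length: 10000 }, (_, i) => ({ id: i, value: i * 2 }));

    const start = performance.now();
    transformRecords(sample);
    const elapsedMs = performance.now() - start;

    assert.ok(elapsedMs < BUDGET_MS,
        `transformRecords took ${elapsedMs.toFixed(1)} ms; budget is ${BUDGET_MS} ms`);
    console.log(`OK: ${elapsedMs.toFixed(1)} ms (budget ${BUDGET_MS} ms)`);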

Most importantly, microbenchmarking is a great opportunity to test early in the project lifecycle and provide early feedback.

APM and Code Reviews

In past years, having access to a good APM tool or profiler, or even the source code, has been a luxury. Not anymore: these tools are everywhere, and while a fully featured tool like AppDynamics or New Relic is beneficial for developers and operations, a lot can be achieved with low-cost tools like YourKit Profiler or VisualVM.

APM tools and profilers allow slow execution paths to be identified, and memory usage and CPU utilisation to be measured. Resource-intensive workloads can be easily identified, and performance baked into the application.

Container Workloads

Containers will one day rule the world. If you don’t know much about containers, you need to learn about them fast.

In terms of performance engineering, each container handles a small unit of workload and container orchestration engines like Kubernetes will auto-scale that container horizontally across the entire data centre. It’s not uncommon for applications to run on hundreds or thousands of containers.

What’s important here is that the smallest unit of workload is tiny, and scaling is something that a well-designed application will get for free. This allows your performance tests to scale accordingly, and it allows the whole notion of “what is application infrastructure” to be challenged. In the container world, an application is a service definition… and the rest is handled for you.
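
As a small illustration of that last point, “infrastructure” in the container world can be as little as a declared scaling policy; Kubernetes does the rest. A minimal sketch (the deployment name and thresholds are hypothetical):

    # HorizontalPodAutoscaler: scale the 'checkout' deployment on CPU load
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: checkout-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: checkout          # hypothetical deployment
      minReplicas: 2
      maxReplicas: 100
      targetCPUUtilizationPercentage: 70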

So, is DevPerfOps a thing? We think so, and we think it will only continue to grow and expand from here. It’s time performance testing met the needs of 2018 IT teams, and JDS can help. If you have any questions or want to discuss further, please get in touch with our performance engineering team by emailing [email protected]. If you’re looking for a quick, simple way of getting actionable insights into the performance health of your website or application, check out our current One Second Faster promotion.



Monitoring Atlassian Suite with AppDynamics

Millions of IT professionals use JIRA, Confluence, and Bitbucket daily as the backbone of their software lifecycle. These tools are critical to getting anything done in thousands of organisations. If you’re reading this, it’s safe to guess that’s the case for your organisation too!

Application monitoring is crucial when your business relies on good application performance. Just knowing that your application is running isn’t enough; you need assurance that it’s performing optimally. This is what a JDS client that runs in-house instances of JIRA, Confluence, and Bitbucket, recently found out.

This client, a major Australian bank, started to notice slowness with JIRA, but the standard infrastructure monitoring they were using was not providing enough insight to allow them to determine the root cause.

JDS was able to instrument their Atlassian products with AppDynamics APM agents to gain insights into the performance of the applications. After deployment of the Java Agents to the applications, AppDynamics automatically populated the topology map below, known as a Flow Map. This Flow Map shows the interactions for each application, accompanied by overall application and Business Transaction health, and metrics like load, response time, and errors.

After some investigation, we found the root cause of the JIRA slowness in some Memcached backends. Once we had determined the root cause and resolved the issue, operational dashboards were created to help the Operations team monitor the Atlassian application suite. Below is a screenshot of a subsection of the dashboard showing Database Response Times, Cache Response Times, and Garbage Collection information.

An overview dashboard was also created to assist with monitoring across the suite. The Dashboard has been split out to show Slow, Very Slow, and Error percentages along with Average Response Times and Call Volumes for each application. Drilldowns were also added to take the user directly to the respective application Flow Map. Using these dashboards, they can, at a glance, check the overall application health for the Atlassian products. This has helped them improve the quality of service and user experience.

The bank’s JIRA users now suffer from far fewer slowdowns, particularly during morning peaks when many hurried story updates are taking place in time for stand-ups! The DevOps team is also able to get a heads-up from AppDynamics when slowness starts to occur, rather than when performance has fallen off a cliff.

So if you’re looking for more effective ways to monitor your Atlassian products, give our AppDynamics team a call. We can develop and implement a customised solution for your business to help ensure your applications run smoothly and at peak performance.



5 quick tips for customising your SAP data in Splunk

Understanding how your SAP system is performing can be a time-consuming process. With multiple environments, servers, APIs, interfaces and applications, there are numerous pieces to the puzzle, and stitching together the end-to-end story requires a lot of work.

That’s where SAP PowerConnect can assist. This SAP-certified tool simplifies the process by seamlessly collating your SAP metrics into a single command console: Splunk. PowerConnect compiles and stores data from each component across your SAP landscape and presents the information in familiar, easily accessible, and customisable Splunk dashboards. When coupled with Splunk’s ability to also gather machine data from non-SAP systems, this solution provides a powerful insight mechanism to understand your end user’s experience.

Given the magnitude of information PowerConnect can collate and analyse, you may think that setting it up would take days—if not weeks—of effort for your technical team. But one of PowerConnect’s key features is its incredibly fast time to value. Whatever the size of your environment, PowerConnect can be rapidly deployed and have searchable data available in less than ten minutes. Furthermore, it is highly customisable, providing the ability to collect data from custom SAP modules and display it in meaningful, context-sensitive dashboards, or integrate with Splunk IT Service Intelligence.

Here are some quick tips for customising your SAP data with PowerConnect.

1. Use the out-of-the-box dashboards

SAP software runs on top of the SAP NetWeaver platform, which forms the foundation for the majority of applications developed by SAP. The PowerConnect add-on is compatible with NetWeaver versions 7.0 through to 7.5 including S/4 HANA. It runs inside SAP and extracts machine data, security events, and logs from SAP—and ingests the information into Splunk in real time.

PowerConnect can access all data and objects exposed via the SAP NetWeaver layer, including:

  • API
  • Function Modules
  • IDoc
  • Report Writer Reports
  • Change Documents
  • CCMS
  • Tables

PowerConnect has access to all the data and objects exposed via the SAP NetWeaver layer

If your SAP system uses S/4 HANA, Fiori, ECC, BW components, or all of the above, you can gain insight into performance and configuration with PowerConnect.

To help organise and understand the collated data, PowerConnect comes with preconfigured SAP ABAP and Splunk dashboards out of the box based on best practices and customer experiences:

Sample PowerConnect for SAP: SAP ABAP Dashboard

Sample PowerConnect for SAP: Splunk Dashboard

PowerConnect centralises all your operational data in one place, giving you a single view in real time that will help you make decisions, determine strategy, understand the end-user experience, spot trends, and report on SLAs. You can view global trends in your SAP system or drill down to concentrate on metrics from a specific server or user.

2. Set your data retention specifications

PowerConnect also gives you the ability to configure how, and for how long, you store and visualise data. Using Splunk, you can generate reports from across your entire SAP landscape or focus on specific segments. You may have long data retention requirements or be more interested in day-to-day performance—either way, PowerConnect can take care of the unique reporting needs for your business when it comes to SAP data.

You have complete control over your data, allowing you to manage data coverage, retention, and access:

  • All data sets that are collected by PowerConnect can be turned off, so you only need to ingest data that interests you.
  • Fine-grained control over ingested data is possible by disabling individual fields inside any data set.
  • You can customise the collection interval for each data set to help manage the flow of data across your network.
  • Data sets can be directed to different indexes, allowing you to manage different data retention and archiving rates for different use cases.
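
For Splunk administrators, retention for a given index is ultimately governed by indexes.conf. As a rough sketch (the index name and retention period are illustrative assumptions):

    # indexes.conf: a dedicated index for SAP data with ~13 months' retention
    [sap_powerconnect]
    homePath   = $SPLUNK_DB/sap_powerconnect/db
    coldPath   = $SPLUNK_DB/sap_powerconnect/colddb
    thawedPath = $SPLUNK_DB/sap_powerconnect/thaweddb
    frozenTimePeriodInSecs = 34214400   # data older than ~396 days ages out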

3. Make dashboards for the data that matters to you

You have the full power of Splunk at your disposal to customise the default dashboards, or you can use them as a base to create your own. This means you can use custom visualisations, or pick from those available on Splunkbase to interpret and display the data you’re interested in.

Even better, you’re not limited to SAP data in your searches and on your dashboards; Splunk data from outside your SAP system can be correlated and referenced with your SAP data. For example, you may want to view firewall or load balancer metrics against user volumes, or track sales data with BW usage.

4. Compare your SAP data with other organisational data

It is also possible to combine PowerConnect data with another Splunk app, such as IT Service Intelligence (ITSI), to create, configure, and measure service levels and key performance indicators. SAP system metrics can feed into a centralised view of the health and key performance indicators of your IT services. ITSI can then help proactively identify issues and prioritise resolution of those affecting business-critical services. This out-of-the-box monitoring will give you a comprehensive view of how your SAP system is working.

The PowerConnect framework is extensible and can be adapted to collect metrics from custom developed function modules. Sample custom extractor code templates are provided to allow your developers to quickly extend the framework to capture your custom-developed modules. These are the modules you develop to address specific business needs that SAP doesn’t address natively. As with all custom code, the level of testing will vary, and gaining access to key metrics within these modules can help both analyse usage as well as expose any issues within the code.

5. Learn more at our PowerConnect event

If you are interested in taking advantage of PowerConnect for Splunk to understand how your SAP system is performing, come along to our PowerConnect information night in Sydney or Melbourne in May. Register below to ensure your place.

PowerConnect Explainer Video

How to maintain versatility throughout your SAP lifecycle

There are many use cases for deploying a tool to monitor your SAP system. Releasing your application between test environments, introducing additional users to your production system, or developing new functionality—all of these introduce an element of risk to your application and environment. Whether you are upgrading to SAP HANA, moving data centres, or expanding your use of ECC modules or mobile interfaces (Fiori), you can help mitigate the risk with the insights SAP PowerConnect for Splunk provides.

Upgrading SAP

Before you begin upgrades to your SAP landscape, you need to verify several prerequisites such as hardware and OS requirements, source release of the SAP system, and background process volumes. There are increased memory, session, and process requirements when performing the upgrade, which need to be managed. The SAP PowerConnect solution provides you with all key information about how your system is responding during the transition, with up-to-date process, database, and system usage information.

Triaging and correlating events or incidents is also easier than ever with PowerConnect, thanks to its ability to store historic information as a time series. You can look back to a specific point in time and see the health of the system or of a specific server, the configuration settings at that moment, and so on. This is a particularly useful feature for regression testing.

Supporting application and infrastructure migration

Migration poses risks. It’s critical to mitigate those risks through diligent preparation, whether it’s ensuring your current code works on the new platform or that the underlying infrastructure will be fit for purpose.

For example, when planning a migration from an ABAP-based system to an on-premise SAP HANA landscape, there are several migration strategies you can take, depending on how quickly you want to move and what data you want to bring across. With a greenfield deployment, you start from a clean setup and bring across only what you need. The other end of the spectrum is a one-step upgrade with a database migration option (DMO), where you migrate in-place.

Each option will have its own advantages and drawbacks; however, both benefit from the enhanced visibility that PowerConnect provides throughout the deployment and migration process. As code is deployed and patched, PowerConnect will highlight infrastructure resource utilisation issues, greedy processes, and errors from the NetWeaver layer. PowerConnect can also analyse custom ABAP code and investigate events through ABAP code dumps by ID or user.

Increasing user volumes

Deployments can be rolled out in stages, whether by user group or by application functionality. This is an effective way to ease users onto the system, lessen the load on end-user training, and reduce support desk tickets caused by confusion. As user volume increases, you may find that people don’t behave the way you thought they would—meaning your performance test results may not match up with real-world usage. In this case, PowerConnect provides the correlation between end-user behaviour and the underlying SAP infrastructure performance. This gives you confidence that if the system starts to experience increased load, you will know about it before it becomes an issue in production. You can also use PowerConnect to learn the new trends in user activity, and feed that information back into the testing cycle to make sure you’re testing as close to real-world scenarios as possible.

Unexpected user behaviour isn’t always bad news, and PowerConnect can highlight it in a positive light. You might find that as new users are introduced to the system, a feature is less popular than you expected; you can then turn the feature off to reduce licence usage, or opt to promote it internally. PowerConnect will not only give you visibility into system resource usage, but also into what users are doing on the system to cause that load.

Feedback across the development lifecycle

PowerConnect provides a constant feedback solution with correlation and insights throughout the application delivery lifecycle. Typically migrations, deployments, and upgrades follow a general lifecycle of planning, deploying, then business as usual, before making way for the next patch or version.

During planning and development, you want insights into user activity and the associated infrastructure performance to understand the growth of users over time.

  • With the data retention abilities of Splunk, PowerConnect can identify trends from the last hour right back to the last year and beyond. These usage trends can help define performance testing benchmarks by providing concurrent user volumes, peak periods, and what transactions the users are spending time on.
  • In the absence of response time SLAs, page load time goals can be defined based on current values from the SAP Web Page Response Times dashboard.
  • With the ability to compare parameters, PowerConnect can help you make sure your test and pre-prod environments have the same configuration as production. When the test team doesn’t have access to run RZ10 to view the parameters, a discrepancy can be easy to miss and cause unnecessary delays.

Once in production, PowerConnect also gives you client-centric and client-side insights.

  • You can view the different versions of SAP GUI that users have installed or see a world map showing the global distribution of users.
  • Splunk can even alert from a SecOps perspective, and notify you if someone logs in from a country outside your user base. You can view a list of audited logins and browse the status of user passwords.
  • The power of Splunk gives you the ability to alert or regularly report on trends in the collected data. You can be informed if multiple logins fail, or when the CPU vs Work processes is too high. Automated scripts can be triggered when searches return results so that, for example, a ServiceNow ticket can be raised along with an email alert.
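
As an illustrative sketch of that kind of alert (the index, sourcetype, and field names here are assumptions; your PowerConnect deployment will name them differently), a scheduled search for repeated failed logins might look like:

    index=sap sourcetype=sap:auditlog action=login status=failure
    | stats count AS failures BY user
    | where failures > 5

Scheduled to run every few minutes, a search like this can drive an email alert or raise a ServiceNow ticket via an alert action.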

Even after a feature has completed its lifecycle and is ready to be retired, PowerConnect remains rich with historical data describing usage, issues, and configuration settings in Splunk, even after the raw data has been aggregated or has disappeared from SAP.

Backed by the power of Splunk, and with the wealth of information being collected, the insights provided by PowerConnect will help you effectively manage your SAP system throughout your SAP lifecycle.

Implementing an electronic signature in ALM

This is where organisations like the FDA (Food and Drug Administration in the United States) and TGA (Therapeutic Goods Administration in Australia) come in, enforcing regulatory controls around all aspects of the manufacturing process to minimise risk and ensure a high level of quality.

These regulatory bodies understand that effective quality assurance goes much further than just regulating the raw materials and manufacturing process. Any software used to control or support the manufacturing process must also adhere to strict quality standards, because a bug in the software can lead to problems in manufacturing with real-world consequences for patients. Software quality assurance can therefore literally be a matter of life or death.

To ensure that medical manufacturers conduct satisfactory software testing and maintain the required quality assurance standards, the FDA has published the General Principles of Software Validation document, which “outlines general validation principles that the Food and Drug Administration (FDA) considers to be applicable to the validation of medical device software or the validation of software used to design, develop, or manufacture medical devices.”

The JDS solution

JDS Australia recently implemented HPE Application Lifecycle Management (ALM) for one of our clients, a manufacturer of medicine delivering more than 10,000 patient doses per week to hospitals in Australia and overseas. ALM is a software testing tool that assists with the end-to-end management of the software testing lifecycle. This includes defining functional and non-functional requirements for a given application and creating test cases to confirm those requirements are met. It also manages all aspects of test execution, the recording and managing of defects, and reporting across the entire testing effort. ALM enables an organisation to implement and enforce their test strategy in a controlled and structured manner, while providing a complete audit trail of all the testing that was performed.

To comply with the FDA requirements, our client required customisation of ALM to facilitate approvals and sign-off of various testing artefacts (test cases, test executions, and defects). The FDA stipulates that approvals must incorporate an electronic signature that validates the user’s identity to ensure the integrity of the process. As an out-of-the-box implementation, ALM does not provide users with the ability to electronically sign artefacts. JDS developed the eSignature add-in to provide this capability and ensure that our client conforms to the regulatory requirements of the FDA.

The JDS eSignature Add-in was developed in C# and consists of a small COM-aware dll file that is installed and registered on the client machine together with the standard ALM client. The eSignature component handles basic field-level checks and credential validation, while all other business rules are managed from the ALM Workflow Customisation. This gives JDS the ability to implement the electronic signature requirements as stipulated by the FDA, while giving us the flexibility to develop customer-specific customisations and implement future enhancements without the need to recompile and reinstall the eSignature component.

Let’s look at a simple test manager approval for a test case to see how it works.

To start with, new “approval” custom fields are added to the Test Plan module which may contain data such as the approval status, a reason/comment and the date and time that the approval was made. These fields are read-only with their values set through the eSignature Workflow customisation. A new “Approve Test” button is added to the toolbar. When the user clicks this button, the Test Approvals form is presented to the user who selects the appropriate approval status, provides a comment, and enters their credentials. When the OK button is clicked, the eSignature component authenticates the user in ALM using an API function from the HPE Open Test Architecture (OTA) COM library.

If the user is successfully authenticated, eSignature passes the relevant information to the ALM workflow script which sets the appropriate field values. The approvals functionality can be further customised to do things such as making the test case read-only or sending an email to the next approver in line to review and approve the test case.

As it currently stands, the eSignature add-in has been implemented in three modules of ALM – Test Plan for test cases that require two levels of review and approval, Test Lab for test execution records that require three levels of review and approval, and Defects to manage the assignment and approval of defects. This can easily be expanded to include other modules, such as the approval of test requirements or software releases.

The JDS eSignature add-in has a very small footprint, is easy to install and configure, and provides our clients with the ability to effectively implement an electronic signature capability as part of their software testing strategy.

If you have compliance requirements or are seeking ways to automate your test management processes, contact our support team at JDS Australia. Our consultants are highly skilled across a range of software suites and IT platforms, and we will work with your business to develop custom solutions that work for you.

Contact us at:

T: 1300 780 432

E: [email protected]

Vugen and GitHub Integration

With the release of LoadRunner 12.53, VuGen now has built-in GitHub integration. That means you not only have access to GitHub for saving versions of your script, but also to other GitHub features like issue tracking, access control, and wikis.

Here’s an overview of VuGen’s GitHub integration to get you up and running.

Getting Started

First off, you’ll need a personal GitHub login. You can sign up for free at github.com/join. Note that free repositories are publicly available.

You’ll also need LoadRunner 12.53 or higher from HPE.

GitHub Overview

VuGen’s GitHub integration (and GitHub in general) works by managing three versions of your script at a time.

Vugen and GitHub Integration 1

  1. The one you see in VuGen is your working copy. You can develop and replay your script as usual, and save it locally whenever you want.
  2. When you want a backup of your script – e.g. before doing correlation or rerecording – you can make a checkpoint by committing the script. This saves the script to a local staging area.
  3. After several commits, or perhaps at the end of the day, you might be ready to save everything to GitHub. To do this, you Push the script.

Using GitHub integration straight from VuGen

The following example shows you how to push your first VuGen script to GitHub.

1. Starting out – Creating a Repository

Log into GitHub and create a new repository:

Vugen and GitHub Integration 2

Fill in the details and click “Create Repository”. With a free GitHub account, the script will be publicly available; with a paid account, you can make it private.

Vugen and GitHub Integration 3

2. VuGen Version Control – Create Local Git Repository

Now create a script in VuGen – in our example it’s ‘BankofSunshine’.

You’ll see a new ‘Version Control’ menu available in VuGen. Choose the option ‘Create a local Git Repository’.

Vugen and GitHub Integration 4

VuGen manages the files to include so you don’t need to create a .gitignore file. If you prefer to manage it yourself, you can do that too.
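
If you do take the do-it-yourself route, a hand-rolled .gitignore for a VuGen script folder might look something like this (the patterns are illustrative of typical VuGen working files, not an official list):

    # Replay results and logs
    result*/
    output.txt
    *.log

    # Generated and backup files
    *.bak
    *.idx
    combined_*.c
    pre_cci.c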

3. Commit Changes to the Staging Area

Now you need to commit your script to the local repository. Do this each time you’ve made changes that you might want to push to GitHub, or whenever you want to be able to roll back your changes.

When you commit, your local repository is ready to be pushed up to GitHub – but is still only available to you.

Vugen and GitHub Integration 5

4. Push Changes to GitHub

Once you are ready to save your script up to GitHub, you will need to Push the changes.

The first time you do this with your script you will need to tell VuGen some details about the GitHub repository.

Enter the repository details you created in Step 1:

Vugen and GitHub Integration 7

Now your newly created script is saved to GitHub.

5. There’s no step 5.

That’s all you need to do. When you go to GitHub and click on your repository, you will see all the files that you just pushed:

Vugen and GitHub Integration 8

To keep track of changes made locally to the script, VuGen will show you which files have been updated, marked with a red tick:

Vugen and GitHub Integration 9

While you can access your scripts in ALM from the Controller, you can’t yet access your scripts in GitHub from the Controller. You’ll need to pull down a local copy before you run your test.

Now you’re up and running, why not explore more of what GitHub has to offer? Each script saved to GitHub comes with a wiki and issues register. These could come in handy for large or tricky scripts, or for handover to another team.

Share your thoughts on VuGen GitHub integration below.

Filtered Reference Fields in ServiceNow

One common requirement JDS sees when working with ServiceNow customers is the need to dynamically filter the available choices in reference fields. This can be useful in both general form development and record producers.

Consider the following business requirement:
A national charity wants to implement a feedback form for its staff.
As staff work across a variety of areas, it’s important to match staff with the correct department, so the correct department manager can be notified about the feedback.

The charity has provided us with some sample data for testing:

Departments

Filtered Reference Fields in ServiceNow 1

Department Staff

Filtered Reference Fields in ServiceNow 2

As this feedback form is available to anyone working within the charity, JDS has built a record producer to capture feedback using the Service Catalog within ServiceNow.

Filtered Reference Fields in ServiceNow 3

To limit the available choices to only staff within a particular department, we’ll need to use a script that is triggered whenever anyone starts typing in the field or accesses a value lookup for the variable “Who would you recommend?”

Although there are numerous ways to approach this, always look for the simplest option. Rather than calling a custom script, use the Reference qualifier field in ServiceNow to set up the relationship.

Filtered Reference Fields in ServiceNow 4

Normally, fields on a form can be referenced using current.fieldname, but record producers within a service catalog are slightly different and need a more qualified reference: current.variables.fieldname.
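
As an illustration, the Reference qualifier on the staff variable can be a single dynamic expression (this assumes a ‘department’ column on the staff table and a ‘department’ variable on the record producer; your field names may differ):

    javascript:'department=' + current.variables.department

ServiceNow evaluates the expression each time the lookup is opened, so the list of staff always reflects the currently selected department.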

Let’s look at the results…

If we select “Community Outreach” the list of staff available under “Who would you recommend?” is limited to Jane Brown and Sarah Smith.

Filtered Reference Fields in ServiceNow 5

If we select “Youth Action” the list of staff available under “Who would you recommend?” is limited to Matthew Jones and Sarah Smith.

Filtered Reference Fields in ServiceNow 6

ServiceNow is a powerful platform that allows for advanced customisations to ensure a good fit with business requirements.

Straight-Through Processing with ServiceNow

How much time is your organisation wasting on manual processes?

What is the true cost of lost productivity?

Straight-through processing (STP) has long been recognised as the Holy Grail of business processes, as it promises to eliminate manual paper forms along with human intervention at the back end. By avoiding duplication of effort, straight-through processing can drastically improve the speed and accuracy of business processes.

Although straight-through processing has been attempted by numerous enterprise vendors, such as SAP, IBM, and Oracle, the problem has been that no single system handles data collection in a consistent, comprehensive manner.

Single System of Record

ServiceNow is unique among enterprise applications in that it uses a single system of record with common/shared supporting processes, such as workflows and notifications. Although there are hundreds of tables in the ServiceNow database, they are all closely related, and are often based on the two core tables — CI (Configuration Item in a CMDB) and Task. In ordinary business terms, CI and Task translate into objects and actions—and ultimately, all business processes revolve around things/people, and actions applied to them.
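
To make this concrete, here is a quick server-side sketch using the standard GlideRecord API (the query itself is illustrative). Because incident, problem, and change records all extend the task table, a single query can span what would otherwise be separate processes:

    // One query across every task-based process in the platform
    var task = new GlideRecord('task');
    task.addQuery('assigned_to', gs.getUserID());
    task.addActiveQuery();
    task.query();
    while (task.next()) {
        // sys_class_name identifies which process each record belongs to
        gs.info(task.getValue('sys_class_name') + ': ' + task.getValue('number'));
    }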

Source: https://thisiswhatgoodlookslike.files.wordpress.com/2014/05/platform.png

By using a common architecture for all applications, ServiceNow avoids the problem of integrating with conflicting data structures/applications. In essence, everything speaks the same language.

Straight-through processing

ServiceNow has a single system of record, making it ideally suited to straight-through processing. ServiceNow has the flexibility to adapt to multiple different application requirements whilst leveraging common structures and components. For example, fleet car management, purchase orders, and timesheet entries are entirely different business processes, but in essence they deal with objects (cars/orders/people) and activities (driving/tracking expenditure/working hours), and have similar requirements (approvals, workflow, records management, notifications, common master data, etc.).

Essentially, the ServiceNow Automation Platform allows different business processes to be captured without changing the underlying system.

JDS recommends deploying straight-through processing in an iterative manner, using an agile approach.

  1. Validate user input with the target system
  2. Digitise paper forms
  3. Straight-through processing
  4. Automation

By adopting an agile approach with implementation in phases, organisations can see incremental benefits throughout the project.

Phase One: Validate user input with the target system

For straight-through processing to be successful, ServiceNow needs to validate incoming information to ensure it’s compatible with the target system. To do this, ServiceNow forms need to be pre-populated with values taken from the core system.

Straight-Through Processing with ServiceNow 2

For example, organisational data such as cost centres and approvers would be integrated with ServiceNow overnight to provide defaults/selection values within ServiceNow form fields. This approach provides the best performance, as access to up-to-the-minute data is not typically required, and the data integration ultimately ensures the consistency and accuracy of straight-through processing.

Phase Two: Digitise paper forms

Once ServiceNow has default values to validate end-user input at the point of entry, existing paper forms can be digitised.

In this phase, the front end is transformed with pre-filled forms built on a responsive UI, while the back-end process remains unchanged.

Straight-Through Processing with ServiceNow 3

Although it is possible to automate the entire business process at once, in practice, most organisations prefer a phased approach so they can manage change and reduce the risk of any inadvertent impact on their core system during this time of transition.

Note that this approach immediately has a positive impact on end users, as the old paper forms have been transformed and are now made available through a searchable and user-friendly service catalog, while back-end processes stay largely the same, avoiding any disruption to existing services. The organisation has the opportunity to introduce streamlined processing without causing upheaval in the back office.

Phase Three: Straight-through processing

Once the digitisation of paper forms has been established, it is time to automate the process to introduce efficiency to the back office.

Straight-Through Processing with ServiceNow 4

As straight-through processing is often interacting with core systems associated with Finance and HR, JDS recommends establishing an approval workflow process. Instead of manually entering all the request information, back office staff now provide oversight and approval of straight-through processing.

Phase Four: Automation

There may be some business processes which involve multiple parties and integrate with multiple systems end-to-end that could be automated in their entirety, such as software ordering or employee onboarding. In this case, ServiceNow can streamline straight-through processing with the use of ServiceNow Orchestration.

Orchestration is implemented through workflows in ServiceNow, allowing JDS to configure complex processes that span multiple systems. These may include activities such as approvals, and follow-on tasks that use the data returned by a system called inside the workflow to determine the next action.

Straight-Through Processing with ServiceNow 5

How much are manual processes costing your organisation right now?

Think of how much productivity is lost because users are forced to fill out paper forms, or back office staff are required to enter information into multiple systems. By implementing straight-through processing with ServiceNow, JDS can help you streamline your business processes, saving time and money, while radically improving your customer satisfaction.

Splunk: Using Regex to Simplify Your Data

Splunk is an extremely powerful tool for extracting information from machine data, but machine data is often structured in a way that makes sense to a particular application or process while appearing as a garbled mess to the rest of us. Splunk allows you to cater for this and retrieve meaningful information using regular expressions (regex).

You can write your own regex to retrieve information from machine data, but it’s important to understand that Splunk does this behind the scenes anyway, so rather than writing your own regex, let Splunk do the heavy lifting for you.

The Splunk field extractor is a WYSIWYG regex editor.

Splunk 1

When working with something like Netscaler log files, it’s not uncommon to see the same information represented in different ways within the same log file. For example, userIDs and source IP addresses appear in two different locations on different lines.

  • Source 192.168.001.001:10426 – Destination 10.75.001.001:2598 – username:domainname x987654:int – applicationName Standard Desktop – IE10
  • Context [email protected] – SessionId: 123456- desktop.biz User x987654: Group ABCDEFG

As you can see, the log file contains the following userID and IP address, but in different formats.

  • userID = x987654
  • IP address = 192.168.001.001

The question then arises, how can we combine these into a single field within Splunk?
The answer is: regex.

You could write these regex expressions yourself, but be warned: although Splunk adheres to the PCRE (PHP) implementation of regex, in practice there are subtle differences (such as no formal use of lookahead or lookbehind).

So how can you combine two different regex strings to build a single field in Splunk? The easiest way is to let Splunk build the regex for you in the field extractor.

Splunk 2

If you work through the wizard using the Regular Expression option and select the user value in your log file (from in front of “username:domainname”), you’ll reach the save screen. (Be sure to use the validate step in the wizard, as this allows you to eliminate any false positives and automatically refine your regex to ensure it works accurately.)

Stop when you get to the save screen. Don’t click Finish.

Splunk 3

Copy the regular expression from this screen and save it in Notepad.
Then use the small back arrow (highlighted in red) to move back through the wizard to the Select Sample screen again.
Now go through the same process, but selecting the userID from in front of “User.”
When you get back to the save screen, you’ll notice Splunk has built another regex for this use case.

Splunk 4

Take a copy of this regex and put it in Notepad.
Those with an eagle eye will notice Splunk has inadvertently included a trailing space with this capture (underlined above). We’ll get rid of this when we merge these into a single regex using the following logic.
[first regex]|[second regex], where each regex contains a named capture group of the form (?P<fieldname>capture regex)

Splunk 5

Essentially, all we’ve done is join the two Splunk-created regex strings together using the pipe character ‘|’.
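
Applied to our sample log lines, the combined expression looks something like this (simplified for illustration; the regex Splunk generates for you will be more defensive, and note that Splunk accepts the same field name in both branches of the alternation):

    username:domainname\s+(?P<userID>[^\s:]+):|User\s+(?P<userID>[^\s:]+):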
Copy the newly joined regex expression and again click the back arrow to return to the first screen.

Splunk 6

But this time, click on I prefer to write the regular expression myself.

Splunk 7

Then paste your regex and be sure to click on the Preview button to confirm the results are what you’re after.

Splunk 8

And you’re done… click Save and then Finish, and you can now search on a single field that combines multiple regexes, ensuring the field is correctly populated regardless of the log format.

How to fix common VuGen recording problems

A problem that occurs on occasion when using VuGen involves Internet Explorer when starting a recording of a Web HTTP/HTML protocol script.

These problems can manifest in different ways: a hanging or unresponsive IE window, a Windows error message, or a VuGen crash.