Tech Tips

Implementing an electronic signature in ALM


When it comes to quality assurance, consumers have come to expect and demand a high standard from manufacturers. Quality failures (e.g. the Ford Pinto's reputation for fuel tank fires) have the potential to cause enormous financial losses running into millions of dollars, not to mention the potential for personal injury or loss of life. Nowhere is this more apparent than in the medical industry, where mistakes can have devastating consequences.

 

This is where organisations like the FDA (Food and Drug Administration in the United States) and TGA (Therapeutic Goods Administration in Australia) come in, enforcing regulatory controls around all aspects of the manufacturing process to minimise risk and ensure a high level of quality.

These regulatory bodies understand that effective quality assurance goes much further than just regulating the raw materials and manufacturing process. Any software used to control or support the manufacturing process must also adhere to strict quality standards, because a bug in the software can lead to problems in manufacturing with real-world consequences for patients. Software quality assurance can therefore literally be a matter of life or death.

To ensure that medical manufacturers conduct satisfactory software testing and maintain the required quality assurance standards, the FDA has published the General Principles of Software Validation document, which “outlines general validation principles that the Food and Drug Administration (FDA) considers to be applicable to the validation of medical device software or the validation of software used to design, develop, or manufacture medical devices.”

The JDS solution

JDS Australia recently implemented HPE Application Lifecycle Management (ALM) for one of our clients, a manufacturer of medicine delivering more than 10,000 patient doses per week to hospitals in Australia and overseas. ALM is a software testing tool that assists with the end-to-end management of the software testing lifecycle. This includes defining functional and non-functional requirements for a given application and creating test cases to confirm those requirements are met. It also manages all aspects of test execution, the recording and managing of defects, and reporting across the entire testing effort. ALM enables an organisation to implement and enforce their test strategy in a controlled and structured manner, while providing a complete audit trail of all the testing that was performed.

To comply with the FDA requirements, our client required customisation of ALM to facilitate approvals and sign-off of various testing artefacts (test cases, test executions, and defects). The FDA stipulates that approvals must incorporate an electronic signature that validates the user’s identity to ensure the integrity of the process. As an out-of-the-box implementation, ALM does not provide users with the ability to electronically sign artefacts. JDS developed the eSignature add-in to provide this capability and ensure that our client conforms to the regulatory requirements of the FDA.

The JDS eSignature Add-in was developed in C# and consists of a small COM-aware dll file that is installed and registered on the client machine together with the standard ALM client. The eSignature component handles basic field-level checks and credential validation, while all other business rules are managed from the ALM Workflow Customisation. This gives JDS the ability to implement the electronic signature requirements as stipulated by the FDA, while giving us the flexibility to develop customer-specific customisations and implement future enhancements without the need to recompile and reinstall the eSignature component.

Let’s look at a simple test manager approval for a test case to see how it works.

To start with, new “approval” custom fields are added to the Test Plan module which may contain data such as the approval status, a reason/comment and the date and time that the approval was made. These fields are read-only with their values set through the eSignature Workflow customisation. A new “Approve Test” button is added to the toolbar. When the user clicks this button, the Test Approvals form is presented to the user who selects the appropriate approval status, provides a comment, and enters their credentials. When the OK button is clicked, the eSignature component authenticates the user in ALM using an API function from the HPE Open Test Architecture (OTA) COM library.

If the user is successfully authenticated, eSignature passes the relevant information to the ALM workflow script which sets the appropriate field values. The approvals functionality can be further customised to do things such as making the test case read-only or sending an email to the next approver in line to review and approve the test case.

As it currently stands, the eSignature has been implemented in three modules of ALM – Test Plan for test cases that require two levels of review and approval, Test Lab for test execution records that require three levels of review and approval, and Defects to manage the assignment and approvals of defects. This can easily be expanded to include other modules, such as the approvals of test requirements or software releases.

The JDS eSignature add-in has a very small footprint, is easy to install and configure, and provides our clients with the ability to effectively implement an electronic signature capability as part of their software testing strategy.

If you have compliance requirements or are seeking ways to automate your test management processes, contact our support team at JDS Australia. Our consultants are highly skilled across a range of software suites and IT platforms, and we will work with your business to develop custom solutions that work for you.

Contact us at:

T: 1300 780 432

E: contactus@jds.net.au

Our team on the case

'Do what you can, with what you have, where you are.' - Theodore Roosevelt.

Reinhardt Moller

Technical Consultant

Length of Time at JDS

9.5 years

Skills

Products: Primary: HPE ALM and UFT; Secondary: HPE BSM, LoadRunner, and ServiceNow

Personal: Photography

Workplace Passion

Helping customers build solutions to tackle testing and monitoring challenges

Posted by Amy Clarke in Case Study, Energy and Resources, Government, NGO, Tech Tips, 0 comments
Your organisation deserves a good dashboard (and here’s why)


Cars cost a lot of money, and when a driver gets behind the wheel, they want to know that every component is working correctly. It can mean the difference between life and death—not to mention getting to your destination on time! For this reason, vehicle dashboards are painstakingly designed to be simple yet functional, so that virtually anyone can understand them at a glance.

In a similar vein, think how much investment was involved in building up your organisation and its IT infrastructure—and let's not forget ongoing maintenance! The cost of system failure can be just as serious for your business: missed destinations and missed deadlines. In some sectors, such as health or search and rescue, it can even lead to injury or loss of life. With the consequences of lost visibility in mind, take a look at your organisation's dashboards (if they exist!). Ask yourself—are they as easy to understand as the dashboard of a car?

Most organisations would answer 'no' to that question. All too often, dashboards exist because the organisation's monitoring solution provided one out-of-the-box. That's fine if your intended audience is technically inclined and understands what it means when there is a 'memory bottleneck' or the 'committed memory in use is too high'. These alerts, however, might mean nothing to upper management or the executive team, who are directly responsible for approving your team's budget. Action needs to be taken to translate the information so that it is accessible to all your key decision-makers. So what are the first steps?

Here are three initial items to consider:

  1. Context is everything! Without context, your executive team may not understand the impact of an under-resourced ESX server that's beginning to fail. If, however, your dashboard were to show that the ESX server happens to host the core income stream systems for the organisation, you may have their attention (and funding).
  2. Visualise the data! Approximately 60% of the world's population are visual thinkers, so your dashboard should be designed with that in mind. Find a way to visualise your data. Show the relationships and dependencies between systems. Oh, and "death to pie charts!".
  3. Invest the time and effort! Find your creative spark, and brainstorm as a team. A well-designed dashboard will pay ongoing dividends with every incident managed or business case written. Make sure you allot time to prove your work against SLAs.

If you need help with dashboard development or design, give JDS a call on 1300 780 432 and speak with one of our friendly consultants.

Our team on the case

Never burn a bridge; humility may require you to take a step backward one day.

Warren Saunders

Consultant

Length of Time at JDS

6 years

Workplace Solutions

All things Event Management (HPE Operations Management i, HPE Business Service Management, HPE Business Process Monitoring, HPE Sitescope, and custom integrations for everything in between).

Workplace Passion

  • Making complex issues easy to understand through competent technical writing, presentation delivery and creative design.
  • Finding solutions to problems which were previously placed in the “too hard” basket.

Ensure IT works.

Chris Younger

Delivery Manager

Length of Time at JDS

10 years

Skills

HP Certified Instructor, HP Business Service Management, HP LoadRunner, Splunk Sales Engineer, Certified in ServiceNow.

Workplace Passion

I am a big advocate for the value provided by Synthetic Business Process monitoring. When I first saw it 10 years ago in HP Business Availability Center I was impressed and I am still touting the benefits after all this time. Time and time again, I have seen it provide invaluable visibility into the user experience. These days it can be achieved using trendy newer tools such as Splunk for a fraction of the price it previously cost.

Work hard, Play hard.

Andy Erskine

Consultant

Length of Time at JDS

2.5 years

Skills

  • The CA Suite of tools, notably CA APM, CA Spectrum, CA Performance Manager
  • Splunk

Workplace Passion

Learning new applications and applying them to today’s problems.
Posted by wsaunders in Monitor, Tech Tips, 0 comments
Service portal simplicity


The introduction of the Service Portal, using AngularJS and Bootstrap, has given ServiceNow considerable flexibility, allowing customers to develop portals to suit their own specific needs.

While attribution depends on whether you subscribe to Voltaire, Winston Churchill, or Uncle Ben from Spider-Man, “with great power comes great responsibility”, and this is especially true when it comes to the Service Portal. Customers should tread carefully to make sure they use the flexibility of the portal wisely.

A good example came up during a recent customer engagement, where the requirement was to allow some self-service portal users to see all the incidents within their company. This particular customer provides services across a range of other smaller companies, and wanted key personnel in those companies to see all their incidents (without being able to see those belonging to other companies, and without being able to update or change others’ incidents).

The temptation was to build a custom widget from scratch using AngularJS, but simplicity won the day. Instead of coding, testing, and debugging a custom widget, JDS reused an existing widget, wrapping it with a simple security check, and reduced the amount of code required to implement this requirement down to one line of HTML and two lines of server-side code.

The JDS approach was to instantiate a simple list widget on-the-fly, but only if the customer’s security requirements were met.

Normally, portal pages are designed using Service Portal’s drag-and-drop page designer, visible when you navigate to portal configuration. In this case, we’re embedding a widget within a widget.

We’ll create a new widget called secure-list that dynamically adds the simple-list-widget only if needed when a user visits the page.

Let’s look at the code—all three lines of it:

HTML

<div><sp-widget widget="data.listWidget"></sp-widget></div>

By dynamically creating a widget only if it meets certain requirements, we can control when this particular list widget is built.

Server-Side Code

(function() {
    if (gs.getUser().isMemberOf('View all incidents')) {
        data.listWidget = $sp.getWidget('widget-simple-list', {table:'incident',display_field:'number',secondary_fields:'short_description,category,priority',glyph:'user',filter:'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',sp_page:'ticket',color:'primary',size:'medium'});
    }
})();

This code will only execute if the current user is a member of the group View all incidents, but the magic occurs in the $sp.getWidget, as this is where ServiceNow builds a widget on-the-fly. The challenge is really, ‘where can we find all these options?’

How do you know what options are needed for any particular widget, given those available change from one widget to another? The answer is surprisingly simple, and can be applied to any widget.

When viewing the service portal, administrators can use ctrl-right-click to examine any widgets on the page. In this case, we’ll examine the way the simple list widget is used by selecting instance options.

Immediately, we can see there are a lot of useful values that could be set, but how do we know which are critical, and what additional options there are?

How do we know what we should reference in our code?

Is “Maximum entries” called max, or max_entry, or max_entries? Also, are there any other configuration options that might be useful to us?

By looking at the design of the widget itself in the ServiceNow platform (by navigating to portal-> widgets), we can scroll down and see both the mandatory and optional configuration values that can be set per widget, along with the correct names to use within our script.
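
As a rough guide, the option schema is simply a JSON array describing each option. The entry below is purely illustrative (the actual field names for the simple list widget are the ones you read off the widget record itself), but it shows the level of detail available:

// Illustrative only: the general shape of a widget option schema entry, not the simple list widget's actual schema
var exampleOptionSchema = [
    { "name": "maximum_entries", "label": "Maximum entries", "type": "integer", "hint": "Limit the number of rows shown" },
    { "name": "display_field",   "label": "Display field",   "type": "string" }
];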

Looking at the actual widget we want to build on-the-fly, we can see both required fields and a bunch of optional schema fields. All we need to do to make our widget work is supply these as they relate to our requirements. Simple.

{table:'incident',display_field:'number',secondary_fields:'short_description,category,priority',glyph:'user',filter:'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()',sp_page:'ticket',color:'primary',size:'medium'}

Also, notice the filter in our widget settings, as that limits results shown according to the user’s company…

opened_by.company=javascript:gs.getUser().getCompanyID()

This could be replaced with a dynamic filter to allow even more flexibility, should requirements change over time, allowing the filter to be modified without changing any code.
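
One lightweight way to do that (a sketch only; the property name here is invented for illustration, and this is not necessarily the only approach) is to keep the encoded query in a system property and read it inside the server script when the widget is instantiated:

// Inside the secure-list server script: read the filter from a (hypothetical) system property,
// falling back to the original hard-coded encoded query if the property is not set.
var dynamicFilter = gs.getProperty('x_company.secure_list.filter',
    'active=true^opened_by.company=javascript:gs.getUser().getCompanyID()');

if (gs.getUser().isMemberOf('View all incidents')) {
    data.listWidget = $sp.getWidget('widget-simple-list', {
        table: 'incident',
        display_field: 'number',
        secondary_fields: 'short_description,category,priority',
        glyph: 'user',
        filter: dynamicFilter,
        sp_page: 'ticket',
        color: 'primary',
        size: 'medium'
    });
}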

The moral of the story is keep things simple and…

Don't reinvent the wheel

Our team on the case

Document as you go.

Peter Cawdron

Consultant

Length of Time at JDS

5 years

Skills

ServiceNow, LoadRunner, HP BSM, Splunk.

Workplace Passion

I enjoy working with the new AngularJS portal in ServiceNow.
Posted by Peter Cawdron in ServiceNow, Tech Tips, 0 comments
Learning from real-world cloud security crises


In the modern interconnected world, an approach to IT security that simply plugs holes with additional firewalls or intrusion detection systems provides only limited safety. Today, the most significant ‘holes’ in a company's IT infrastructure are those which are required to be there—these are the access points that allow your customers to interact with, access, and update applications and data. But while these access points are open to your customers, they are also accessible to parties intent on harming or exploiting your business.

98.9% of attacks take less than a day to compromise an IT system or application.

JDS’ application security testing service assesses your application’s controls, makes recommendations to remedy identified issues, and removes factors that could aid an attack upon your business.  Our security testing encompasses applications and environments specialising in web, mobile, and cloud applications.

JDS provides our security testing clients with reports containing both technical definitions of the security issues located and, importantly, the high-level business context for the vulnerability. This includes scenario modelling in easily digestible language, enabling your business to make appropriate and timely business decisions and reduce your organisational risk profile.

Learn more about our comprehensive approach to security analysis and testing.

Real-world crises: when things go wrong

A rigorous approach

JDS adopts the Open Web Application Security Project (OWASP) methodology for application security testing. This ensures all web, mobile, and cloud applications undergo a comprehensive assessment.

Posted by Amy Clarke in Tech Tips, 0 comments
What’s new in ServiceNow for 2017?


ServiceNow has released its latest version, Istanbul, which introduces a game-changing innovation for the platform—the Automated Test Framework, which provides regression testing.

The Automated Test Framework will drastically simplify future upgrades, as customers will be able to test releases prior to upgrading quickly, efficiently, and with a high level of confidence. This will allow customers to stay in sync with the latest releases from ServiceNow without fear of broken functionality adversely impacting their business processes.

What is the Automated Test Framework?

The Automated Test Framework is a module within ServiceNow that allows customers to define their own tests against any other ServiceNow module, including bespoke modules developed in-house or by third-parties such as JDS.

Tests can be reused in test suites to provide comprehensive coverage of a variety of business processes. In this example, a test has been developed to verify that critical incidents are properly identified in the upgraded version. Notice how each step contains a detailed description of what the test will do in plain English.

Be sure to always include a validation step!

Opening a form, setting values, and successfully submitting that form is not enough to ensure a test is successful. The key to successful regression testing is to have a point of validation, or confirmation that the test has produced the expected outcome. In this case, that means validating that the priority of the incident has been set to critical.

When defining test steps, ServiceNow will walk you through a wizard that will show you all the possible automated steps that can be undertaken. Note that ServiceNow will automatically insert this next step after the last step, unless you specify otherwise.

The automated framework can open new or existing records and undertake any action an end-user would normally complete.

Also, note how it is possible to run automated regression tests directly against the server and not just through a regular ServiceNow form.

Be careful when defining server tests as, ideally, automated regression tests should be run through a form, mimicking a transaction undertaken by an actual user. This ensures UI policies and actions are all taken into account in the test results. Tests conducted directly against the server will include business rules, but not UI scripts/actions, which may lead to inconsistent results.

Server tests are ideal for testing integration with third-party systems via web services.
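
As a rough sketch of what a server-side validation step can look like, here is the kind of script that could back the priority check described earlier. This assumes the standard "Run Server Side Script" step template (the incident number used is illustrative, and the exact step API may vary between releases); the script returns true or false to pass or fail the step:

(function(outputs, steps, stepResult, assertEqual) {
    // Look up the incident created earlier in the test (the number here is illustrative)
    var gr = new GlideRecord('incident');
    if (!gr.get('number', 'INC0010001')) {
        stepResult.setOutputMessage('Incident not found');
        return false; // fail the step
    }
    // Validation: confirm the priority was set to 1 - Critical
    if (gr.getValue('priority') !== '1') {
        stepResult.setOutputMessage('Expected priority 1 - Critical, found ' + gr.getValue('priority'));
        return false;
    }
    stepResult.setOutputMessage('Priority correctly set to Critical');
    return true; // pass the step
})(outputs, steps, stepResult, assertEqual);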

Test results include screenshots so you can see precisely what occurred during your test run. Also, notice that the test has been run against a particular browser engine, and the overall time for the entire process has been captured.

 

 

The individual runtime of each step can be eyeballed from the start time, although it is possible to have this automatically calculated if need be.

Please note, whether you are creating a new record or modifying an existing record, the results of your automated regression test will NOT become part of that particular table’s records. For example, if you insert a new incident, you will see screenshots of the new incident complete with a new incident number, but if you go to the incident table, it won’t be present. If you update an existing incident, the update will be visible in the test results, but the actual record in the table will not change.

When it comes to best practices, JDS recommends structuring your tests carefully to ensure your coverage is comprehensive and represents an end-to-end business transaction, taking into account the impact on workflows.

If you have any questions about Automated Test Frameworks in ServiceNow, please contact JDS.

 

Posted by Peter Cawdron in ServiceNow, Tech Tips, 0 comments
Learning Analytics


CAUDIT Analysis Part 3

View Part 1 Here
View Part 2 Here

Since 2006, the Council of Australian University Directors of Information Technology (CAUDIT) has undertaken an annual survey of higher education institutions in Australia and New Zealand to determine the Top Ten issues affecting their usage of Information Technology (IT).

To help our customers get the most out of this report, JDS has analysed the results and is offering a series of insights into how best to address these pressing issues and associated challenges.

Issue 8: Learning Analytics


“Supporting improved student progress through establishing & utilising learning analytics.”

Every interaction from students, staff and researchers, through any of an institution’s systems, is a data point of interest. These points of interest are a wealth of information that can be used to improve services and experiences institution wide.

Numerous studies show the importance of basing decisions on quantifiable measurements. It allows you to target, change, and measure outcomes. For example, measuring student performance beyond the standard results-oriented structure can allow an institution to intervene earlier in a student’s progress. One study showed that a university was able to intervene as early as two weeks into a semester (Sclater, 2016). They identified students who were at high risk of dropping out and helped these students adjust their program.

These areas may be outside of the typical ICT purview, but are becoming increasingly ICT driven. ICT tends to own the analytics platform and should be responsible for ensuring it meets the institution’s requirements.

CAUDIT Ranking Trend: 2014 – | 2015 #17 | 2016 #8
 

Challenges and Considerations

Challenge: Manage increased data inflow from a vast array of sources across campuses.
Students and teachers can bring a multitude of devices for use on campus, including laptops, tablets, smartphones and more. They can also access institutional facilities, applications, systems, and data whether at university, at work, or at home. These are just a few examples of different institutional data points. Combine all of these devices across a variety of locations and it is easy to see how the number of data points is increasing exponentially.

Each data point, when analysed in the right manner, holds valuable information – information that determines how an institution should respond or intervene to ensure it is providing the right solution to the matter at hand. In order to gain this value, an institution must first understand how to manage many different data inputs.

JDS has the experience in building and implementing platforms that can provide a universal solution – a solution that can collect, index and analyse any machine data at massive scale, without limitation. Institutions can then search through this data in real time, correlate and analyse with the benefit of machine learning to better understand institutional trends and ultimately make better informed decisions.

Challenge: Cross-collaboration between institutional stakeholders (e.g. students, teachers, administrators)
An institution such as a university has a wide variety of internal and external stakeholders, all with the need to share information between one another. This information also often originates from a large variety of sources.

The challenge here is making this data from students, teachers, researchers and administration staff accessible to each other in a controlled, normalised manner. These stakeholders will also have different questions to ask of the same data, each with their own requirements. ICT does not necessarily have, nor require, a deep understanding of the content of this data, but it can provide a single platform for collaboration. A common platform can address these issues, allowing cross-collaboration between stakeholders while improving an institution’s products and services.

With the help of JDS, ICT can provide such a platform, making it easy to generate dashboards, reports, and alerts. JDS can enable the institution to not only ask various questions of the same data but also create visualisations and periodic reports to help interpret and use the information gleaned.

JDS has implemented such solutions across various Australian organisations, unleashing the powerful data collection functionalities of a variety of industry-leading monitoring tools. We have our own proprietary solution, JDash, purpose-built to help organisations collaborate to understand service impact levels, increase visibility into IT environments, allow efficient decision making, and provide cross-silo data access and visualisations. JDash is also fully customisable to various institutional needs and requirements.

Challenge: Using analytics for early intervention with student difficulties.
According to a recent article in The Australian, the attrition rate at some universities is above 20%. Numerous factors contribute to this, and many of them can be reduced through early intervention. Institutions can make better-informed decisions if they have the right data at the right time.

The CAUDIT report highlighted the need to identify student performance early. This allows an institution to intervene by either advising a student of a different direction or providing the right resources to handle their difficulties.

In order to provide this level of insight into student drop-out rates and early intervention, institutions need to build and manage a learning analytics platform. A learning analytics platform collects, measures, analyses, and reports data on the progress of learners within context.

JDS can assist institutions with building a centralised, scalable, and secure learning analytics platform. This platform can ingest the growing availability of big datasets and digital signals from students interacting with numerous services. From the centralised platform, institutions can interpret the information to make decisions on intervention and ultimately reduce attrition rates of students.

About JDS

With extensive experience across the higher education sector, JDS assists institutions of all sizes to streamline operations, boost performance and gain competitive advantage in an environment of tight funding and increasing student expectations. With more than 10 years’ experience in optimising IT service performance and availability, JDS is the partner of choice for trusted IT solutions and services across many leading higher education institutions.

 


 

Posted by Joseph Banks in Higher Education, News, Tech Tips, 0 comments
Educational Technology


CAUDIT Analysis Part 2

View Part 1 Here
View Part 3 Here

Since 2006, the Council of Australian University Directors of Information Technology (CAUDIT) has undertaken an annual survey of higher education institutions in Australia and New Zealand to determine the Top Ten issues affecting their usage of Information Technology (IT).

To help our customers get the most out of this report, JDS has analysed the results and is offering a series of insights into how best to address these pressing issues and associated challenges.

Our second insight featured here discusses the importance of educational technology in maximising the student experience and helping universities compete in the digital age.

 

Issue 4: Educational Technology


“Supporting the use of innovative technology in teaching and learning.”

ICT in higher education is subject to the same disruption that’s affecting other industries. The difference is, universities are also at the frontline of generational disruptions including ‘Digital Natives’.

Such generational and technological shifts bring a primary challenge to institutions like yours: how do you maintain an agile technology platform and reliable services that can transcend technology shifts and allow students to mix and match their own personal requirements with what the university has to offer?

Ultimately, universities must offer emerging technologies and services to students that can be used by both digital and non-digital natives. What doesn’t change however, is that both groups are seeking tools that are easy to use, fast to access and always available.

CAUDIT Ranking Trend: 2014 – | 2015 – | 2016 #4
 

Challenges and Considerations

Challenge: Embrace emerging technologies throughout learning spaces as well as virtual and remote laboratories.

The dramatic rise of the Bring Your Own Device (BYOD) phenomenon in both the public and private sector presents both challenges and opportunities. Today, organisations and education institutions alike are struggling with deploying technologies that are in sync with what people use in their everyday lives.

The key to resolving this challenge is to have a core platform that offers, tracks, and manages these new services. To achieve this, new technologies need to be robustly aligned with current processes and existing tools and apps. Unfortunately, we’re seeing institutions re-invent the wheel just to get something out there to meet student expectations.

Having an agile platform in place will allow you to standardise new product offerings while at the same time accelerating the adoption of emerging technologies. This platform should facilitate the on-boarding of technologies with minimal effort – backed by orchestration and automation capabilities. In turn, this will give your institution a clear understanding of how technologies are being used and the agility to shift rapidly into new areas as required.

Challenge: Provide reliable and robust systems for learning that are interactive, collaborative and personalised.

So you’ve got the latest and greatest technologies…but they’re not always working. What now?

The objective here is that, on top of facilitating the use of new technologies through a platform that allows you to distribute, track, and manage them, you need to ensure they are operating efficiently.

Bear in mind that ‘fast to deliver’ does not always translate to ‘fast to use’. Systems that are deployed need to be reliable so they don’t create more unplanned work.

Unfortunately, unplanned work will impact your institution. For example, full-time employees will be removed from project work to concentrate on fixing issues in production. This has the effect of slowing down the process of adopting and offering emerging technologies – making ICT absent from the innovation discussion.

JDS provides numerous services to ensure the performance of your services – ranging from testing and diagnosing to monitoring. In cases where there are performance issues, we can help you diagnose and narrow down the root cause. Once you proceed to production, JDS can give you visibility into the performance and availability of your systems so you can keep an eye on how your services are being used, and address any issues early – reducing unplanned work.

About JDS

With extensive experience across the higher education sector, JDS assists institutions of all sizes to streamline operations, boost performance and gain competitive advantage in an environment of tight funding and increasing student expectations. With more than 10 years’ experience in optimising IT service performance and availability, JDS is the partner of choice for trusted IT solutions and services across many leading higher education institutions.

 


 

Posted by Joseph Banks in Higher Education, News, Tech Tips, 0 comments
Static Variables and Pointers in ServiceNow


When scripting in ServiceNow, be careful how you create your variables. If you refer to an object such as a GlideRecord field/column, you are creating a pointer to the ever-changing current value rather than capturing a static value at a point in time. Perhaps an example will help explain…

In our fictitious example, Ranner is the head of marketing and wants to know the first and last person with a first name starting with “Ra…” Obscure, I know, but it will demonstrate the point.

So we develop a script we can test using the Background Script module in ServiceNow to check a subset of records to make sure our logic is correct. We grab the first value and the last after cycling through the table.

var userGlide = new GlideRecord('sys_user');
userGlide.addEncodedQuery('first_nameSTARTSWITHRA');
userGlide.orderBy('first_name');
userGlide.setLimit('10');
userGlide.query();

var i = 0;
var firstUser, lastUser;

while (userGlide.next()) {

    if (i == 0) {
        firstUser = userGlide.first_name;
        gs.info('The firstUser is ' + firstUser);
    }

    gs.info('Iteration ' + i + ' current user = ' + userGlide.first_name + '\t firstUser = ' + firstUser);

    if (!userGlide.hasNext()) {
        lastUser = userGlide.first_name;
        gs.info('The last user is ' + lastUser);
    }
    i++;
}
gs.info('-----------------------------------------------');

gs.info('First user is ' + firstUser);
gs.info('Last user is ' + lastUser);

If you look at the output when this runs as a Background Script, you can see the results are inconsistent. The firstUser variable changes every time the GlideRecord moves to a new record, even though it was set only once at the beginning.

*** Script: The firstUser is Ra
*** Script: Iteration 0 current user = Ra firstUser = Ra
*** Script: Iteration 1 current user = Ra-cheal firstUser = Ra-cheal
*** Script: Iteration 2 current user = Raamana firstUser = Raamana
*** Script: Iteration 3 current user = Raamon firstUser = Raamon
*** Script: Iteration 4 current user = Rabi firstUser = Rabi
*** Script: Iteration 5 current user = Rabia firstUser = Rabia
*** Script: Iteration 6 current user = Rabin firstUser = Rabin
*** Script: Iteration 7 current user = Racahael firstUser = Racahael
*** Script: Iteration 8 current user = Rachael firstUser = Rachael
*** Script: Iteration 9 current user = Rachael firstUser = Rachael
*** Script: The last user is Rachael
*** Script: -----------------------------------------------
*** Script: First user is Rachael
*** Script: Last user is Rachael

The reason for this is that although we defined our variable at the start, all we did was assign a pointer to the GlideRecord field, not a static value. Every time the GlideRecord moves onto a new record, so does the value within our variable.
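
To see the pointer behaviour in isolation, here is a minimal sketch (it assumes at least two sys_user records exist):

var gr = new GlideRecord('sys_user');
gr.setLimit(2);
gr.query();
gr.next();
var byReference = gr.first_name; // a GlideElement object that follows the cursor, not a string
gr.next();
gs.info(byReference); // prints the *second* record's first name, not the first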

The solution is to make sure we’re only grabbing a value, and not an object, from the GlideRecord by adding .getValue() to our script:

var userGlide = new GlideRecord('sys_user');
userGlide.addEncodedQuery('first_nameSTARTSWITHRA');
userGlide.orderBy('first_name');
userGlide.setLimit('10');
userGlide.query();

var i = 0;
var firstUser, lastUser;

while (userGlide.next()) {

    if (i == 0) {
        firstUser = userGlide.getValue('first_name');
        gs.info('The firstUser is ' + firstUser);
    }

    gs.info('Iteration ' + i + ' current user = ' + userGlide.first_name + '\t firstUser = ' + firstUser);

    if (!userGlide.hasNext()) {
        lastUser = userGlide.getValue('first_name');
        gs.info('The last user is ' + lastUser);
    }
    i++;
}
gs.info('-----------------------------------------------');

gs.info('First user is ' + firstUser);
gs.info('Last user is ' + lastUser);

Now our results are correct:

*** Script: The firstUser is Ra
*** Script: Iteration 0 current user = Ra firstUser = Ra
*** Script: Iteration 1 current user = Ra-cheal firstUser = Ra
*** Script: Iteration 2 current user = Raamana firstUser = Ra
*** Script: Iteration 3 current user = Raamon firstUser = Ra
*** Script: Iteration 4 current user = Rabi firstUser = Ra
*** Script: Iteration 5 current user = Rabia firstUser = Ra
*** Script: Iteration 6 current user = Rabin firstUser = Ra
*** Script: Iteration 7 current user = Racahael firstUser = Ra
*** Script: Iteration 8 current user = Rachael firstUser = Ra
*** Script: Iteration 9 current user = Rachael firstUser = Ra
*** Script: The last user is Rachael
*** Script: -----------------------------------------------
*** Script: First user is Ra
*** Script: Last user is Rachael

When working with GlideRecords, be sure to use getValue() to avoid headaches with pointers.
Posted by Peter Cawdron in Tech Tips, 1 comment
Student Success Technologies


CAUDIT Analysis Part 1

View Part 2 Here
View Part 3 Here

Since 2006, the Council of Australian University Directors of Information Technology (CAUDIT) has undertaken an annual survey of higher education institutions in Australia and New Zealand to determine the Top Ten issues affecting their usage of Information Technology (IT).

To help our customers get the most out of this report, JDS has analysed the results and is offering a series of insights into how best to address these pressing issues and associated challenges.

Here we take a look at the first technology-related issue.

 

Issue 1: Student Success Technologies


“Improving student outcomes through an institutional approach that strategically leverages technology.”

For the third year in a row, “student success technologies” has been identified as the highest priority issue within the higher education sector in Australia and New Zealand. In addition to having the appropriate technologies in place, focus is turning to the analysis of the vast volume of data that most institutions collect with these technologies. Doing this effectively and in real time can assist in the decision-making process, ensuring appropriate actions are taken and limited resources are used to their maximum benefit.
In many cases, institutions have already invested in technologies to support student success, but often these are not being leveraged to their fullest potential, or have not been integrated with other available technologies to provide an improved solution to support student, teacher, and researcher success. If institutions persist with this approach, the attraction and benefit of new and emerging technologies may not be fully realised, because they are built on an underperforming foundation.

 

It is one thing for institutions to provide technologies that support student success, but it is all for nothing if those technologies are not accessible, available, or performing at the expected levels.

 

CAUDIT Ranking Trend: 2014 #1 | 2015 #1 | 2016 #1
 

Challenges and Considerations

Challenge: Analysing multiple disparate solutions, each providing insight into different aspects of student, faculty, and administration performance.

Many institutions today have a range of applications supporting different aspects of the student experience (for example: student learning portals, ERP and student administration systems, email and document management, BYOD enablers, etc). While consolidating this information and interpreting it through intelligent dashboards can help display the data, it won’t be enough to find correlations and provide beneficial analysis.

JDS can assist with designing and implementing solutions to combine real-time data from multiple disparate sources to provide a unified view via dashboards, analyse trends and interpret complex data relationships to improve the overall student experience.

Challenge: Ensuring that investments made in technology enablers and new emerging technologies are actually being used and are being used effectively.

It is one thing for an institution to commission supporting technologies and deploy them for use; however, it is all for nothing if those systems are either infrequently available when required or performing so poorly that they effectively become unusable. IT departments and vendors alike are often guilty of assuming that the way a service has been designed is the way the service will actually be used. Understanding how services are actually being consumed, both in real time and through historic analysis, can lead to proactive modification of essential services to improve effectiveness.

JDS has a rich history of providing and supporting End-User-Monitoring (EUM) and Application Performance Management (APM) solutions that improve service performance and availability and provide valuable insight into the effectiveness of delivered services.

Challenge: Using data as information to assist with early intervention to ensure student academic completion.

Institutions have access to large volumes of data on their student population and their progress, both current and historic, often maintained in multiple, disparate applications and technologies. Combining this internally sourced data with additional external sources (e.g. QILT) improves identification of students requiring additional assistance or those likely to require future assistance.

Being able to combine, ‘slice’, and ‘dice’ this information – across dimensions of discipline, faculty, geographic location, learning methods or any other demographic – can provide valuable insight into where to invest limited support resources to maximise the likelihood of student retention and success.

JDS can assist by leveraging industry best-of-breed technologies to interrogate, consolidate and analyse these disparate data silos to deliver information that can be acted on to improve overall graduation rates.

Challenge: Managing multiple arrangements and contracts with service and vendor providers

With new technologies and cloud point solutions, institutions face the challenge of managing multiple relationships between internal service providers and external partners and vendors. Services being delivered require monitoring and the technologies that underpin them will necessitate upgrades, patches and monitoring of SLAs. New processes need to be created and maintained. Help desks require additional training and knowledge base articles to be created.

 

JDS can assist institutions through the identification of routine processes that can be automated and ensure that new applications and services are integrated into their current service management solution.

 

About JDS

With extensive experience across the higher education sector, JDS assists institutions of all sizes to streamline operations, boost performance and gain competitive advantage in an environment of tight funding and increasing student expectations. With more than 10 years’ experience in optimising IT service performance and availability, JDS is the partner of choice for trusted IT solutions and services across many leading higher education institutions.

 


 

Posted by Joseph Banks in Higher Education, News, Tech Tips, 0 comments
Citrix and web client engagement on an Enterprise system


JDS were engaged by a leading superannuation firm to conduct performance testing of their enterprise applications migrating to a new platform, as part of a merger with a larger superannuation firm. The larger firm was unaware of its application performance needs, and until recently performance had not always been a high priority during the test lifecycle.

JDS were brought in to provide:

  • Guidance on performance testing best practice
  • Assistance with performance testing applications before the migration of each individual super fund across to the new platform
  • Understanding the impact on performance for each fund prior to migration

During the engagement, the consultants faced multiple challenges. Listed below are a few of the key challenges encountered, along with general tips for performance testing Citrix.

Synchronisation

You should have synchronisation points prior to ANY user interaction (i.e. a mouse click or keystroke). This will ensure the correct timing of your scripts during replay. You don’t want to be clicking on windows or buttons that don’t exist or haven’t completely loaded yet, e.g.:

ctrx_sync_on_window("Warning Message", ACTIVATE, 359, 346, 312, 123, "", CTRX_LAST);
ctrx_key("ENTER_KEY", 0, "", CTRX_LAST);

Screen resolution and depth

Set your desktop colour settings to 16bit. A higher colour setting adds unneeded complexity to bitmap syncs, making them less robust. Ensure that the display settings are identical for the controller and all load generators. Use the “Windows Classic” theme and disable all the “Effects” (Fading, ClearType, etc.).

Recording

Your transactions should follow the pattern of:

  • Start Transaction
  • Do something
  • Synchronise
  • Check that it worked
  • End Transaction

If you synchronise outside of your transaction timers, then the response times you measure will not include the time it took for the application to complete the action.

Runtime settings

JDS recommends the following runtime settings for Citrix:

Logging

  • Enable Logging = Checked
  • Only send messages when an error occurs = Selected
  • Extended logging -> Parameter substitution = Checked
  • Extended logging -> Data returned by server = Checked

Citrix 1

 

Think Time

Think time should not be needed if synchronisation has been added correctly

  • Ignore think time = Selected

Citrix 2

Miscellaneous

  • Error Handling -> Fail open transactions on lr_error_message = Checked
  • Error Handling -> Generate snapshot on error = Checked
  • Multithreading -> Run Vuser as a process = Selected

Citrix 3

ICA files

At times you may need to build your own ICA files. Create the connection in the Citrix program neighbourhood. Then get the wfclient.ini file out of C:\Documents and Settings\username\Application Data\ICAClient and rename it to an .ica file. Then add it to the script with files -> add files to script. Use the ICA file option for BPMs/load generators over the “native” VuGen Citrix login details for playback whenever possible as this gives you control over both the resolution and colour depth.

Citrix server setup

Make sure the MetaFrame server (1.8, XP, 3, or 4) is installed. Check the manual to ensure the version you are installing is supported. Citrix sessions should always begin with a new connection, rather than picking up from wherever a previously disconnected session left off, which will most likely not be where the script expects it to be.

Black screen of death

Black snapshots may appear during record or replay when using Citrix Presentation Server 4.0 and 4.5 (before Rollup Pack 3). As a potential workaround, on the Citrix server select Start Menu > Settings > Control Panel > Administrative Tools > Terminal Services Configuration > Server Settings > Licensing and change the setting Per User or Per Device to the alternative setting (i.e. If it is set to Per User, change it to Per Device and vice versa.)

Lossy Compression

A script might play back successfully in VuGen on the load generator; however, when running it in a scenario on the same load generator, it could fail on every single image check. This is probably a result of lossy compression, so make sure to disable it on the Citrix server.

Script layout

Put clean-up code in vuser_end to close the connection if the actions fail. Don’t put login code in vuser_init. If the login fails in vuser_init, you can’t clean up anything in vuser_end because it won’t run after a failed vuser_init.

 

JDS found performance issues with the applications during the performance tests; however, these issues leaned more towards functional performance than volume. These issues were still investigated to provide an understanding of why the applications were experiencing performance problems.

The performance team then worked with action teams to assist with possible performance resolutions, for example:

  • Database indexing
  • Improvements to method calls
  • Improving Database queries
Posted by Lionel Lim in Financial Services, Tech Tips, 0 comments
Understanding Outbound Web Services in ServiceNow


ServiceNow has the ability to manage both inbound and outbound web services, but as they’re handled slightly differently, they can be confusing. The terms “inbound” and “outbound” do not describe the type of HTTP method being used, but rather the point of origin for the web service. For example, both inbound and outbound services can use GET and POST.

Inbound web services are designed to provide third parties with the ability to retrieve (GET) or create (POST) data in ServiceNow, while outbound web services allow ServiceNow to initiate a transaction with a third party (also using either GET or POST, etc.)

Although they are closely related, inbound and outbound web services are handled differently through the ServiceNow GUI even though they’re essentially the same under the covers. Most developers are comfortable with inbound web services because ServiceNow provides a walk-through wizard for them, but they struggle with outbound services, even though these can be handled in the same way.

If a third-party wants to send information to ServiceNow using web services, then an inbound web service will allow them to POST that information. By directing incoming web services to a staging table, you can massage and transform incoming information into a format that suits your ServiceNow instance and avoid duplicates.

snow2
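
Duplicate handling like this is typically done in the transform map attached to the staging table. As a minimal sketch (the field names here are hypothetical), an onBefore transform script can simply skip rows that already exist:

// onBefore transform map script: runs for each staging row before it is written to the target table
(function runTransformScript(source, map, log, target) {
    var existing = new GlideRecord('incident');
    existing.addQuery('correlation_id', source.u_correlation_id); // u_correlation_id is an illustrative staging column
    existing.query();
    if (existing.hasNext()) {
        ignore = true; // tell the transform engine to skip this duplicate row
    }
})(source, map, log, target);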

If ServiceNow wants to retrieve information from a third party using web services, then an outbound web service will allow ServiceNow to GET that information. In the same way as inbound web services, outbound GET web services should write to a staging table so the incoming data can be sanitised and duplicates avoided.

 

snow3

 

In both scenarios, the end result is the same—ServiceNow is populated with information from a third-party.

This tutorial assumes you have a basic knowledge of how to create applications in ServiceNow as we’re going to make an outbound web service to retrieve a weather report for use in ServiceNow.

To complete this tutorial, you will need to sign up for a free account with http://openweathermap.org/api and subscribe to the Current weather data.

Step One: Create an outbound web service

The external web service made available by Open Weather is:

http://api.openweathermap.org/data/2.5/forecast/city?q=Brisbane,AU&APPID={yourAppID}

If you sign up for an account with Open Weather and view the results in a browser, you’ll get a wall of JSON data in response, but if you copy and paste this into Notepad++ using the JSON formatter plug in, you’ll see there’s structure to the data. This is important, as this will allow us to extract information later on.

snow4

As you can see, the temperature is in Kelvin rather than either Celsius or Fahrenheit, as 294 K equates to a balmy 21°C or 70°F.

Also, the date time (dt) is using a numeric format known as Epoch time, so we’re going to need to transform the results from our outbound web service before they’re going to be useful.

Creating an outbound web service is easy enough: simply provide the web service with a name like outbound_get_weather and the endpoint URL, and tell ServiceNow what to accept and what the content type will be.

http://api.openweathermap.org/data/2.5/forecast/city

show5

Under HTTP Methods further down the page, you’ll see get, delete, put, and post. Delete all but get, as all we want to do is get information from Open Weather.
Edit the HTTP method get and add the query parameters for “q” and “APPID.” Don’t forget to set the order in which you want these parameters to occur.

  • q is the query parameter for the city, so we’re going to set this to be a variable rather than a static value. Make this ${q}
  • APPID is a static value, so this can be hard coded to match the application ID you got from Open Weather

Also, notice I’ve added a Variable Substitution and set this to be Sydney,AU, which allows me to click test and see fresh results for Sydney.

snow6

This is important as when we get to our scripted REST web service, we want to be able to change q to refer to other cities as appropriate.

Step Two: A Tale of Two Tables

Next, we’ll create a regular ServiceNow table called current weather  with three columns to capture the data from our web services. This is the table that will be referenced by users and/or other applications.

snow7

Our second table is a staging table. This is where the raw results of our web service will reside.

When you make an inbound web service, ServiceNow automatically creates a staging table, but when it comes to outbound web services, you need to create a staging table for yourself and manually assign a transform map.

Be careful with this step. You must extend the Import Set Row table to create a valid staging transform table for your web service.

snow8

Notice how our data types in the staging table match those we saw in the web service results, so the epoch date is being stored as an integer.

Step Three: Transform

Now we can set up the transformation from our staging table to the final table in ServiceNow. Under System Import Sets click on Create Transform Map and build the following transform.

snow9

Initially, there’s only one source field we can match against the target field; for the others, we’ll need to use transform scripts.

To convert Kelvin to Celsius, add a new Field Map as follows, using this script…

var Celsius = source.temperature - 273.15;
return Celsius;

snow10

To convert the epoch date to a date ServiceNow will recognize, add another field map using…

snow11

var regularDate = new Date(0); // 0 forces JavaScript to use an epoch date
regularDate.setUTCSeconds(source.weather_date_in_epoch);
var formattedDate = regularDate.getFullYear() + '-' + (regularDate.getMonth() + 1) + '-' + regularDate.getDate() + ' ' + regularDate.getHours() + ':' + regularDate.getMinutes() + ':' + regularDate.getSeconds();
return formattedDate;

When you’re finished, your transform map should appear as…

snow12

Step Four: Tying it all together

We’re going to create a scripted REST API to bring everything together.

snow13

The resource we’re developing here is going to be called outbound_request_for_weather and will require some scripting, but the basic logic is pretty simple.

snow14

The logic of our outbound request for the weather is…

  • Time the transaction and record that in the application log
  • Use the ServiceNow RESTMessageV2 library to undertake the web service
  • Delete any old records
  • Set a custom parameter to grab the weather for a specific location
  • Retrieve the results from the response body and add these to the staging table
  • Our transform map will then run automatically in the background and populate the actual table with results

Here’s the script I used.

(function process(/*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {

    var requestBody, responseBody, status, sm;
    try {
        // Time the overall transaction so there's a record of how long our outbound web service takes
        var startTimer = new Date();

        sm = new sn_ws.RESTMessageV2('outbound_get_weather', 'get');
        //sm.setBasicAuth('admin', 'admin');
        sm.setStringParameter('q', 'Townsville,AU');
        sm.setHttpTimeout(10000); // In milliseconds. Wait at most 10 seconds for a response from the HTTP request.

        var getresponse = sm.execute();
        responseBody = getresponse.getBody();

        // Clear out old records
        var currentweather = new GlideRecord('x_48036_weather_current_weather');
        currentweather.query();
        currentweather.deleteMultiple();

        var gr = new GlideRecord('x_48036_weather_staging_current_weather');
        gr.query();
        gr.deleteMultiple();

        // Convert the response into JSON
        var JSONdata = new global.JSON().decode(responseBody);

        // Loop through the records and add to the staging table
        for (var i = 0; i < JSONdata.list.length; i++) {
            gr.setValue('weather_date_in_epoch', JSONdata.list[i].dt);
            gr.setValue('temperature', JSONdata.list[i].main.temp);
            gr.insert();
        }

        // Post a message to the application log noting how long this activity took
        var endTimer = new Date() - startTimer;
        gs.info('***Weather updated in ' + endTimer + ' msec');
    } catch (ex) {
        responseBody = 'An error has occurred';
        status = '500';
    }
})(request, response);

Notice how the JSONdata.list[i].main.temp matches the structure of web service results we saw earlier.

If we look at the current weather table, we can see our results.

snow15

If we look at the application log we can see how long this activity took.

snow16

This script can now be set up under the System Definition as a Scheduled Job and run as often as needed.
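
The Scheduled Job itself just needs a script body that repeats the refresh logic. Here is a condensed sketch reusing the message and table names from above (in practice you might move this logic into a shared Script Include so the scripted REST API and the job call the same code):

// Scheduled Script Execution: refresh the weather staging table; the transform map populates the final table
var sm = new sn_ws.RESTMessageV2('outbound_get_weather', 'get');
sm.setStringParameter('q', 'Brisbane,AU');
sm.setHttpTimeout(10000);

var JSONdata = new global.JSON().decode(sm.execute().getBody());

var gr = new GlideRecord('x_48036_weather_staging_current_weather');
gr.query();
gr.deleteMultiple();

for (var i = 0; i < JSONdata.list.length; i++) {
    gr.setValue('weather_date_in_epoch', JSONdata.list[i].dt);
    gr.setValue('temperature', JSONdata.list[i].main.temp);
    gr.insert();
}
gs.info('Weather staging table refreshed with ' + JSONdata.list.length + ' rows');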

In summary, we built our outbound web service using the same components as found in an inbound web service.

snow17

One common question that arises when dealing with web services is, “What approach should I use?” The answer is… think about who is initiating the web service and what you want to accomplish.

snow18

Posted by Peter Cawdron in Tech Tips, 0 comments
How to solve SSL 3 recording issues in HPE VuGen


With web application security becoming more important, you may find servers refusing to accept the SSL 3.0 protocol due to security vulnerabilities such as POODLE (https://en.wikipedia.org/wiki/POODLE).

Older versions of VuGen will refuse to record the application and display an error page similar to the one below, giving only vague information as to what the problem is.

SSL3a updated

VuGen 12.50 will now show a popup giving a hint where the problem is:

SSL3b

Using Wireshark, it becomes clear that the issue is with the SSL handshake:

SSL3c

Compared with a successful secure handshake recording when using the browser:

SSL3d

By default VuGen has the following “Recording -> Network -> Mapping and Filtering” settings:

SSL3e

The problem is that VuGen will not try later TLS versions after the first handshake has failed, unlike a browser, which will start from the highest TLS version and work down until the server accepts the handshake:

SSL3f

The simple solution is to change the VuGen Recording -> Network -> Mapping and Filtering setting to at least TLS 1.0.

Posted by David Batty in Hewlett Packard Enterprise, Tech Tips, 0 comments