User Acceptance Testing and the Application Lifecycle

A software application can be robust, scalable, easily maintained and easy to use, but if it doesn’t do what the customer needs then it is useless. The success of a new product or software application is at the mercy of how precisely it fulfils the requirements of its users. Even if a company has successfully gone live with an innovative product, if it has failed to ensure that the product meets the requirements of the end users, because its suitability was never properly checked, the consequences are likely to be failure, poor customer experience, brand deterioration and major financial loss from having to identify and fix defects. Despite all this, there is little enthusiasm in the industry for User Acceptance Testing (UAT), and project governance continues to make the repeated mistake of undervaluing the process, whether using Agile or Waterfall methodologies.

Why UAT is Critical to Software Application Projects

Without effective UAT (User Acceptance Testing), the chances of success of a development project are severely diminished. That is why it is such an important part of the delivery process. UAT can be carried out using an agile approach: in Agile, it is not an independent phase, but an iterative collaboration between all the project stakeholders, ensuring continuous test and feedback cycles.

For User Acceptance Testing, a Requirements Subject Matter Expert (Requirements SME), or business domain expert, ensures that the requirements continue, throughout the delivery process, to be correctly understood and documented within the delivery team. These can then be verified by means of UAT. This is a small investment for an organization to make in order to maximize revenue and enhance its brand reputation in the market. It removes unpleasant surprises after release and deployment, and gives governance a far better idea of development progress. It ensures that business processes that are essential to the business, or that carry a risk of financial or reputational loss, are rigorously checked.

User Acceptance Testing (UAT) is a structured testing process that makes sure all user requirements are met in the way the user wants and expects. Does the product follow the company’s business regulations for new registration? Does the vendor product screen display within ten seconds of the user selecting it? Do the online forms support a department’s legal obligations? Does the business flow handle exceptions and alternative flows? These are all valid UAT requirements, which need to be fleshed out meticulously with their corresponding acceptance criteria. The software or product must satisfy these criteria before the customer or user will accept it.

Requirements SMEs, who possess direct knowledge of end-to-end business processes, are a key part of the governance of an IT project. They can manage stakeholders to simulate accurate business scenarios, understand the key system functionalities, lead training efforts, and accurately assess the severity of defects and risks to the business.

They also have a critical role in preparing UAT and designing the overall structure of the tests, by means of a series of concise, unambiguous requirements that form the basis for the test cases. By shepherding real business users through the process of running these tests, UAT reveals the shortcomings of the application under development. This is only effective if there are concise user requirements, which will evolve over time as they are reviewed and refined during the project.

Any application development has to meet its requirements. As well as ensuring that these are all identified and understood, a Requirements SME needs to specify the acceptance criteria in terms that can be understood by the entire team. As soon as an issue or defect is identified, it is fixed or resolved before the product is deployed, thereby saving time and cost.

The Warning Signs

Concentrating on other types of testing, such as unit testing, integration testing or performance tests, at the expense of UAT because of project time constraints

If testing focuses on the technical functionality of the software instead of the complete end-to-end business process, defects will only be discovered after the product has shipped to production, where they are likely to cost at least ten times as much to fix as they would had they been caught earlier in the process

Domain experts and business staff lack the know-how to accurately identify the scenarios, spanning the operation of the whole business process, that need to be tested.

With no expert input to extensively prepare and manage the user acceptance testing process and risk assessment, errors will not be caught before the system goes live, the software application will not behave as the end users anticipate, and the developers will be swamped by change requests from the business at the worst possible time to find and resolve major defects.

Requirements Subject Matter Experts must understand the business and application software requirements and the critical business flows, and be able to competently prepare and validate test data that is realistic and pertinent to the business.

UAT is seen as part of the deployment process rather than development

UAT should be an integral part of the development process, so that development teams get immediate warning of any misunderstanding of the acceptance criteria. This allows teams to make changes, or modify the criteria, at a point in the application lifecycle where it is easy to do so.

There is no clear UAT plan in place

An effective UAT plan needs to detail the key focus areas, include a risk assessment, and prepare the user acceptance test cases for testing the system. These should use real-world test scenarios for the applications, and be detailed enough to support scenario-based testing throughout the UAT process.

There is no Change Control process

Change control is as important as issue tracking in development. It makes all changes visible to the team and ensures that team members can bring their expertise to bear on decisions about any proposed change, so that its impact and risks are understood. If change control interferes with progress, it means that there are too many changes. Complaining about change control is like complaining about the loud noise of a fire alarm when the building is in flames.

UAT defect management is ad-hoc

There must be an effective defect tracking and management process that allows the delivery team to review test progress and metrics with the testing team, determine priorities, and resolve all defects found during UAT. All high-priority defects that would prevent a deployment can then be dealt with quickly.

Testing is done in isolation by specialized testers

UAT involves testers, developers, business representatives, governance, administrators and third parties; it is an aspect of development. If, for example, compliance experts aren’t involved, it is easy to miss a statutory requirement that affects the basic test scenarios. If business representatives aren’t involved, obvious defects can easily be missed. All team members involved in the UAT process need a clear understanding of their roles, responsibilities and tasks. All test needs and models must be discussed with the developers, QA team representatives, database administrators, operations staff and the testing team.

Ambiguity in the definitions of requirements

Ambiguous requirements often lead testers and developers to unknowingly misinterpret them, causing requirement errors to slip through UAT and morph rapidly into defects. These defects surface after implementation, at a time when it costs tenfold to pursue and resolve the same defect. It is therefore important to capture and define these shifting, ambiguous requirements and flesh out meticulous acceptance criteria, so that the intended results are visible to all. Ambiguity comes in a range of forms. These include:

Ambiguous terms: subjective or vague terms that cannot be measured

Conflicting requirements: two or more requirements that contradict each other

Incomplete requirements: missing values, business rules, etc.

Missing requirements: possible requirements that have not yet been defined

Unclear requirements: requirements that can be interpreted in multiple ways

Glossary: a term is not found in the glossary reference document

Grammar, spelling and wording: spelling mistakes, grammar errors and rewording suggestions

Key Steps When Implementing an End-to-End UAT Process

User Acceptance Testing is a challenging part of software delivery. If the process is fudged, it is too easy for buried issues to materialize quickly and escalate to the point that they risk the successful delivery of the development project. Because software delivery relies on being able to change rapidly in response to changes in the business requirements and a better understanding of the business domain, UAT must rapidly change its criteria and scenarios to remain in step. This means that it is a process with its own lifecycle. It can be agile, but it must never lose precision. We will discuss the process in a series of sections.



Planning

The UAT plan should be an accurate description of how to conduct the User Acceptance Test, couched in plain language that assumes the minimum of technical knowledge. The test plan describes the effort and resources, the entry and exit criteria for UAT, scheduling, test scenarios, the test case strategy, risk management and assessment, and the timelines for UAT testing. Because priorities will be assigned to the business requirements, the planning document is used to confirm these priorities with the business, gain feedback and agreement on critical business flows, establish the scope and complexity of the testing, and define the acceptance criteria.

Team members refer to the Test Plan to see how the testing activity will be approached, so it should be kept current at all times and easily accessible to the team. Here is a suggested framework for the plan:

Intended Audience

Write a description of the target audience, to identify the document’s consumers and prevent it from being used inappropriately.

In Scope

Describe the areas, scenarios or functionality that have to be tested and are within the scope of this UAT.

Out of Scope

Outline the high-level functionality, scenarios and areas that are excluded from scope, especially where the intended audience might assume their inclusion. E.g. what aspects, including business requirements, flows and functionality, are not going to be tested in UAT for this project?

Assumptions

List all the conditions that are expected to be true so you can carry out the UAT.

Test Documentation

Test cases, test data, environment setup, test execution, the number of test cycles required, timelines for the cycles, and test scenarios.

Roles and Responsibilities

List all team members, their roles and responsibilities, and their contact details. As an example:
Sophia Segal.
Role: UAT Lead
Responsibilities:

Provides management, supervision and logistics

Agrees on goals and objectives

Obtains resources

Reporting

Champion of UAT interests

Leads and communicates with team

Identifies target user group

Evaluates UAT performance

Deliverables

What documents are necessary, and what are the estimated time frames? What is expected from each document? E.g. UAT scenarios, test cases, test results, defect log, defect management plan, metrics, risk identification.

Environment

What are the environment requirements? E.g. hardware, software, test data, staffing, skills training.
What actions should be followed if an error occurs? Describe the environment in which the UAT will be executed.

Tools

What tools, including software, are going to be used for productivity and support?
Tools for incident management and test script automation, e.g. Mantis, Jira.
How to log in for the UAT process.

Defect Management

When, and to whom, do we report defects? Is there a defect management procedure to follow? Do we have a baseline? How are we going to report these defects? Do we provide screenshots? Do we provide metrics?

Risk Management

Identify the risks that may affect the execution of the UAT, assess their likelihood, quantify their project impact, and detail a mitigation strategy and contingency plan for each risk. Note any known critical risks or vulnerable system areas.

Communication

How are you going to communicate the schedules to the users? How are you going to communicate the defect management process? Will there be regular status meetings or email updates?

Documentation Control

Outline which system repository documents will be saved to. Everyone on the team should be aware of, and adhere to, document version control, to avoid issues such as test cases referencing old versions of requirements, which makes test results inaccurate and traceability confusing.

Entry criteria

What criteria must be met before UAT can begin? E.g. business requirements, user stories or use cases must be available.

UAT Environment

It is usually important to execute the tests in a test environment that simulates the real-world production environment as closely as possible. This means that resources need to be installed, the test environment set up and configured, the hardware required for the testing in place, and suitable test data loaded. The testing environment should be ready before execution begins. Where more than one environment is running, it can be confusing if a defect is discovered but cannot be traced to its environment. It is also important to agree on the data required for testing, and whether scripts will need to be run or UAT will use prepared data sets.

Understanding your target User Group

| Incentive Level | IT Skills Level | Test Complexity | UAT Training Effort |
|-----------------|-----------------|-----------------|---------------------|
| Low             | Low             | Low             | High                |
| Medium          | Low             | Low – Moderate  | Medium – High       |
| Medium          | Medium          | Low – High      | Medium              |
| High            | Low             | Low – Moderate  | Medium – Low        |
| High            | High            | Low – High      | Low                 |

The target users must be engaged if the testing, and the project, are to be successful. There are plenty of influences that can prevent this happening. Users may already be fully occupied in the workplace; their own jobs might be jeopardized by a successful introduction of the application; or they might not be convinced of the importance of the end product, and so lack the desire or incentive to collaborate. Alternatively, they may not have the IT skills to accomplish the test scenarios, and will need some training. In the past, I have characterized users and grouped them in terms of their computer skills, incentive level and the training effort required, and used these groupings to plan training and engagement. It is important to gauge users’ strengths, weaknesses and willingness to follow direction, to be aware of key user roles, and to strike the right balance of IT skills and training effort.

User engagement and transparency can be increased by collaborating with users to flesh out their critical business concerns and devoting UAT effort to these key areas, by developing additional step-by-step scenarios, or by collaboratively walking through process pain points in detail.

I cannot stress enough how important it is to engage with users early on. It also helps to identify change agents within the user group, who can verify that business users are well represented in UAT, help UAT run smoothly, and champion its value.

Designing Test Cases

Test cases must be real-life scenarios that the user needs to perform using the application. For composing test cases, I always trace back to the business requirements document. The business requirements should come from a signed-off Business Requirements Document or Use Case, and reference the actual version number of the document as stored in the document repository, to make sure that the most current version is being used. I have also facilitated sessions with users, using a Requirements Traceability Matrix, Functional Specifications, Use Cases and business process workflow diagrams as input for creating test cases. This is also a great technique for requirements validation, scope definition, and identifying task inputs, outputs and triggers, illustrating dependencies and impact analysis for test cases and test scenarios.

Each test case covers a specific scenario of the software, accurately capturing all the steps the user has to carry out with respect to the high-level business flows, and verifies that the software application or product is working as intended.

These test cases can be very detailed, with refined step-by-step instructions, or captured at a higher level. The challenge with high-level test cases is that there are usually multiple business flows through the functionality, including exceptions and alternate flows, which can easily be missed and so not properly tested. It is important to get the right level of granularity, especially if there is a new critical path through the application that needs testing, or many flows that need to be captured.

It is also important to capture with the users not just the ‘happy path’, but also the negative paths that cover exceptions and alternative ways of accomplishing a user task. This ensures that the whole scope of how the user behaves with the system, and what they need to achieve, is considered and logically well thought out.

| Test Case ID | Test Case Name | Test Description | Pre-condition | Step | Execution Steps | Expected Result | Test Data |
|---|---|---|---|---|---|---|---|
| 001 | Login Page | A registered user should successfully log in to the Vendor Launch site. | The user should be registered, with a userid and password | 01 | Open the Vendor Launch login screen | The Vendor Launch login screen is displayed | |
| 001 | | | | 02 | Enter userid | Userid is correct | Correct userid (6-12 characters) |
| 001 | | | | 03 | Enter password | Password is correct | Correct password (4-16 characters) |
| 001 | | | | 04 | Select the Enter button | All mandatory fields are validated and the user successfully logs in to the Vendor Launch site | |

Every test scenario should be documented as a Pass or a Fail. The test case above could have multiple scenarios for exceptions and alternative flows, and additional steps would need to be captured for the following (a test-automation sketch follows the list):

user cannot login

userid or password is incorrect

field is left blank

the user is not registered

uppercase is not permitted for password

password is being displayed

steps on displaying error messages

displaying information messages

exiting the screen

navigating to a different application

printing and reporting capabilities
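To make these scenarios concrete, here is a minimal sketch of how some of them might be captured as automated checks in Python with pytest. The `login` function, the user store, the validation rules and the error strings are all hypothetical stand-ins invented for illustration; a real UAT harness would drive the actual Vendor Launch application instead.

```python
import pytest

# Hypothetical stand-in for the Vendor Launch user store.
REGISTERED_USERS = {"vendor01": "Secret9"}

def login(userid: str, password: str) -> str:
    """Return 'success' or a specific error code, mimicking the app's rules."""
    if not userid or not password:
        return "error: field is blank"
    if not (6 <= len(userid) <= 12):
        return "error: invalid userid"
    if not (4 <= len(password) <= 16):
        return "error: invalid password"
    if REGISTERED_USERS.get(userid) != password:
        return "error: unknown user or wrong password"
    return "success"

# Happy path plus some of the negative paths listed above, one row per scenario.
@pytest.mark.parametrize("userid,password,expected", [
    ("vendor01", "Secret9", "success"),                              # happy path
    ("vendor01", "wrong", "error: unknown user or wrong password"),  # bad password
    ("", "Secret9", "error: field is blank"),                        # blank field
    ("stranger99", "Secret9", "error: unknown user or wrong password"),  # not registered
])
def test_login_scenarios(userid, password, expected):
    assert login(userid, password) == expected
```

Each parametrize row corresponds to one documented scenario, so its Pass or Fail status maps directly back to the test case table.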

Execution

In my last project, beta testing took place in the customer’s environment and involved extensive testing by users. They executed all the steps written in the scenarios and, depending on the results returned, checked either Pass or Fail. This process was repeated for all test cases, and defects were raised for any that failed. The users provided feedback, which led to refinements of the software application being tested. During execution, screenshots were taken to capture how abnormalities were triggered; this helps when testers, developers or business teams are dispersed. I also noted the timestamps at which defects occurred, to cross-reference them with any dependencies in the backend. Executing and documenting the results should happen simultaneously, to avoid losing any data and to consolidate an overview of critical issues and metrics.

Evaluating UAT Results

How many open defects need to be fixed? How many defects need to be re-tested? How many defects are closed? How many defects were captured in the first release? These are some of the analyses required during this phase. The evaluation phase is an extensive process, since each test case needs to be scrupulously analyzed, placed into a quantifiable context and, against the pre-defined acceptance criteria, evaluated to see whether it has been tested adequately.

The quantitative and qualitative data collected was documented and translated into meaningful insights. For example, how many defects are ‘In Test’? If this is higher than expected, should we acquire more resources? Are all the high-priority defects in an open state assigned to one person? If so, we need to find more development resources. How many defects have been raised, and how many have been resolved? I also produced status reports to illustrate high-priority insights, including metrics on test coverage, top-priority defects and defect statistics, and added filter parameters to the reports to allow stakeholders to find the data that was important to them.
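As a simple illustration of where those numbers come from, the status counts behind such a report can be derived directly from the defect log. This is a minimal sketch assuming a hypothetical list of defect records; the field names and values are invented:

```python
from collections import Counter

# Hypothetical defect log entries: (id, status, priority, assignee)
defects = [
    ("D-001", "In Test", "High", "maria"),
    ("D-002", "Open", "High", "maria"),
    ("D-003", "Closed", "Low", "raj"),
    ("D-004", "Open", "High", "maria"),
]

# Status counts for the report.
by_status = Counter(status for _, status, _, _ in defects)
print(by_status)

# Flag the situation described above: all open high-priority
# defects sitting with a single person.
open_high = [d for d in defects if d[1] == "Open" and d[2] == "High"]
assignees = {d[3] for d in open_high}
if open_high and len(assignees) == 1:
    print(f"Warning: all open high-priority defects assigned to {assignees.pop()}")
```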

How do we calculate Metrics?

Defect Detection Percentage (DDP)

DDP = Number of defects found during UAT ÷ Total number of defects found, expressed as a percentage.

DDP measures the value of testing. For example, if 70 defects are found during UAT and a further 30 are found after going live, the DDP is 70 ÷ (70 + 30) = 70%, which is satisfactory. Additionally, DDP can be used to measure the quality of testing when making changes in the UAT cycle. Examples of such changes are switching user groups, switching testing environments, or moving between releases. Always measure the defects before any change, and then after, to measure the quality of testing.
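As a sketch of the arithmetic (not taken from any particular tool), the calculation can be expressed as:

```python
def defect_detection_percentage(found_in_uat: int, found_after_release: int) -> float:
    """DDP = defects found during UAT / total defects found, as a percentage."""
    total = found_in_uat + found_after_release
    return 100.0 * found_in_uat / total

# The example above: 70 defects in UAT, 30 more found after going live.
print(defect_detection_percentage(70, 30))  # 70.0
```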

Completion of Acceptance Criteria

Defining concise acceptance criteria is key to a complete test case. Not only does it clearly illustrate what the user expects from a scenario and how the requirement should be met, it also verifies the quality and scope of a test case scenario and the exit criteria. Completion is calculated by counting all the acceptance criteria, including scenarios, and dividing the number that have been completed with the expected results by that total.
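A minimal sketch of that calculation, assuming each criterion is simply recorded as met or not met; the criteria names are invented for illustration:

```python
def acceptance_completion(criteria):
    """Percentage of acceptance criteria completed with the expected results."""
    return 100.0 * sum(criteria.values()) / len(criteria)

# Hypothetical criteria for the login test case above.
criteria = {
    "valid login succeeds": True,
    "blank field is rejected": True,
    "error message is displayed": False,
}
print(round(acceptance_completion(criteria), 1))  # 66.7
```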

Measuring the cost of a defect

Usually, as an initial step, I list all the defects in a UAT release. As I always have a defect management process in place, for each defect found I can list the time taken, in hours, from identification to closure, and multiply this by the hourly rate. Below is an example for one defect.

Defect 001

| Description of Task | Hours dedicated to defect |
|---|---|
| Defect identified and being analyzed | 1 |
| Defect in progress | 3 |
| Defect resolved and being tested | 2 |
| Documenting of defect | 2 |
| Update testing plan documentation | 1 |
| Update software with fixed defect | 2 |
| Communication of defect to team | 2 |
| Run software with revised, fixed defect | 2 |
| Log results and impact analysis | 3 |
| Total number of hours/effort | 18 |

From the table above, a total of 18 hours was spent on this one defect. This does not take into consideration the hours needed if the defect is not resolved in the first cycle. In this example, the cost of the defect is:
Cost of defect = total number of hours of effort × average hourly rate
From this, you can calculate the total cost of all defects in a release.
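A short sketch of the same calculation; the task names and hours mirror the table, while the hourly rate is an assumed figure for illustration only:

```python
# Hours logged against defect 001, as in the table above.
defect_hours = {
    "identify and analyze": 1, "fix in progress": 3, "resolve and test": 2,
    "document defect": 2, "update test plan": 1, "update software": 2,
    "communicate to team": 2, "re-run fixed software": 2,
    "log results and impact analysis": 3,
}

AVERAGE_HOURLY_RATE = 75  # assumed rate, for illustration only

total_hours = sum(defect_hours.values())   # 18
cost = total_hours * AVERAGE_HOURLY_RATE
print(f"{total_hours} hours -> ${cost}")   # 18 hours -> $1350
```

Summing the per-release figures gives the total cost of defects for that release.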

Managing Defects

Defects caught in UAT arise because the product or software is tested and analyzed from a user viewpoint, based solely on the requirements. Because of this, earlier testing phases tend to miss these defects, which can be critical.

It is important to have a comprehensive process for tracking defects, because the team has to make sure all fixes are complete before the product is delivered to the customer. Defects should be re-tested once fixed: this not only builds user confidence, by making the defect process transparent, but also confirms that you are fixing what has been agreed by all.

For one UAT project I managed, all data feeds loaded successfully into the data warehouse. There was one defect: a data issue where a unique product number format needed some manipulation to conform to the company-wide format. It was resolved quickly by using a systematic defect management process covering defect status, tracking and communication, including recording the release in which each defect was fixed. Any critical defects that remained open were treated as risks and passed on to risk management.

| Status | Description |
|---|---|
| Open | Identified bugs are assigned. If a bug resembles a previous one, I make a reference to it. |
| In Progress | The bug has a description and has been assigned. |
| Resolved | The bug has been resolved and the updated version sent for retest. |
| Re-Test | The bug is being tested again, after repair. |
| Re-opened | The repair has not worked; the bug goes back to status ‘Open’. |
| Closed | The bug is resolved, closed and released. |
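This lifecycle is effectively a small state machine, and encoding the allowed transitions prevents ad-hoc shortcuts, such as closing a defect that was never re-tested. A minimal sketch, with the transition map inferred from the status descriptions above:

```python
# Allowed transitions in the defect lifecycle described above.
TRANSITIONS = {
    "Open": {"In Progress"},
    "In Progress": {"Resolved"},
    "Resolved": {"Re-Test"},
    "Re-Test": {"Closed", "Re-opened"},
    "Re-opened": {"Open"},
    "Closed": set(),
}

def move(status: str, new_status: str) -> str:
    """Advance a defect, rejecting any transition the process does not allow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Cannot move defect from {status} to {new_status}")
    return new_status

status = move("Open", "In Progress")   # fine
# move("Open", "Closed")               # would raise: not a legal shortcut
```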

Requirements Defect Severity classification

An additional technique, illustrated below, measures the severity of requirement defects; it is very useful for assessing critical issues and provides a helpful lens for prioritizing repairs in a software project. These comments are usually noted in requirements documents, or in technical documents such as test cases that originate from them.

| Comment Type | Definition |
|---|---|
| Type 1 | Comments on a possible gap or omission in defining requirements, with no workaround. Type 1 issues can become show stoppers. |
| Type 2 | Comments on adding more information or clarity to requirements. Type 2 issues are not show stoppers, but can cause ambiguity that might confuse developers and testers. |
| Type 3 | Comments on semantics, grammar and other defects of lesser importance. Type 3 issues are usually about the quality of the document. |

Risk Management

Not every project requires comprehensive testing. However, the projects I have managed involved extensive validation, because they contained business functionality that carries a risk of financial loss if it fails or has errors, so risk assessment was required. I reference the master project plan to begin a list of risks. To cover all the potential risks associated with UAT, I perform a risk assessment and document the probability of occurrence, the impact and the mitigation for each risk. It is important to manage and control any unresolved defects and low risks from UAT, as they can easily metamorphose into dangerous risks.

Risk Assessment

Risk assessment involves identifying the critical systems and business areas most vulnerable to risk, so that resources can be assigned where the risk impact is highest. In this scenario, the highest risk impact correlates directly with the defects that potentially have the most serious consequences. For each risk, assign a score covering its probability of occurrence and its impact on the user, and a contingency plan should the risk occur. Compile a risk report, assigning each risk a unique ID and a description, and review it once a week with the team. Any new risks derived from open defects should be raised as they are identified. Other risks can be, and usually are, adjusted in significance as more information about them surfaces.

| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Defects are found late in the UAT cycle | High | High | A defect management plan is at hand to ensure clear communication and resolution. |
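One common convention, assumed here rather than prescribed by the process above, is to score each risk as probability × impact and sort in descending order, so that resources go where the risk impact is highest. A minimal sketch with invented scores:

```python
# Scores on a 1-5 scale; probability x impact gives a simple ranking.
risks = [
    {"id": "R-01", "desc": "Defects found late in the UAT cycle", "prob": 4, "impact": 5},
    {"id": "R-02", "desc": "UAT environment not available",       "prob": 2, "impact": 4},
    {"id": "R-03", "desc": "Changing requirements",               "prob": 3, "impact": 3},
]

# Highest-scoring risks first: these get mitigation attention first.
for r in sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True):
    print(r["id"], r["prob"] * r["impact"], r["desc"])
```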

Types of Risk

Ambiguous requirements

Natural disaster

Third party products and services

New versions of interfacing software

The test team’s ability to use technology required for the test effort.

Working across multiple sites

UAT environment not available

Adoption of any new technologies

Schedule slippage and its impact on the test schedule

Changing requirements

Sign off

The users of the application should be presented with a formal test completion report in which the critical metrics and results from UAT are set out. They then need to decide whether these results are acceptable and in accordance with their criteria and expectations and, if so, approve the UAT. This is also called the Go/No-Go decision. Users should be sent an email for sign-off and, post sign-off, a confirmation of whether the product or software application is ready for deployment.

Sign-off from the users is a key phase in the UAT life cycle. It not only shows that business and IT are in agreement and aligned, and that the results delivered are what the business requires; it also matters because, if the results of UAT are not accepted, or are not what the business really asked for, the project will ultimately deliver zero business value.

Agile or Waterfall

Agile UAT is usually conducted concurrently with development and reviewed with users at each sprint, with any defects resolved before the next iteration. In waterfall, UAT is performed after development, before the product is deployed.

Just as a UAT test plan is key when using a waterfall approach, with agile a comprehensive product backlog provides a dashboard for all the product information you need to share with your team.

Each backlog item, when drilled into, should detail the user story requirements and capture what functionality the user needs. It should also include defined acceptance criteria, signalling when this functionality is complete.
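As a minimal sketch, such a backlog item might be represented like this; the class and field names, the story and the criteria are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """One product backlog item: the story plus the criteria that close it."""
    story: str
    acceptance_criteria: list[str] = field(default_factory=list)

item = BacklogItem(
    story="As a registered vendor, I want to log in so that I can reach my launch page.",
    acceptance_criteria=[
        "A valid userid and password opens the Vendor Launch site",
        "A blank userid or password is rejected with a clear error message",
    ],
)
```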

The important takeaway is that, regardless of the project approach, collaboration with your user group to validate unambiguous, testable requirements is critical in order to add value to a product or software project.

Conclusion

There must be no surprises during deployment, unless they are pleasant ones. Software deployment is generally fraught because User Acceptance Tests are so often mismanaged, left until the end of a software development phase, or skimmed through chaotically by untrained staff. The consequences are horrible, because defects turn up in the production system and can cause direct financial loss to the business. Additionally, this is the worst time to put them right, so the cost of the project increases rapidly.

Whether the approach is waterfall or incremental Agile, it is clear that the success of a new product or software application depends on how well it meets the requirements and demands of its users. Expert-led UAT is needed to check whether a system meets the business requirements of the users. This practice surfaces all-important functionality and business problems, while mitigating risks and reducing the high costs of redevelopment.
