2012-07-05



Last week I spent a couple of days in Boston, MA attending a testing conference that a vendor of mine puts on every year. I don't often get to attend events like this, as work is normally insanely busy (notice the six weeks since I've blogged here). It was great to take some time out and learn about ways one of my departments (I manage BA and QA teams) could have a bigger impact on the organization.

(Note: the picture to the right is of me driving the water taxi across Boston Harbor, from the airport to the hotel. To say that this was an AWESOME way to start the trip would be the biggest of understatements. If you are ever flying into Boston and staying downtown, I highly recommend this relatively inexpensive and gorgeous method of travel. I can't guarantee you'll have the option of driving the boat, but it is awesome if you can swing it.)

What you will see here is a slightly redacted version of my notes from the two days. I've removed any reference to the vendor who hosted the event and to any of the clients who also presented at the conference. I thought this information was incredibly valuable and wanted to share it with all of you, but since I didn't ask beforehand if this was OK, I have decided to impersonalize it a bit. This doesn't in any way change the content; I just wanted to say all this up front so you all know why it's so 'generic'.

Session Highlights

Imagine Apple decides to enter our business. What would they change? What would they keep the same? These kinds of questions can be used to ensure we keep our edge. Think about how Apple tests software and the level of quality they achieve across their product lines. We don't know their processes, but we need to be thinking about what we can do to achieve similar success.

Continuous integration is the way to improve QA. Push the testing process left: don't wait until the QA phase to test, but start testing when requirements are elicited. Test the requirements themselves, create test scripts, and have the business sign off on those at the same time as the requirements.

You always need to know the state of the code at any point in the project. Use continuous integration dashboards to see when issues have crept into the code. Make sure that at the start of the cycle the automation is as solid as possible to get a good baseline, and check status regularly to identify issues before the manual tests begin.

Lots of open defects create lots of management work to track and resolve. Big backlogs are bad because they require more people just to manage the backlog. If you are not going to fix something anytime soon, close the defect and search for it if you need it in the future.

A stable code base is one of the most valuable assets in your organization. Delaying testing to the end of the project destroys code stability.

The effectiveness of your test lab is measured by the number of scripts you can run each day. The more efficiently you test, the faster you can code and release.

The Gartner analyst who spoke knows of a highly productive dev team that has zero defects and processes millions in transactions every night, with a new release every month.

Done, done, and done: it's coded, it's tested, and all tests passed. It's important to know what 'done' is for your organization.

Don't track partial completion; it's either done or it's not, and partial credit gives a false sense of progress. The focus should be on working software, a similar point to using continuous integration from earlier.

Behavior-driven development requires writing high-level test scripts that the user can read and run themselves. These should be written before coding, at the same time as requirements, and approved by the business.
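
To make that concrete, here is a minimal sketch of what a business-readable test could look like in JUnit. The Cart class is a hypothetical stand-in for your own domain objects; the point is that the given/when/then structure and naming let a business user follow what is being verified.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CheckoutTest {

        // Hypothetical domain stand-in so the example compiles on its own.
        static class Cart {
            private double total = 0;
            void add(String name, double price) { total += price; }
            double total() { return total; }
        }

        @Test
        public void givenACartWithTwoItems_whenTotaled_thenBothPricesAreIncluded() {
            // Given: a cart containing two items
            Cart cart = new Cart();
            cart.add("widget", 10.00);
            cart.add("gadget", 15.00);

            // When / Then: the total reflects both items
            assertEquals(25.00, cart.total(), 0.001);
        }
    }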

QA should focus on automating tests and validating the solution, not 'doing testing'.

Should focus on 'being agile' and not necessarily 'Agile.' We can get these results without Agile development processes, but it is probably a good idea to transition to Agile development anyway.

Start by writing automated tests for code that is changing, new code, and code that is broken. If the code doesn't change and it is working fine, don't write tests for it.
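
As a rough illustration of the "broken code" case, here is the kind of regression test you might write alongside a bug fix, pinning the corrected behavior so the defect can't quietly return. The formatter is a hypothetical stand-in for recently changed code.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class OrderNumberFormatterTest {

        // Stand-in for the method under test; imagine it was recently broken.
        static String format(int orderNumber) {
            return String.format("ORD-%06d", orderNumber);
        }

        @Test
        public void padsShortOrderNumbersToSixDigits() {
            // Bug report: short order numbers were not zero-padded.
            assertEquals("ORD-000042", format(42));
        }

        @Test
        public void leavesLongOrderNumbersIntact() {
            assertEquals("ORD-123456", format(123456));
        }
    }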

Start measuring days of QA downtime due to environment and code issues. Very important metric that can help build a business case for implementing automation.

Less than 25% of the organizations represented here have any continuous integration environment set up and running. About 60% have no automation at all, another 30% have some automation, and the remaining 10% have a significant amount of their tests automated. No one had more than 60% of their test suite automated.

Even large companies don't have much experience in mobile or cloud; one large financial services company didn't really have any experience with either. Most people had issues with testing mobile, especially Android, due to device fragmentation.

Spend more time on defect prevention than detection. Spend more time teaching developers how to test better. Fewer QA resources and more devs who test while writing code. QA needs to become evangelists for quality.

A test package should be completed before a development item can be dropped into the dev backlog. This consists of which scripts need to be run and whether any new scripts need to be created.

T-shaped individual: deep knowledge in testing, but broad knowledge across other project roles.

A large online travel company

It takes 1-3 testers 3-4 days to certify a release. It used to take a large team 6-8 weeks to do the same.

Set standards for cross-browser and load testing across all teams.

They now use a common tool to see defect rates and environment health across teams.

They use a Jenkins server with Selenium for automation testing, JUnit for Java unit tests, Chibot for cross-browser testing, and TestRail for test case management.
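
For anyone unfamiliar with that stack, here is a minimal sketch of a Selenium WebDriver check run as a JUnit test, the kind of script a Jenkins job would execute on each build. The URL and element name are placeholders, not anything the presenter showed.

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertTrue;

    public class LoginPageTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
        }

        @Test
        public void loginPageShowsUsernameField() {
            // Placeholder URL and field name; swap in your application's.
            driver.get("http://test.example.com/login");
            assertTrue(driver.findElement(By.name("username")).isDisplayed());
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }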

They use a host data provider for mocked services. This helps them test integrations without being reliant on their partners to have active test environments.
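
They didn't share their implementation, but conceptually a mocked service can be as simple as a local HTTP stub that returns canned responses in place of the partner. A rough sketch using the JDK's built-in HTTP server follows; the endpoint and payload are made up.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class PartnerServiceStub {

        public static void main(String[] args) throws IOException {
            // Listen locally; tests point here instead of at the real partner.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/partner/rates", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    // Canned response a real partner endpoint might return.
                    byte[] body = "{\"currency\":\"USD\",\"rate\":1.0}".getBytes("UTF-8");
                    exchange.sendResponseHeaders(200, body.length);
                    OutputStream out = exchange.getResponseBody();
                    out.write(body);
                    out.close();
                }
            });
            server.start();
            System.out.println("Stubbed partner service on http://localhost:8080/partner/rates");
        }
    }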

They got rid of HP Quality Center; it was not a good fit for agile processes.

They do multiple daily releases to production. Did over 800 releases to production last year.

Started this process change in 2008.

Co-location of the quality team with developers. If you're developing elsewhere, you need to have testing with the dev team.

The feature teams are responsible for quality, not the QA department head. If the teams say quality is high, then the release is done.

With less, do more. A positive version of the saying 'do more with less'.

They have gone from 200+ QA people down to 130 and are trying to get down to 30 by the end of this year. The vendor will have additional testing resources offshore with the offshore devs, but the company will only have 30 internal people, who will focus on quality measurement, not actual testing.

They use Git for code management. They have multiple code repositories; they don't have a single, large code base but multiple smaller code bases. Jenkins will do integration testing between code bases.

$500M in revenue. 1400 employees worldwide, 700 in IT. 130 currently in QA.

A large financial services firm

Set up a governance committee to define timelines for setting up better quality processes. Have advisors from the business and set metrics and practices in business terms.

Give the status of systems as integrated, not individually. Take a platform approach to measuring value to the business.

Create a quality performance index. Need to be able to always answer the questions, "What would work if we release tomorrow? What are our gaps?"

Need to do regular quality reviews of our releases.

Need to measure how long it takes, after the code for a release lands, to fix all defects.

Find quality champions in all roles in the organization.

Break functional scripts into smaller chunks and make them into unit tests. They are easier to maintain and there is less chance of failure when code changes.
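
To make the split concrete, here is a hedged example: instead of one long functional script that rates an insurance policy end to end, each rating rule gets its own small JUnit test. PremiumCalculator and its rules are hypothetical.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PremiumCalculatorTest {

        // Hypothetical stand-in for logic a long functional script once covered.
        static class PremiumCalculator {
            double basePremium(int age) { return age < 25 ? 1200.0 : 800.0; }
            double withSafeDriverDiscount(double premium) { return premium * 0.9; }
        }

        @Test
        public void driversUnder25PayTheHigherBaseRate() {
            assertEquals(1200.0, new PremiumCalculator().basePremium(22), 0.001);
        }

        @Test
        public void olderDriversPayTheStandardBaseRate() {
            assertEquals(800.0, new PremiumCalculator().basePremium(40), 0.001);
        }

        @Test
        public void safeDriverDiscountTakesTenPercentOff() {
            assertEquals(720.0, new PremiumCalculator().withSafeDriverDiscount(800.0), 0.001);
        }
    }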

Metrics are better when functional testing happens with the devs, prior to regression, and root cause analysis is better too.

Achieved a reduced testing duration of 2.5 weeks, a 95% pass rate on the first day of functional testing, and a 10% reduction in the cost of quality for the release.

A medium-sized health insurance company

Enterprise quality score: the PMO tracks results, but the quality division defines the score. It is reported weekly to the executive leadership team, and they have a dashboard for this purpose. They track pre- and post-production to know if any quality issues are impacting customers.

Measurements they use: number of open defects, fix time for defects, budget run rate, and defect severity.

The weights for each measure are different for each project, but each project uses the same measures.

Each measure is scored 1-5, with a larger number being best.

They track this with a field in the defect system. The business participates in the defect process and helps to set this score.
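
They didn't show their formula, but a weighted score over fixed measures might be computed like this. The measure names match the talk; the weights and scores are made-up examples, with weights summing to 1.0 so the result stays on the 1-5 scale.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class QualityScore {

        // Weighted average of 1-5 measure scores; weights should sum to 1.0.
        static double weightedScore(Map<String, Integer> scores, Map<String, Double> weights) {
            double total = 0;
            for (Map.Entry<String, Integer> e : scores.entrySet()) {
                total += e.getValue() * weights.get(e.getKey());
            }
            return total;
        }

        public static void main(String[] args) {
            // Hypothetical project: same measures for every project, weights set per project.
            Map<String, Integer> scores = new LinkedHashMap<String, Integer>();
            scores.put("open defects", 4);
            scores.put("defect fix time", 3);
            scores.put("budget run rate", 5);
            scores.put("defect severity", 2);

            Map<String, Double> weights = new LinkedHashMap<String, Double>();
            weights.put("open defects", 0.4);
            weights.put("defect fix time", 0.2);
            weights.put("budget run rate", 0.1);
            weights.put("defect severity", 0.3);

            // 4*0.4 + 3*0.2 + 5*0.1 + 2*0.3 = 3.3
            System.out.printf("Quality score: %.2f out of 5%n", weightedScore(scores, weights));
        }
    }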

Use ReplayDirector to see how defects occur.

Summary

If I could sum the whole two days up, it would be to say three things: 1) move testing to as early in the project lifecycle as you can, 2) always be looking for ways to innovate your testing strategy and 3) be advocates for quality in your organizations.


