Saturday, December 24, 2011

TASTRO – Tester’s Astro – What do your test signs foretell?

Aries

Your stars look good for the coming week; however, you might face environment downtime. Why not make a quick checklist on how to set the environment back up?
Taurus

Avoid calls during Rahu Kaal. If you have calls with your on-site coordinators during this time, beware!
Gemini

The planetary movements suggest that the build scheduled for a weekday will be delivered to the testing team on Friday after sunset. You will have plenty of time to read the Rapid Software Testing appendix and practice new testing ideas.
Cancer

Though you think you know it, you have no certainty until you try. You might see surprise appreciation for your smart work. Smart work could mean using oracles to test.
Leo

You are worried about an unplanned workload. Read the book Lessons Learned in Software Testing at home, and you may see the change at the office.
Virgo

Worried that your productivity appears to have come down? How long has it been since you took a break? Quick breaks between test sessions are important.
Libra

In busy times, be prepared to work late, eat pizza for dinner at work, and give up some weekends. Don't want to do that? Go beyond test cases and you will find more bugs.
Scorpio

There is a chance that your relationship with developers will go sour in the coming week, so treat them to chocolates. Developers are a rich source of information for a tester.
Sagittarius

Your customers will be under the influence of an aggressive Mars, and you will be forced to test whatever is thrown at you. Check the mission to be achieved to avoid falling into traps.
Capricorn

You will be trying to reach the stars by clicking here and there with your monkey paws. Stop doing that, and your career could get better.
Aquarius

Your managers will somehow have a strong notion that you have been marking test cases as “pass” without executing them. Honesty is important for a good tester.
Pisces

When you have crashed the software and are waiting for the system to reboot, prepare your own test-idea cheat sheet. For those who do, the future is bright.

Isn't this awesome? I feel testers like Rrajesh Barde are a huge boon to our industry. The beauty of my consulting was that I felt I met not just one Rrajesh Barde but many. I may write about the others in future posts.

A couple of years ago, I used to go into a consulting assignment as though I were superior and consulted people because they were inferior. These days, I go into consulting to be humbled by people like the ones I meet.

Please, everybody, stretch your creativity; you will find an Andy Glover or a Rrajesh Barde in you. For those who want to follow Rrajesh's blog, here is the link. He also came up with a concept called Bug Burji (burji is a dish made of egg, which we call egg burji; Rrajesh made a Bug Burji out of it). Rrajesh, you inspire me. I hope that after reading your work, a few others will join me in admiring your work and contribution.

I tell myself that I was born to witness the beginning of this golden era of software testing. I don't know if you can even see what I am experiencing.


Friday, December 23, 2011

Does Quality Assurance Remove the Need for Quality Control?

“If QA (Quality Assurance) is done, then why do we need to perform QC (Quality Control)?” This thought may come to mind sometimes, and it looks like a valid point too. In other words, if we have followed all the pre-defined processes, policies, and standards correctly and completely, why do we need to perform a round of QC?

In my opinion, QC is required even after QA is done. In QA we define the processes, policies, and strategies, establish standards, and develop checklists to be used and followed throughout the life cycle of a project. In QC we follow all those defined processes, standards, and policies to make sure that the project has been developed with high quality and at least meets the customer's expectations.

QA does not itself assure quality; rather, it creates the processes and ensures they are being followed so that quality is assured. QC does not control quality; rather, it measures quality. QC measurement results can be used to correct or modify the QA processes, and the improved processes can then be applied to new projects as well.
Quality control activities are focused on the deliverable itself. Quality assurance activities are focused on the processes used to create the deliverable.

QA and QC are both powerful techniques that can be used to ensure that deliverables meet customers' high quality expectations.

For example, suppose we use an issue-tracking system to log bugs while testing a web application. QA would include defining the standard for filing a bug: what details a bug report should contain, such as a summary of the issue, where it was observed, steps to reproduce it, screenshots, and so on. This is the process for creating the deliverable, the bug report. When a bug is actually added to the issue-tracking system based on these standards, that bug report is our deliverable.
Now suppose that at a later stage of the project we realize that adding a ‘probable root cause’ field, based on the tester's analysis, would give the development team more insight. We would then update our pre-defined process, and the change would eventually be reflected in our bug reports as well. This is how QC gives input back to QA to further improve it.
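
To make the distinction concrete, here is a minimal sketch in Python; the field names are hypothetical. QA defines the standard a bug report must meet, and QC measures each actual report against that standard:

    # QA: the pre-defined standard that every bug report must follow.
    REQUIRED_FIELDS = ["summary", "observed_in", "steps_to_reproduce", "screenshot"]

    # Later, QC feedback improves the QA process itself (the new field below):
    REQUIRED_FIELDS.append("probable_root_cause")

    def audit_bug_report(report):
        """QC: measure the deliverable (a bug report) against the standard."""
        return [field for field in REQUIRED_FIELDS if not report.get(field)]

    report = {
        "summary": "Checkout button unresponsive",
        "observed_in": "payments page, build 1.4.2",
        "steps_to_reproduce": "1. Add an item to the cart  2. Click Checkout",
    }
    print(audit_bug_report(report))   # -> ['screenshot', 'probable_root_cause']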


Thursday, December 22, 2011

Google's Record Playback Framework


At GTAC, folks asked how well the Record/Playback Framework (RPF) works in the Browser Integrated Test Environment (BITE). We were originally skeptical ourselves, but figured somebody should try. Here is some anecdotal data and some background on how we started measuring the quality of RPF.
The idea is to let users simply use the application in the browser, record their actions, and save them as JavaScript to play back later as a regression test or a repro. Like most test tools, especially code-generating ones, it works most of the time, but it's not perfect. Po Hu, the developer of RPF, had an early version working and decided to try it out on a real-world product, working with the Chrome Web Store team to see how it would hold up for them. Why the Chrome Web Store? It is a website with lots of data-driven UX, authentication, and file upload; it was changing all the time and breaking existing Selenium scripts; it is a pretty hard web-testing problem; it targeted only the Chrome browser; and, most importantly, the team was sitting 20 feet from us.

Before sharing it with the Chrome Web Store test developer Wensi Liu, we invested a bit of time in doing something we thought was clever: fuzzy matching and inline updating of the test scripts. Selenium rocks, but after an initial regression suite is created, many teams end up spending a lot of time simply maintaining their Selenium tests as the products constantly change. Rather than simply failing, as the existing Selenium automation would do when a certain element isn't found, and requiring some manual DOM inspection, updating of the Java code, and re-deploying, re-running, and re-reviewing the test code, what if the test script just kept running and updates to the code could be as simple as point and click? We would keep track of all the attributes of the recorded element, and during execution we would calculate the percent match between the recorded attributes and values and those found while running. If the match wasn't exact but was within tolerances (say, only the parent node or the class attribute had changed), we would log a warning and keep executing the test case. If the subsequent test steps also appeared to be working, the tests would keep executing during test passes, only logging warnings; in debug mode, they would pause and allow a quick update of the matching rule with point and click via the BITE UI. We figured this might reduce the number of false-positive test failures and make updating tests much quicker.
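
The post doesn't show the implementation, but the core matching idea can be sketched in a few lines of Python; the attribute names and tolerance value here are illustrative, not the real RPF code:

    # Score how closely a live DOM element matches the recorded one; keep
    # running when the score is within tolerance, warning instead of failing.
    RECORDED = {"tag": "button", "id": "buy-now", "class": "btn primary",
                "text": "Add to cart", "parent": "div#cart"}

    def match_score(recorded, live):
        """Fraction of recorded attribute values that still match."""
        hits = sum(1 for key, value in recorded.items() if live.get(key) == value)
        return hits / len(recorded)

    def locate(candidates, recorded, tolerance=0.8):
        best = max(candidates, key=lambda element: match_score(recorded, element))
        score = match_score(recorded, best)
        if score == 1.0:
            return best                      # exact match: proceed silently
        if score >= tolerance:               # e.g. only the class attribute changed
            print("WARNING: %.0f%% match, continuing" % (score * 100))
            return best
        raise LookupError("no match within tolerance; pause for a point-and-click fix")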

We were wrong, but in a good way!

We talked to the tester after a few days of leaving him alone with RPF. He'd already re-created most of his Selenium suite of tests in RPF, and the tests were already breaking because of product changes (it's a tough life for a tester at Google, keeping up with the developers' rate of change). He seemed happy, so we asked him how the new fuzzy-matching fanciness was working out. Wensi was like “oh yeah, that? Don't know. Didn't really use it...”. We started to wonder whether our update UX was confusing, hard to discover, or simply broken. Instead, Wensi said that when a test broke, it was just far easier to re-record the script. He had to re-test the product anyway, so why not turn recording on while manually verifying things still worked, remove the old test, and save the newly recorded script for replay later?

During that first week of trying out RPF, Wensi found:
  • 77% of the features in Webstore were testable by RPF
  • Generating regression test scripts via this early version of RPF was about 8X faster than building them via Selenium/WebDriver
  • The RPF scripts caught 6 functional regressions and many more intermittent server failures.
  • Common setup routines like login should be saved as modules for reuse (a crude version of this was working soon after)
  • RPF worked on Chrome OS, where Selenium by definition could never run because it requires client-side binaries. RPF worked because it is a pure cloud solution, running entirely within the browser and communicating with a backend on the web.
  • Bugs filed via BITE provided a simple link that would install BITE on the developer's machine and re-execute the repro on their side. No need for manually crafted repro steps. This was cool.
  • Wensi wished RPF were cross-browser. It only worked in Chrome, but people did occasionally visit the site with a non-Chrome browser.
So we knew we were onto something interesting and continued development. In the near term, though, Chrome Web Store testing went back to Selenium, because that final 23% of features required some local Java code to handle file upload and secure checkout scenarios. In hindsight, a little testability work on the server could have solved this with some AJAX calls from the client.

We performed a check of how RPF fared on some of the top sites of the web, which is shared on the BITE project wiki. It is now a little out of date, with lots more fixes since, but it gives you a feel for what doesn't work. Consider RPF alpha quality at this point: it works for most scenarios, but there are still some serious corner cases.

Joe Muharsky drove a lot of the UX (user experience) design for BITE, turning our original, clunky, developer- and functional-centric UX into something intuitive. Joe's key focus was to keep the UX out of the way until it is needed, and to make things as self-discoverable and findable as possible. We haven't done formal usability studies yet, but we have run several experiments with external crowd testers using these tools with minimal instructions, as well as with internal dogfooders filing bugs against Google Maps, with little confusion. Some of the fancier parts of RPF have some hidden easter eggs of awkwardness, but the basic record-and-playback scenarios seem to be obvious to folks.

RPF has graduated from the experimental centralized test team to become a formal part of the Chrome team, and it is used regularly for regression test passes. The team also has an eye on enabling non-coding, crowd-sourced testers to generate regression scripts via BITE/RPF.


Monday, December 19, 2011

From the mailbox: selecting test automation tools

A long time ago, all the way back in 1999, I wrote an article on selecting GUI test automation tools. Someone recently found it and wrote me an email to ask about getting help with evaluating tools. I decided my response might be useful for other people trying to choose tools, so I turned it into a blog post.
By the way, so much has changed since my article on GUI testing tools was published back in 1999 that my approach is a little different these days. There are so many options available now that weren't available 12 years ago, and new options seem to appear nearly every day.
Back in 1999 I advocated a heavy-weight evaluation process. I helped companies evaluate commercial tools, and at the time it made sense to spend lots of time and money on the evaluation process. The cost of making a mistake in tool selection was too high.
After all, once we chose a tool we would have to pay for it, and that licensing fee became a sunk cost. Further, the cost of switching between tools was exorbitant. Tests were tool-specific and could not move from one tool to another, so we'd have to throw away anything we created in Tool A if we later decided to adopt Tool B, and any new tool would cost even more money in licensing fees. So spending a month evaluating tools before making a 6-figure investment made sense.
But now the market has changed. Open source tools are surpassing commercial tools, so the license fee is less of an issue. There are still commercial tools, but I always recommend looking at the open source tools first to see if there’s anything that fits before diving into commercial tool evaluations.
So here’s my quick and dirty guide to test tool selection.
If you want a tool to do functional test automation (as opposed to unit testing), you will probably need both a framework and a driver.
  • The framework is responsible for defining the format of the tests, making the connection between the tests and test automation code, executing the tests, and reporting results.
  • The driver is responsible for manipulating the interface.
So, for example, on my side project entaggle.com, I use Cucumber (framework) with Capybara (driver).
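As a rough Python analogue of that pairing, here is a minimal sketch using behave as the framework and Selenium WebDriver as the driver; the feature text, URL, and element names are hypothetical:

    # steps/sign_in_steps.py -- step definitions wire the natural-language
    # steps (the framework's side) to browser manipulation (the driver's side).
    #
    # The corresponding features/sign_in.feature would read:
    #   Scenario: Successful sign-in
    #     Given I am on the sign-in page
    #     When I sign in as "alice"
    #     Then I should see my dashboard

    from behave import given, when, then
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @given("I am on the sign-in page")
    def open_sign_in(context):
        context.browser = webdriver.Chrome()        # the driver manipulates the UI
        context.browser.get("https://example.com/sign_in")

    @when('I sign in as "{name}"')
    def sign_in(context, name):
        context.browser.find_element(By.NAME, "login").send_keys(name)
        context.browser.find_element(By.NAME, "password").send_keys("secret")
        context.browser.find_element(By.CSS_SELECTOR, "input[type=submit]").click()

    @then("I should see my dashboard")
    def see_dashboard(context):
        # the framework runs this check and reports the result
        assert "Dashboard" in context.browser.title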
To decide what combination of framework(s) and driver(s) is right for your context…

Step 1. Identify possible frameworks…

Consideration #1: Test Format
The first thing to consider is whether you need a framework that supports expressing tests in a natural language (e.g. English), or in code.
This is a question for the whole team, not just the testers or programmers. Everyone on the project must be able to at least read the functional tests. Done well, the tests can become executable requirements. So the functional testing framework needs to support test formats that work for collaboration across the whole team.
Instead of assuming what the various stakeholders want to see, ask them.
In particular, if you are contemplating expressing tests in code, make very sure to ask the business stakeholders how they feel about that. And I don’t mean ask them like, “Hey, you don’t mind the occasional semi-colon, right? It’s no big deal, right? I mean, you’re SMART ENOUGH to read CODE, right?” That kind of questioning backs the business stakeholders into a corner. They might say, “OK,” but it’s only because they’ve been bullied.
I mean mock up some samples and ask like this: “Hey, here’s an example of some tests for our system written in a framework we’re considering using. Can you read this? What do you think it’s testing?” If they are comfortable with the tests, the format is probably going to work. If not, consider other frameworks.
Note that the reason that it’s useful to express expectations in English isn’t to dumb down the tests. This isn’t about making it possible for non-technical people to do all the automation.
Even with frameworks that express tests in natural language, there is still programming involved. Test automation is still inherently about programming.
But by separating the essence of the tests from the test support code, we’re able to separate the concerns in a way that makes it easier to collaborate on the tests, and further the tests become more maintainable and reusable.
When I explain all that, people sometimes ask me, “OK, that’s fine, but what’s the EASIEST test automation tool to learn?” Usually they’re thinking that “easy” is synonymous with “record and playback.”
Such easy paths may look inviting, but they are a trap that leads into a deep, dark swamp from which there may be no escape. None of the tools I've talked about do record and playback. Yes, there is a Selenium recorder; I do not recommend using it except as a way to learn.
So natural language tests facilitate collaboration. But I’ve seen organizations write acceptance tests in Java with JUnit using Selenium as the driver and still get a high degree of collaboration. The important thing is the collaboration, not the test format.
In fact, there are advantages to expressing tests in code.
Using the same unit testing framework for the functional tests and the code-facing tests removes one layer of abstraction. That can reduce the complexity of the tests and make it easier for the technical folks to create and update the tests.
But the times I have seen this work well for an organization are when the business people were all technology-savvy, so they were able to read the tests just fine even when the tests were expressed in Java rather than English.
Consideration #2: Programming Language
The next consideration is the production code language.
If your production code is written in… | …and you want to express expectations in natural language, consider… | …or you want to express expectations in code, consider…
Java | Robot Framework, JBehave, Fitnesse, Concordion | JUnit, TestNG
Ruby | Cucumber | Test::Unit, RSpec
.NET | Specflow | NUnit

By the way, the tools I’ve mentioned so far are not even remotely close to a comprehensive list. There are lots more tools listed on the AA-FTT spreadsheet. (The AA-FTT is the Agile Alliance Functional Testing Tools group. It’s a program of the Agile Alliance. The spreadsheet came out of work that the AA-FTT community did. If you need help interpreting the spreadsheet, you can ask questions about it on the AA-FTT mail list.)
So, why consider the language that the production code is written in? I advocate choosing a tool that will allow you to write the test automation code in the same language (or at least one of the same languages if there are several) as the production code for a number of reasons:
  1. The programmers will already know the language. This is a huge boon for getting the programmers to collaborate on functional test automation.
  2. It’s probably a real programming language with a real IDE that supports automated refactoring and other kinds of good programming groovy-ness. It’s critical to treat test automation code with the same level of care as production code. Test automation code should be well factored to increase maintainability, remove duplication, and exhibit SOLID principles.
  3. It increases the probability that you'll be able to bypass the GUI for setting up conditions and data. You may even be able to leverage test helper code from the unit tests. For example, on entaggle.com, I have some data generation code that is shared between the unit tests and the acceptance tests (see the sketch after this list). Such reuse drastically cuts down on the cost of creating and maintaining automated tests.
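
As an illustration of that kind of sharing, here is a hypothetical Python helper that both the unit tests and the acceptance-test step code could import, so GUI-level tests can set up data without clicking through screens:

    # test_support/factories.py -- hypothetical data-generation helper shared
    # by the unit tests and the acceptance tests alike.
    import itertools

    _ids = itertools.count(1)

    def make_user(**overrides):
        """Build a valid, unique user record; a test overrides only the
        fields it actually cares about."""
        n = next(_ids)
        user = {
            "name": "user%d" % n,
            "email": "user%d@example.test" % n,
            "active": True,
        }
        user.update(overrides)
        return user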
Consideration #3: The Ecosystem
Finally, as you are considering frameworks, consider also the ecosystem in which that framework will live. I personally dismiss any test framework that does not play nicely with both the source control system and the automated build process or continuous integration server. That means at a bare minimum:
  • All assets must be flat files, no binaries. So no assets stored in databases, and no XLS spreadsheets (though comma separated values or .CSV files can be OK). In short, if you can’t read all the assets in a plain old text editor like Notepad, you’re going to run into problems with versioning.
  • It can execute from a command line and return an exit code of 0 if everything passes, or some other number if there's a failure. (You may need more than this to kick off the tests from the automated build and report results, but the exit-code criterion is absolutely critical; a minimal sketch follows this list.)
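
To illustrate how small that contract is, here is a minimal sketch of a build-server gate in Python, assuming (hypothetically) a behave suite living under features/:

    # ci_gate.py -- run the suite from the command line, exactly as a
    # continuous integration server would; no IDE or GUI involved.
    import subprocess
    import sys

    result = subprocess.run(["behave", "features/"])

    # The exit code is the entire interface to the build server:
    # 0 means everything passed; anything else must fail the build.
    sys.exit(result.returncode)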

Step 2. Choose your driver(s)…

A driver is just a library that knows how to manipulate the interface you’re testing against. You may actually need more than one driver depending on the interfaces in the system you’re testing. You might need one driver to handle web stuff while another driver can manipulate Windows apps.
Note that the awesome thing about the way test tools work these days is that you can use multiple drivers with any given functional testing framework. In fact, you can use multiple drivers in a single test, or have one test that executes against multiple interfaces: not a copy of the test, but the very same test. By separating concerns, that is, separating the framework from the driver, we make it possible for tests to be completely driver-agnostic.
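Here is a sketch of what that driver-agnostic separation can look like in Python; the driver interface and the tag operations are hypothetical, loosely flavored after entaggle.com:

    # The test is written against a small driver interface, so the very same
    # test can run through the web UI or straight against an HTTP API.
    class WebUiDriver:
        def create_tag(self, name):
            print("[web] fill in the tag form and submit %r" % name)
        def tag_exists(self, name):
            return True          # would really inspect the rendered page

    class HttpApiDriver:
        def create_tag(self, name):
            print("[api] POST /tags with %r" % name)
        def tag_exists(self, name):
            return True          # would really issue GET /tags

    def test_create_tag(driver):
        # No browser or endpoint details here; the driver hides them all.
        driver.create_tag("testing")
        assert driver.tag_exists("testing")

    test_create_tag(WebUiDriver())    # same test...
    test_create_tag(HttpApiDriver())  # ...different interface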
Choosing drivers is often a matter of just finding the most popular driver for your particular technical context. It’s hard for me to offer advice on which drivers are good because there are so many more drivers available than I know about. Most of the work I do these days is web-based. So I use Selenium / WebDriver.
To find a specific driver for a specific kind of interface, look at the tools spreadsheet or ask on the AA-FTT mail list.

Step 3. Experiment

Don’t worry about choosing The One Right tool. Choose something that fits your basic criteria and see how it works in practice. These days it’s so much less costly to experiment and see how things go working with the tool on real stuff than to do an extensive tool evaluation.
How can this possibly be? First, lots of organizations are figuring out that licensing costs are no longer an issue: open source tools rule. Better yet, if you go with a tool that lets you express tests in natural language, it's really not that hard to convert tests from one framework to another. I converted a small set of Robot Framework tests to Cucumber, and converting the tests themselves took almost no time; the formats were remarkably similar. The test automation code took a little longer, but there was less of it.
Given that the cost of making a mistake on tool choice is so low, I recommend experimenting freely. Try a tool for a couple weeks on real tests for your real project. If it works well for the team, awesome. If not, try a different one.
But whatever you do, don't spend a month (or more) in meetings speculating about which tools will work. Just pick something to start with so you can try it and see right away. (As you all know, empirical evidence trumps speculation.)
Eventually, if you are in a larger organization, you might find that a proliferation of testing frameworks becomes a problem. It may be necessary to reduce the number of technologies that have to be supported and make reporting consistent across teams.
But beware premature standardization. Back in 1999, choosing a single tool gave large organizations an economy of scale. They could negotiate better deals on licenses and run everyone through the same training classes. Such economies of scale are evaporating in the open source world where license deals are irrelevant and training is much more likely to be informal and community-based.
So even in a large organization I advocate experimenting extensively before standardizing.
Also, it’s worth noting that while I can see a need to standardize on a testing framework, I see much less need to standardize on drivers. So be careful about what aspects of the test automation ecosystem you standardize on.
