Saturday, December 17, 2011

Static Methods

Static methods are procedural in nature and they have no place in an OO world. I can already hear the screams, so let me explain why, but first we need to agree that global variables and state are evil. If you agree with the previous statement, then for a static method to do something interesting it needs to have some arguments; otherwise it will always return a constant. A call to staticMethod() must always return the same thing if there is no global state. (Time and random number generators rely on global state, so they don't count; object instantiation may produce a different instance each time, but the object graph will be wired the same way.)

This means that for a static method to do something interesting it needs to have arguments. But in that case I will argue that the method simply belongs on one of its arguments. Example: Math.abs(-3) should really be -3.abs(). Now that does not imply that -3 needs to be an object, only that the compiler needs to do the magic on my behalf, which, BTW, Ruby got right. If you have multiple arguments, you should choose the argument with which the method interacts the most.
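To make that concrete, here is a minimal Java sketch of the move (the Money type and its field are hypothetical, since Java's primitive -3 cannot carry methods the way Ruby's can):

class Money {
 private final int cents;
 Money(int cents) { this.cents = cents; }

 // What used to be a static MathUtil.abs(money) now lives on the data it inspects.
 Money abs() {
   return cents < 0 ? new Money(-cents) : this;
 }
}

Now new Money(-3).abs() reads the way -3.abs() does in Ruby, and the method travels with its data.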

But most justifications for static methods argue that they are "utility methods". Let's say that you want a toCamelCase() method to convert the string "my_workspace" to "myWorkspace". Most developers will solve this as StringUtil.toCamelCase("my_workspace"). But, again, I am going to argue that the method simply belongs on the String class and should be "my_workspace".toCamelCase(). We can't extend the String class in Java, so we are stuck; in many other OO languages, though, you can add methods to existing classes.
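For what it is worth, here is a minimal sketch of the utility Java forces on us (the conversion logic is my assumption of what toCamelCase should do, not code from any particular library):

final class StringUtil {
 private StringUtil() {} // no instances: exactly the procedural style discussed above

 static String toCamelCase(String s) {
   StringBuilder out = new StringBuilder();
   boolean upperNext = false;
   for (char c : s.toCharArray()) {
     if (c == '_') {
       upperNext = true; // drop the underscore, capitalize what follows
     } else {
       out.append(upperNext ? Character.toUpperCase(c) : c);
       upperNext = false;
     }
   }
   return out.toString();
 }
}

StringUtil.toCamelCase("my_workspace") returns "myWorkspace", but the call site still reads backwards compared to "my_workspace".toCamelCase().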

In the end I am sometimes (a handful of times per year) forced to write static methods due to limitations of the language. But that is a rare event, since static methods are death to testability. What I do find is that in most projects static methods are rampant.

Instance Methods

So you got rid of all of your static methods, but your code is still procedural. OO says that code and data live together. So when one looks at code, one can judge how OO it is without understanding what the code does, simply by looking at the relationship between data and code.
class Database {
 // some fields declared here
 boolean isDirty(Cache cache, Object obj) {
   for (Object cachedObj : cache.getObjects()) {
     if (cachedObj.equals(obj))
       return false;
   }
   return true;
 }
}

The problem here is that the method may as well be static! It is in the wrong place, and you can tell this because it does not interact with any of the data in the Database; instead it interacts with the data in the Cache, which it fetches by calling the getObjects() method. My guess is that this method belongs on one of its arguments, most likely Cache. If you move it to Cache you will notice that the Cache will no longer need the getObjects() method, since the for loop can access the internal state of the Cache directly. Hey, we simplified the code (moved one method, deleted one method) and we have made Demeter happy.
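Here is a sketch of that move, assuming the Cache keeps its objects in an internal List (the field and its type are my guess at the implementation):

import java.util.ArrayList;
import java.util.List;

class Cache {
 private final List<Object> objects = new ArrayList<Object>(); // internal state, no getter needed

 boolean isDirty(Object obj) {
   // The loop now reads the Cache's own data directly.
   for (Object cachedObj : objects) {
     if (cachedObj.equals(obj))
       return false;
   }
   return true;
 }
}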

The funny thing about getter methods is that they usually mean that the code which processes the data lives outside of the class which holds the data. In other words, the code and data are not together.
class Authenticator {
 Ldap ldap;
 Cookie login(User user) {
   if (user.isSuperUser()) {
     if ( ldap.auth(user.getUser(),
            user.getPassword()) )
       return new Cookie(user.getActingAsUser());
   } else if (user.isAgent()) {
       return new Cookie(user.getActingAsUser());
   } else {
     if ( ldap.auth(user.getUser(),
            user.getPassword()) )
       return new Cookie(user.getUser());
   }
   return null;
 }
}

Now I don't know if this code is well written or not, but I do know that the login() method has a very high affinity to User. It interacts with the user a lot more than it interacts with its own state. Except it does not really interact with the user; it uses it as dumb storage for data. Again, "code lives with data" is being violated. I believe that the method should be on the object with which it interacts the most, in this case on User. So let's have a look:
class User {
 String user;
 String password;
 boolean isAgent;
 boolean isSuperUser;
 String actingAsUser;

 Cookie login(Ldap ldap) {
   if (isSuperUser) {
     if ( ldap.auth(user, password) )
       return new Cookie(actingAsUser);
   } else if (isAgent) {
       return new Cookie(actingAsUser);
   } else {
     if ( ldap.auth(user, password) )
       return new Cookie(user);
   }
   return null;
 }
}

OK, we are making progress. Notice how the need for all of the getters has disappeared (and in this simplified example the need for the Authenticator class disappears), but there is still something wrong. The ifs branch on the internal state of the object. My guess is that this code-base is riddled with if (user.isSuperUser()). The issue is that if you add a new flag, you have to remember to change all of the ifs which are dispersed all over the code-base. Whenever I see an if or a switch on a flag, I can almost always tell that polymorphism is in order.
class User {
 String user;
 String password;

 Cookie login(Ldap ldap) {
   if ( ldap.auth(user, password) )
     return new Cookie(user);
   return null;
 }
}

class SuperUser extends User {
 String actingAsUser;

 Cookie login(Ldap ldap) {
   if ( ldap.auth(user, password) )
     return new Cookie(actingAsUser);
   return null;
 }
}

class AgentUser extends User {
 String actingAsUser;

 Cookie login(Ldap ldap) {
   return new Cookie(actingAsUser);
 }
}

Now that we have taken advantage of polymorphism, each different kind of user knows how to log in, and we can easily add a new kind of user type to the system. Also notice how the user no longer has all of the flag fields which were controlling the ifs to give the user different behavior. The ifs and flags have disappeared.

Now this raises the question: should the User know about the Ldap? There are actually two questions in there: 1) should User have a field reference to Ldap? and 2) should User have a compile-time dependency on Ldap?

Should User have a field reference to Ldap? The answer is no, because you may want to serialize the user to the database, but you don't want to serialize the Ldap. See here.

Should User have a compile-time dependency on Ldap? This is more complicated, but in general the answer depends on whether or not you are planning on reusing the User on a different project, since compile-time dependencies are transitive in strongly typed languages. My experience is that everyone writes code as if they will reuse it one day, but that day never comes, and when it does, the code is usually entangled in other ways anyway, so code reuse after the fact just does not happen. (Developing a library is different, since code reuse is an explicit goal.) My point is that a lot of people pay the price of "what if" but never get any benefit out of it. Therefore don't worry about it and make the User depend on Ldap.
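If you did one day need to break the compile-time dependency (say, for a genuine library), one option would be to have User depend on a small interface it owns, with Ldap implementing it elsewhere. This is only a hedged sketch of that alternative, not the post's recommendation; the AuthService name is hypothetical and Cookie is the class from the example above:

interface AuthService {
 boolean auth(String username, String password);
}

class User {
 String user;
 String password;

 // User now compiles against an interface it owns; Ldap would implement AuthService elsewhere.
 Cookie login(AuthService authService) {
   if (authService.auth(user, password))
     return new Cookie(user);
   return null;
 }
}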

The Advantages of Unit Testing Early

by Shyam Seshadri

Nowadays, when I talk with (read: rant at) anyone about why they should do test driven development or write unit tests, my spiel has gotten extremely similar and redundant to the point that I don't have to think about it anymore. But even when I pair with skeptics, even as I cajole and coax testable code or some specific refactorings out of them, I wonder: why is it that I have to convince you of the worth of testing? Shouldn't it be obvious?

And sadly, it isn't. Not to many people. To some people, I come advocating the rise of the devil itself. To others, it is this redundant, totally useless thing that is covered by the manual testers anyway. The general opinion seems to be, "I'm a software engineer. It is my job to write software. Nowhere in the job description does it say that I have to write these unit tests." Well, to be fair, I haven't heard that too many times, but they might as well be thinking it, given their investment in writing unit tests. And last time I checked, an engineer's role is to deliver working software. How do you even prove that your software works without having some unit tests to back you up? Do you pull it up and go through it step by step, and start cursing when it breaks? Because without unit tests, the odds are that it will.

But writing unit tests as you develop isn't just about proving that your code works (though that is a great portion of it). There are so many more benefits to writing unit tests. Let's talk in depth about a few of these below.

Instantaneous Gratification

The biggest and most obvious reason for writing unit tests (either as you go along, or before you even write code) is instantaneous gratification. When I write code (write, not spike; that is a whole different ball game that I won't get into now), I love to know that it works and does what it should do. If you are writing a smaller component of a bigger app (especially one that isn't complete yet), how are you even supposed to know if what you just painstakingly wrote works or not? Even the best engineers make mistakes.

Whereas with unit tests, I can write my code, then just hit my shortcut keys to run my tests, and voila, within a second or two, I have the results telling me that everything passed (in the ideal case) or what failed and at which line, so I know exactly what I need to work on. It gives you a safety net to fall back on, so you don't have to remember all the ways the code is supposed to work; something tells you whether it does or not.

Also, doing Test Driven Development is one of the best ways to keep track of what you are working on. I have times when I am churning out code and tests, one after the other, before I need to take a break. The concept of TDD is that I write a failing test, and then I write just enough code to pass that test. So when I take a break, I make it a point to leave off at a failing test, so that when I come back, I can jump right back into writing the code to get it to pass. I don't have to spend 15-20 minutes reading through the code to figure out where I left off. My asserts usually tell me exactly what I need to do.
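A minimal red/green sketch of that rhythm (JUnit-style, with hypothetical names): the failing assert left behind reads like a to-do item when you come back.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreeterTest {
 @Test
 public void greetsUserByName() {
   // Leave this red at the end of a session; it says exactly what to build next.
   assertEquals("Hello, Ada!", Greeter.greet("Ada"));
 }
}

class Greeter {
 // Just enough code to turn the test green, per the TDD loop described above.
 static String greet(String name) { return "Hello, " + name + "!"; }
}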

Imposing Modularity / Reusability

The very first rule of reusable code is that you have to be able to instantiate an instance of the class before you can use it. And guess what? With unit tests, you almost always have to instantiate an instance of the class under test. Therefore, writing a unit test is always a great first step in making code reusable. And the minute you start writing unit tests, most likely, you will start running into the common pain points of not having injectable dependencies (unless of course, you are one of the converts, in which case, good for you!).

Which brings me to the next point. Once you start having to jump through fiery hoops to set up your class just right to test it, you will start to realize when a class is getting bloated, or when a certain component belongs in its own class. For instance, why test the House when what you really want to test is the Kitchen it contains? So if the Kitchen class was initially part of the House, once you start writing unit tests, it becomes obvious that it belongs separately. Before long, you have modular classes which are small and self-contained and can be tested independently without effort. And it definitely helps keep the code base cleaner and more comprehensible.
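A sketch of that House/Kitchen split (all names and behavior hypothetical): once Kitchen is its own injectable class, a test can instantiate it without building a whole House.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

class Kitchen {
 private final int burners;
 Kitchen(int burners) { this.burners = burners; }
 boolean canCookFor(int guests) { return guests <= burners * 2; }
}

class House {
 private final Kitchen kitchen;
 House(Kitchen kitchen) { this.kitchen = kitchen; } // injected, not constructed inside
}

public class KitchenTest {
 @Test
 public void fourBurnersFeedEightGuests() {
   assertTrue(new Kitchen(4).canCookFor(8)); // no House required
 }
}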

Refactoring Safety Net

Any project, no matter what you do, usually ends up at a juncture where the requirements change on you. And you are left with the option of refactoring your codebase to add / change it, or rewriting from scratch. One, never rewrite from scratch, always refactor. It's always faster when you refactor, no matter what you may think. Two, what do you do when you have to refactor and you don't have unit tests? How do you know you haven't horribly broken something in that refactor? Granted, IDEs such as Eclipse and IntelliJ have made refactoring much more convenient, but adding new functionality or editing existing features is never simple.

More often than not, we end up changing some undocumented way the existing code behaved, and blow up 10 different things (it takes skill to blow up more, believe me, I have tried). And it's often something as simple as changing the way a variable is set or unset. In those cases, having unit tests (remember those things you were supposed to have written?) to confirm that your refactoring broke nothing is a godsend. I can't tell you the number of times I have had to refactor a legacy code base without this safety net. The only way to ensure I did it correctly was to write large integration tests (because, again, having no unit tests usually tends to increase coupling and reduce modularity, even in the most well designed code bases) which verified things at a higher level, and pray fervently that I broke nothing. Then I would spend a few minutes bringing up the app every time, and clicking on random things to make sure nothing blew up. A complete waste of my time, when I could have known the same thing by just running my unit tests.

Documentation

Finally, one of my favorite advantages of doing TDD or writing unit tests as I code. I have a short memory for code I have written. I could look back at the code I wrote two days ago and have no clue what I was thinking. In those cases, all I have to do is go look at the test for a particular method, and that almost always tells me what that method takes in as parameters and what it should be doing. A well constructed set of tests tells you about valid and invalid inputs, state that the method should modify, and output that it may return.

Now this is useful for people like me with short memory spans. But it is also useful, say, when you have a new person joining the team. We had this cushion the last time someone joined our team for a short period of time: when we asked him to add a particular check to a method, we just pointed him to the tests for that method, which basically told him what the method does. He was able to understand the requirements and go ahead and add the check with minimal handholding. And the tests gave him a safety net so he didn't break anything else while he was at it.

Also useful is the fact that later, when someone comes marching through your door demanding you fix this bug, you can always make sure whether it was a bug (in which case, you are obviously missing a test case) or a feature that they have now changed the requirements on (in which case you already have a test which proves it was your intent to do it, and thus not a bug).

Software Testing Categorization

by Miško Hevery


You hear people talking about small/medium/large/unit/integration/functional/scenario tests, but do most of us really know what is meant by that? Here is how I think about tests.

Unit/Small

Let's start with the unit test. The best definition I can find is that it is a test which runs super-fast (under 1 ms) and, when it fails, you don't need a debugger to figure out what is wrong. Now this has some implications. Under 1 ms means that your test cannot do any I/O. The reason this is important is that you want to run ALL (thousands) of your unit tests every time you modify anything, preferably on each save. My patience is two seconds max. In two seconds I want to make sure that all of my unit tests ran and nothing broke. This is a great world to be in, since if tests go red you just hit Ctrl-Z a few times to undo what you have done and try again. The immediate feedback is addictive. Not needing a debugger implies that the test is localized (hence the word unit, as in a single class).

The purpose of the unit test is to check the conditional logic in your code, your 'ifs' and 'loops'. This is where the majority of your bugs come from (see the theory of bugs). Which is why, if you do no other testing, unit tests are the best bang for your buck! Unit tests also make sure that you have testable code. If you have unit-testable code, then all other testing levels will be testable as well.

KeyedMultiStackTest.java from Testability Explorer is what I would consider a great example of a unit test. Notice how each test tells a story. It is not testMethodA, testMethodB, etc.; rather, each test is a scenario. Notice how at the beginning the tests are normal operations you would expect, but as you get toward the bottom of the file the tests become a little stranger. That is because those are weird corner cases which I discovered later. Now the funny thing about KeyedMultiStack.java is that I had to rewrite this class three times, since I could not get it to work under all of the test cases. One of the tests was always failing, until I realized that my algorithm was fundamentally flawed. By this time I had most of the project working, and this is a key class for the byte-code analysis process. How would you feel about ripping something so fundamental out of your system and rewriting it from scratch? It took me two days to rewrite it until all of my tests passed again. After the rewrite the overall application still worked. This is where you have an Aha! moment, when you realize just how amazing unit tests are.
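To show the flavor (this is not the actual KeyedMultiStackTest code, just a hedged imitation of its style): each test is a named scenario rather than testMethodA, and each runs in well under a millisecond with no I/O.

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.ArrayDeque;
import java.util.Deque;

public class StackScenarioTest {
 @Test
 public void newStackIsEmpty() {
   assertTrue(new ArrayDeque<String>().isEmpty());
 }

 @Test
 public void pushThenPopReturnsWhatWasPushed() {
   Deque<String> stack = new ArrayDeque<String>();
   stack.push("a");
   assertEquals("a", stack.pop());
 }

 @Test
 public void popOrderIsLastInFirstOut() { // scenarios get stranger toward the bottom
   Deque<String> stack = new ArrayDeque<String>();
   stack.push("a");
   stack.push("b");
   assertEquals("b", stack.pop());
   assertEquals("a", stack.pop());
 }
}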

Does each class need a unit test? A qualified no. Many classes get tested indirectly when testing something else. Usually simple value objects do not have tests of their own. But don't confuse not having tests with not having test coverage. All classes/methods should have test coverage. If you TDD, then this is automatic.

Medium/Functional

So you have proved that each class works individually, but how do you know that they work together? For this we need to wire related classes together just as they would be in production and exercise some basic execution paths through them. The question we are trying to answer here is not whether the 'ifs' and 'loops' work (we have already answered that), but whether the interfaces between classes abide by their contracts. A great example of a functional test is MetricComputerTest.java. Notice how the input of each test is an inner class in the test file and the output is ClassCost.java. To get the output, several classes have to collaborate to parse byte-codes, analyze code paths, and compute costs, until the final cost numbers are asserted.
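The shape of such a test, with hypothetical collaborators standing in for the byte-code parser and cost analyzer: real classes wired together as in production, asserted only on the combined output.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

class Tokenizer {
 String[] tokenize(String source) { return source.trim().split("\\s+"); }
}

class TokenCounter {
 int count(String[] tokens) { return tokens.length; }
}

class Analyzer {
 private final Tokenizer tokenizer = new Tokenizer(); // wired as in production, no test doubles
 private final TokenCounter counter = new TokenCounter();
 int tokenCount(String source) { return counter.count(tokenizer.tokenize(source)); }
}

public class AnalyzerFunctionalTest {
 @Test
 public void countsTokensAcrossTheWholePipeline() {
   assertEquals(3, new Analyzer().tokenCount("a b  c")); // exercises both collaborators together
 }
}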

Many of the classes are tested twice: once directly through unit tests as described above, and once indirectly through the functional tests. If you removed the unit tests, I would still have high confidence that the functional tests would catch most changes which break things, but I would have no idea where to look for a fix, since the mistake could be in any class involved in the execution path. The no-debugger-needed rule is broken here. When a functional test fails (and no unit tests are failing), I am forced to take out my debugger. When I find the problem, I retroactively add a unit test to 1) prove to myself that I understand the bug and 2) prevent this bug from happening again. Those retroactive unit tests are the reason why the tests at the end of the KeyedMultiStackTest.java file are "strange", for lack of a better word. They are things which I did not think of when I wrote the unit tests, but discovered when I wrote the functional tests and, after many hours behind the debugger, tracked down to the KeyedMultiStack.java class as the culprit.

Now computing metrics is just a small part of what Testability Explorer does (it also does reports and suggestions), but those are not tested in this functional test (there are other functional tests for that). You can think of functional tests as covering a set of related classes which form a cohesive functional unit of the overall application. Here are some of the functional areas in Testability Explorer: java byte-code parsing, java source parsing, c++ parsing, cost analysis, 3 different kinds of reports, and the suggestion engine. Each of these has a unique set of related classes which work together and need to be tested together, but for the most part they are independent.

Large/End-to-End/Scenario

We have proved that the 'ifs' and 'loops' work and that the contracts are compatible; what else can we test? There is still one class of mistake we can make: you can wire the whole thing up wrong. For example, passing in null instead of a report, not configuring the location of the jar file for parsing, and so on. These are not logical bugs, but wiring bugs. Luckily, wiring bugs have the nice property that they fail consistently and usually spectacularly, with an exception. Here is an example of an end-to-end test: TestabilityRunnerTest.java. Notice how these tests exercise the whole application and do not assert much. What is there to assert? We have already proven that everything works; we just want to make sure that it is wired properly.
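A hedged sketch of the shape (the Application and Report types are stand-ins, not TestabilityRunner itself): wire everything the way production would, run it, and assert very little.

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

class Report {
 final String text;
 Report(String text) { this.text = text; }
}

class Application {
 private final String jarPath;
 Application(String jarPath) {
   if (jarPath == null)
     throw new IllegalArgumentException("no jar configured"); // wiring bugs fail loudly
   this.jarPath = jarPath;
 }
 Report run() { return new Report("analysis of " + jarPath); }
}

public class ApplicationEndToEndTest {
 @Test
 public void wholeApplicationWiresUpAndProducesAReport() {
   assertNotNull(new Application("sample.jar").run()); // little else to assert
 }
}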

Why are we embarrassed to admit that we don't know how to write tests?

Take your average developer and ask "do you know language/technology X?" None of us will feel any shame in admitting that we do not know X. After all there are so many languages, frameworks and technologies, how could you know them all? But what if X is writing testable code? Somehow we have trouble answering the question "do you know how to write tests?" Everyone says yes, whether or not we actually know it. It is as if there is some shame in admitting that you don't know how to write tests.
Now I am not suggesting that people knowingly lie here, it is just that they think there is nothing to it. We think: I know how to write code, I think my code is pretty good, therefore my code is testable!
I personally think that we would do a lot better if we recognized testability as a skill in its own right. Such skills are not innate and take years of practice to develop. We could then treat it as any other skill and freely admit that we don't know it. We could then do something about it. We could offer classes, or other materials to grow our developers, but instead we treat it like breathing. We think that any developer can write testable code.
It took me two years of writing tests first, where I had as many tests as production code, before I started to understand the difference between testable and hard-to-test code. Ask yourself: how long have you been writing tests? What percentage of the code you write is tests?
Here is a question which you can ask to prove my point: "How do you write hard to test code?" I like to ask this question in interviews, and most of the time I get silence. Sometimes I get people to say: make things private. Well, if visibility is your only problem, I have a RegExp for you which will solve all of your problems. The truth is a lot more complicated; code is hard to test due to its structure, not due to its naming conventions or visibility. Do you know the answer?
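Since most people never get past "make things private", here is a hedged sketch of what the structural answer looks like (all names hypothetical): nothing below is private, yet a test cannot get between this class and its collaborators.

import java.util.Arrays;
import java.util.List;

class Invoice {}

class Database {
 private static final Database INSTANCE = new Database(); // global state shared across tests
 static Database getInstance() { return INSTANCE; }
 List<Invoice> findOverdue() { return Arrays.asList(new Invoice()); }
}

class SmtpClient {
 SmtpClient(String host) {}
 void send(Invoice invoice) { /* real network I/O inside a "unit" */ }
}

class InvoiceMailer {
 void mailOverdueInvoices() {
   Database db = Database.getInstance(); // reached through a static call: no seam
   for (Invoice invoice : db.findOverdue()) {
     new SmtpClient("smtp.example.com").send(invoice); // 'new' buried in the method: no way to substitute a fake
   }
 }
}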
We all start at the same place. When I first heard about testing, I immediately thought about writing a framework which would pretend to be a user so that I could put the app through its paces. It is only natural to think this way. These kinds of tests are called end-to-end tests (or scenario or large tests), and they should be the last kind of tests you write, not the first thing you think of. End-to-end tests are great for locating wiring bugs but are pretty bad at locating logical bugs. And most of your mistakes are logical bugs; those are the hard ones to find. I find it a bit amusing that to fight buggy code we write an even more complex framework which pretends to be the user, so now we have even more code to test.
Everyone is in search of some magic test framework, technology, or know-how which will solve their testing woes. Well, I have news for you: there is no such thing. The secret of testing is in writing testable code, not in knowing some magic on the testing side. And it certainly is not in some company which will sell you a test automation framework. Let me make this super clear: The secret of testing is in writing testable code! You need to go after your developers, not your test organization.
Now let's think about this. Most organizations have developers who write code and then a test organization to test it. So let me make sure I understand: there is a group of people who write untestable code and a group which desperately tries to put tests around that untestable code. (Oh, and the test group is not allowed to change the production code.) The developers are where the mistakes are made, and the testers are the ones who feel the pain. Do you think that the developers have any incentive to change their behavior if they don't feel the pain of their mistakes? Can the test organization be effective if it can't change the production code?
It is so easy to hide behind a "framework" which needs to be built or bought, after which things will be better. But the root cause is the untestable code, and until we learn to admit that we don't know how to write testable code, nothing is going to change...


Friday, December 16, 2011

Smoke Testing vs. Sanity Testing: What You Really Need to Know


If you spend any time in forums in which new testers can be found, it won’t be long before someone asks “What is the difference between smoke testing and sanity testing?”
“What is the difference between smoke testing and sanity testing?” is a unicorn question. That is, it’s a question that shouldn’t be answered except perhaps by questioning the question: Why does it matter to you? Who’s asking you? What would you do if I gave you an answer? Why should you trust my answer, rather than someone else’s? Have you looked it up on Google? What happens if people on Google disagree?
But if you persist and ask me, here’s what I will tell you:
The distinction between smoke and sanity testing is not generally important. In fact, it’s one of the most trivial aspects of testing that I can think of, offhand. Yet it does point to something that is important.
Both smoke testing and sanity testing refer to a first-pass, shallow form of testing intended to establish whether a product or system can perform the most basic functions. Some people call such testing “smoke testing”; others call it “sanity testing”. “Smoke testing” derives from the hardware world; if you create an electronic circuit, power it up, and smoke comes out somewhere, the smoke test has failed. Sanity testing has no particular derivation that I’m aware of, other than the common dictionary definition of the word. Does the product behave in some crazy fashion? If so, it has failed the sanity test.
Do you see the similarity between these two forms of testing? Can you make a meaningful distinction between them? Maybe someone can. If so, let them make it. If you’re talking to some person, and that person wants to make a big deal about the distinction, go with it. Some organizations make a distinction between smoke and sanity testing; some don’t. If it seems important in your workplace, then ask in your workplace, and adapt your thinking accordingly while you’re there. If it’s important that you provide a “correct” answer on someone’s idiotic certification exam, give them the answer they want according to their “body of knowledge”. Otherwise, it’s not important. Don’t worry about it.
Here’s what is important: wherever you find yourself in your testing career, people will use language that has evolved as part of the culture of that organization. Some consultancies or certification mills or standards bodies claim the goal of providing “a common worldwide standard language for testing”. This is as fruitless and as pointless a goal as a common worldwide standard language for humanity. Throughout all of human history, people have developed different languages to address things that were important in their cultures and societies and environments. Those languages continue to develop as change happens. This is not a bad thing. This is a good thing.
There is no term in testing of which I am aware whose meaning is universally understood and accepted. There’s nothing either wrong or unusual about that. It’s largely true outside the testing world too. Pick an English word at random, and odds are you’ll find multiple meanings for it. Examples:
  • Pick (choose, plectrum for a guitar)

  • English (a language, spin on a billiard ball)

  • word (a unit of speech, a 32-bit value)

  • random (without a definite path, of equal probability)

  • odds (probability, numbers not divisible by two)

  • multiple (more than one, divisible by)

  • meaning (interpretation, significance)
Never mind the shades and nuances of interpretation within each meaning of each word! And notice that “never mind”, in this context, is being used ironically. Here, “never mind” doesn’t mean “forget” or “ignore”; here, it really means the opposite: “also pay attention to”!
Not only is there no universally accepted term for anything, there’s no universally accepted authority that could authoritatively declare or enforce a given meaning for all time. (Some might point to law, claiming that there are specific terms which have solid interpretations. If that were true, we wouldn’t need courts or lawyers.)
If you find yourself in conversation (or in an interview) with someone who asks you “Do you do X?”, and you’re not sure what X is by their definition, a smart and pragmatic reply starts with, “I may do X, but not necessarily by that name.” After that,
  • You can offer to describe your notion of X (if you have one).

  • You can describe something that you do that could be interpreted as X. That can be risky, so offer this too: “Since I don’t know what you mean by X, here’s something that I do. I think it sounds similar to X, or could be interpreted as X. But I’d like to make sure that we both recognize that we could have different interpretations of what X means.”

  • You can say, “I’d like to avoid the possibility that we might be talking at cross-purposes. If you can describe what X means to you, I can tell you about my experiences doing similar things, if I’ve done them. What does X mean to you?” Upon hearing their definition of X, then truthfully describe your experience, or say that you haven’t done it.
If you searched online for an answer to the smoke vs. sanity question, you’d find dozens, hundreds of answers from dozens, hundreds of people. (Ironically, the very post that introduces the notion of the unicorn question includes, in the second-to-last paragraph, a description of a smoke test. Or a sanity test. Whatever.) The people who answer the smoke vs. sanity question don’t agree, and neither do their answers. Yet many, even most, of the people will seem very sure of their own answers. People will have their own firm ideas about how many angels can fit on the head of a pin, too. However, there is no “correct” definition for either term outside of a specific context, since there is no authority that is universally accepted. If someone claimed to be a universally accepted authority, I’d reject the claim, which would put an instant end to the claim of universal acceptance.
With the possible exception of the skill of memorization, there is no testing skill involved in memorizing someone’s term for something. Terms and their meanings are slippery, indistinct, controversial, and context-dependent. The real testing skill is in learning to deal with the risk of ambiguity and miscommunication, and the power of expressing ourselves in many ways.


Thursday, December 15, 2011

ScriptCover makes Javascript coverage analysis easy

By Ekaterina Kamenskaya, Software Engineer in Test, YouTube



Today we introduce the Javascript coverage analysis tool, ScriptCover. It is a Chrome extension that provides line-by-line Javascript code coverage statistics for web pages in real time without any user modifications required. The results are collected both when the page loads and as users interact with it. The tool reports details about total web page coverage and for each external/internal script, as well as annotated code sources with individually highlighted executed lines.


Short report in Chrome extension’s popup, detailing both overall scores and per-script coverage.


Main features:
  • Report current and previous total Javascript coverage percentages and total number of instrumented code instructions.
  • Report Javascript coverage per individual instruction for each internal and external script.
  • Display detailed reports with annotated Javascript source code.
  • Recalculate coverage statistics while loading the page and on user actions.


Sample of annotated source code from detailed report. First two columns are line number and number of times each instruction has been executed.

Here are the benefits of ScriptCover over other existing tools:
  • Per-instruction coverage for external and internal scripts: The tool reformats original external and internal Javascript code from ‘<script>’ tags to place one instruction per line, and then calculates and displays Javascript coverage statistics. This is useful even when the code is compressed to one line.

  • Dynamic: Users can get updated Javascript coverage statistics while the web page is loading and while interacting with the page.

  • Easy to use: Users with different levels of expertise can install and use the tool to analyse coverage. Additionally, there is no need to write tests, modify the web application’s code, save the inspected web page locally, manually change proxy settings, etc. When the extension is activated in a Chrome browser, users just navigate through web pages and get coverage statistics on the fly.

  • It’s free and open source!
         
Want to try it out? Install ScriptCover and let us know what you think.

We envision many potential features and improvements for ScriptCover. If you are passionate about code coverage, read our documentation and participate in the discussion group. Your contributions to the project’s design, code base and feature requests are welcome!


Wednesday, December 14, 2011

Unleash the QualityBots


Are you a website developer who wants to know if Chrome updates will break your website before they reach the stable release channel? Have you ever wished there was an easy way to compare how your website appears in all channels of Chrome? Now you can!

QualityBots is a new open source tool for web developers created by the Web Testing team at Google. It’s a comparison tool that examines web pages across different Chrome channels using pixel-based DOM analysis. As new versions of Chrome are pushed, QualityBots serves as an early warning system for breakages. Additionally, it helps developers quickly and easily understand how their pages appear across Chrome channels.



QualityBots is built on top of Google AppEngine for the frontend and Amazon EC2 for the backend workers that crawl the web pages. Using QualityBots requires an Amazon EC2 account to run the virtual machines that will crawl public web pages with different versions of Chrome. The tool provides a web frontend where users can log on and request URLs that they want to crawl, see the results from the latest run on a dashboard, and drill down to get detailed information about what elements on the page are causing the trouble.

Developers and testers can use these results to identify sites that need attention due to a high amount of change and to highlight the pages that can be safely ignored when they render identically across Chrome channels. This saves time and the need for tedious compatibility testing of sites when nothing has changed.



We hope that interested website developers will take a deeper look and even join the project at the QualityBots project page. Feedback is more than welcome at qualitybots-discuss@googlegroups.com.


Tuesday, December 13, 2011

The tester who tested his testers


There were two multimedia products under test; let's say Bino and Nino. A tester was testing both of them. Bino was started earlier than Nino and had reached a state where the subjective quality of the product was good enough. On reaching that good enough state, Bino was made the benchmark, and Nino's quality was tested against it. Bino was talked about all over the company for its quality.

Each time the tester tested a release of Nino, he would report that Bino was still better than Nino. Of course, multimedia quality is all about subjective views, but as testers we need to quantify why we rate something as poor.

For some releases, Nino was not as good as Bino, and hence the tester's view was acceptable. After a few such iterations, the manager felt Nino's quality had improved a lot, but the tester kept rating Bino as much better than Nino.

The manager performed a trick to find out "what is going wrong with the tester's decision when the quality seems to have improved?"

He labelled Bino as Nino and Nino as Bino and gave a release of Nino to the tester.

This time the tester gave a report saying "Bino is still better than Nino", which means the tester was biased towards Bino and had formed a preconceived notion that Bino was the benchmark.

Now, that was a great story about the "bias of a tester" and the risk of "bias" in a tester. The manager was none other than my current manager, Srinivasa.

Now, some of you might form an opinion that my manager was micromanaging. Actually, if you look closer, my manager helped the tester come out of his bias.

Now my manager has set an example for other test managers of how they can help their testers look into themselves. I am sure if your manager gets to read this, he would have something to learn.

This story reminds me of one of the lessons I learnt from James Bach - as a good tester, you should never say "I am sure", since what you say is what you have observed/conjectured/inferred, but the truth could be different.

As a tester, you do not know whether you are being tested by the product or the manager, and it is recommended to say "I conjecture/infer/it looks like Bino is still better than Nino".

Read more »

Rapid answers to rapid fire situations a tester faces

All the testers I have spoken to (in India) have these issues in common -

1. I have never got a release at the planned time, yet my manager expects me to do a good job.

2. When we are not given enough time and the customer finds a bug, the managers come running and question, "how did it slip?".

3. I am getting no time to improve my skills as a tester; there is so much work that I don't even get time to check my mail.

4. I quit my previous job thinking a new place would be better, but it looks to me like the new one is taking up a lot of my time in generating metrics/preparing reports/creating graphs rather than testing.

5. I am watching a script run for days together; in a few days I might feel the script has become smarter than me (since I am not doing anything other than watching it play).

The Test Managers I have come across have these issues -
1. I am unable to gain confidence in my testers' reports and I need to keep my fingers crossed for each release.

2. How can I measure the productivity of testers? I use - number of test cases executed per hour or number of bugs found for a release or number of bugs found by the customer to evaluate a tester.

The customers have these issues -

1. I want as many tests as possible to be automated; that gives me more confidence.

2. I am not happy about the testing that has been done by the company I have outsourced testing to.

There is a way you can deal with these situations, my dear testers, test managers and customers!

Play football (soccer)
Aren't you thinking "This guy is stupid?"

Well, you might not think that after going through this post carefully and completely.

When you are on the field as a member of a soccer team and the ball comes to you at a heated time when the opponent is two goals up, you are in a situation like the ones described above.

1. You start to think where you are standing in the field.
2. Where are your team members?
3. How many opponents are trying to attack you?
4. Whom should you pass the ball to?
5. How far is the goal?
6. Is your coach watching you?
7. Are your country's fans going to kill you if you don't help fetch a goal?
8. Will you be selected for the next match if you fail to give a good pass?
...
Diego Maradona, Pele, Ronaldo, Baichung Bhutia... they handle these situations well, and so they are the best. If you want to be good enough, you too need to be one among Maradona, Pele or Ronaldo.

Yes, I have started to like you as you got the hidden message saying, "A tester needs skill to handle and win these situations".

"Pradeep, is knowing definitions, getting certified and finding more bugs enough to handle these situations?"

Ah! You broke the ice with that question. Time to say "you might be wrong", and if you want me to show you the way, it's "Rapid Software Testing".

The complexity of products has grown into a monster in the past few years, and you still want to keep following an approach that was formulated long ago?

In some countries like India, we are sometimes (many times, actually) angry at the government for having companies/tax/sales Acts dating from 1952-1957 which do not suit 2006 and 2007, without even noticing that we ourselves haven't changed the age-old traditional testing process.

Wait a minute, let me grin and get back to writing. (He he, my pay slip has many more columns than the one mentioned in the Salary Act!)

The picture you see in this post is of James Bach's identity plate. What he means by it is "Testers light the way". The project or product has a lot of dark corners and areas, which testers light up as they go, helping the management take better decisions. (Note: it's not the process that lights the way; sometimes testers might even light up the darkness present in the process. Accept that, to improve your organization and product quality.)

It's high time for you to ask me, "Hey, get me onto Rapid Software Testing".

Here it goes - Rapid Software Testing Slides by James Bach and Michael Bolton.

Rapid Software Testing makes you skilled, if you are a tester. If you are a manager, it gives you ample information in a short time that helps you take better decisions on the product/release. If you are a customer, you would want this to happen in the company to which you have outsourced your testing work.

Remember, I talked about the football/soccer game to pass on another hidden message: Rapid Software Testing encourages skilled testing, encourages an entire team to be skilled, and tells you how to tap the skill of a team member to give better information about the quality of a product.

Yesterday, Manjunath, a tester from IBM Bangalore, met me, and he couldn't believe the demo of Rapid Software Testing I gave him. When he left, he had to say, "This is great! I am a CSTE certified tester and we never get to see all this, which really makes a tester skilled." He also said, "It looks to me that certification just shows that a person has an interest in testing". I was happy that he re-stated what certification means to him.

The reason why I had to put "It looks to me" in bold is because I passed on a couple of lessons that I learnt from James Bach and Michael Bolton to that tester.


Monday, December 12, 2011

Excuses for testers when bugs are caught in later testing cycles/UAT/Production

In my experience, I have seen that, unlike developers, testers are not very clever at defending themselves. One of the biggest pain-points for testers is that whenever a bug is caught in UAT or in later stages, the blame is unfairly put on the testing team. This further results in unfair yearly reviews and appraisals for testing teams.
I have seen that in many mid-size companies, less value is given to testing teams; a testing culture is lacking in those companies. Management should understand that the testing team is "also responsible" [not "only responsible"] when a bug is caught in production.
Testers should be proactive and should be able to deal with such situations. They should have excuses ready for when they are asked "WHY DID YOU MISS THIS BUG?". In this post, I am writing down some excuses so that testers can put themselves on the safer side whenever bugs are found in UAT/Production. Each excuse depends upon the situation, so use the excuses below carefully.
Excuse 1: The bug was missed because this scenario is not in the test cases. These test cases were reviewed and approved by the Business Analyst/Product Manager/XXXX person.
Excuse 2: The testing team already reported a similar bug, which is not fixed yet. That's why we got this new bug in UAT/Production. [Most common excuse.]
Excuse 3: The bug is occurring because of last-minute changes to the application by the development team. The project management team should come up with a strategy so that we can avoid last-minute changes.
Excuse 4: The bug was missed because this scenario/rule is not mentioned in the requirement document. [Most common excuse.]
Excuse 5: Testing was done in a hurry because enough time was not given to the testing team. The project management team should be proactive and make an effective plan.
Excuse 6: The bug was missed because we (the testers) did not test this functionality in later testing cycles. This functionality was not included in the testing scope.
Excuse 7: The bug was missed because we tested only the functionality in the list of impacted areas. Whenever a change is made to existing functionality, the Development Team/Development Lead/Manager should give the testing team a detailed list of impacted areas so that all impacted areas can be tested and bugs can be avoided in UAT/Production.
Excuse 8: This is the same bug which we got in our testing environment, but at that time it was inconsistent. We reported it once, but then both the dev and testing teams were not able to replicate it again.
Excuse 9: This bug might be occurring because developers were fixing bugs on the testing environment at the same time as testers were testing. The testing team cannot be blamed. The project management team should come up with a strategy so that we can avoid changes directly on the QA/Testing environment.
Excuse 10: Why is this a bug? This is working as designed. Please show us which section of the requirement/specification document states this rule. [Attn testers: make this excuse only when you are sure that there are discrepancies in the specification document.]
Excuse 11: This bug is occurring when user selects a specific value in the dropdown/test data. It is working fine with other values. Exhaustive testing is impossible.


The most challenging software testing quiz online


This quiz, created by me, challenges a tester to answer several common and uncommon questions. At the end, those who finish the quiz get to know what prize they win (which means there are a lot of prizes to be won).

Last time, I announced I would give away free books - Lessons Learned in Software Testing - and I did give them to those testers who proved to me that they deserved them by working on an exercise I gave them. This time I am keeping the prize a secret, because those who finish the quiz will get to know it when they see their performance and results. I have some questions that are easy, some that are moderate and some that only experts can crack - so whoever you are, you have a challenge.

Send me your score, because if your score is higher than that of the other testers who have taken this quiz, you have a chance to win the many hundreds of dollars that I plan to give as a bumper prize. So here is the link to The Most Challenging Software Testing Quiz (I could ever think of creating).

Good luck!

Update: 12th October, 1300 IST: The quiz has been updated with HTML formatting and looks much better to my eyes. Thanks to Adam Goucher for his suggestions. Three more very challenging questions have been added, so you might want to re-take it and check your score. If your score improves, update me so that you don't miss the grand prize.

Update: 12th October, 2345 IST: If you don't see your comment on this post, it is because I felt your comment contained some clue about the answers which might distract new people taking the quiz. In case you want to blog about this quiz, please make sure you don't give away the answers or let people know the answers or the experience, as they might lose the excitement and the opportunity to learn.


Sunday, December 11, 2011

Truth about test plan document & test case document



Truth About Test Plan Documents

98% of test plan documents that are created are not updated, maintained or cared about beyond sign-off.
The first 5 pages of a test plan document contain history that doesn't interest even those whose names are mentioned in it.
The "scope of testing" section is the funniest part of the whole document. Sometimes when testers report serious problems, some people cite them as out of scope. Hey, it's still in the product.
Customers worldwide could have saved millions of dollars if their vendors didn't care about creating test plan documents.
4 years into a project, nobody knows about the test plan document.
No matter how stupid or intelligent the test plan document is, testers still write the test cases they want to write.
For an offshore services company, writing test plan documents is a cool way of billing customers without actually doing testing.
Every stakeholder feels a false sense of achievement the moment they have a test plan document irrespective of whether they actually have a test plan.
The cost of reviewing a document that nobody is going to use is really high.
Those who think they are not ready to test because the test plan document is not ready aren't testers by any means.
A simple, maintainable test plan document is far superior to a detailed test plan document.
A mind map is worth a thousand good test plan documents.
Test Plan Document is a Document, not necessarily a test plan.
The next time you ask a tester to write a test plan where she knows it is not going to be used or maintained, she is not going to put her heart in it.
Some test plan documents are written in a way that makes them obsolete in their very first draft.
Some reviewers of test plan documents aim for perfection. Funnier still, they may not even know what the product is supposed to do.
Those who know about opportunity cost are likely to write a better test plan document.
Not that you don't know.


Truth About Test Case Documents


90% of testers haven't bothered to think about why there is a "case" in "test case".
For most people on earth, a test case means a test idea that is documented.
The expected results column in test case documents is a copy-paste of the requirements document / stories. So much money goes into re-writing the requirements document into the expected results column.
If you are already laughing at test case documentation, you may roar with bigger laughter at the traceability matrix.
Most traditional testing services projects have 50% of their project duration spent on writing test cases. The team members in such projects complain about not having time to "actually" test the product. No wonder.
Unless the context demands, detailing a test case is a sin.
Detailed steps in test case documentation provided for humans to execute is something I personally consider as an act against humanity.
More than 99% (yeah, more than 99%) of testers I have met have passed a test case (or a bunch of them) without actually executing it. It is so f****** boring.
Test case documents bring more money to countries like India than what Bill Gates must have invested in setting up an office in that country.
Those testers who don't know how to test without test case documentation aren't testers.
More than 98% of projects I have consulted on in India didn't have testers doing "test design". Here is a way: take the requirement and write at least two or three tests to "check" if that requirement can be marked a Pass. That is all the design that happens.
Test case documents are actually "Check case" documents.
If there weren't check case documents, software testing as an industry would have attracted more talent and built more passion for the craft.
Businessmen love test case documentation. Testers hate it. Businessmen hire testers to write documentation. Testers trade their time for money, end up writing documentation for money.
Test case pass percentage is a great way to fool stakeholders. People love to be fooled.
I personally can write test case documentation for any buggy product and make it look like a bug free product. 
If all test case documents created so far were printed and burnt, we'd have fire for the next one thousand years.
If you rate testers based on how many test cases they write per day, you'd always find people who can meet the number you want them to achieve.
As someone said, "Testing at its best is, sampling". If you start writing and detailing the samples, you will have fewer samples than what you can have and you will never get to know about the product.
If documenting X test cases takes Y hours, the time spent on reviewing it and getting sign-off is 10Y. So, if X grows to 10X, we have 100Y hours of work spent on reviewing test case documentation alone.
Some projects have great test case documents and no time to run them all.
If you do a lot of documentation, you can't ship software, but you can definitely ship documents.
If you are hiring people who need detailed test scripts to test software, your hiring has a ton of bugs in it.
Those business people who ask testers to write "how long will this test case take to execute" and make estimations of test cycle complete time, should be executed.
It is about Opportunity cost and Opportunity or Cost.
No user has ever bought a product because the product was developed with lots of test case documentation.
99.999999999% of test case documentation I have seen so far doesn't care what the users really want.
If testers read 1,000,000 words in a test case document the first time they execute it, they read only 10,000 the next time, 1,000 the time after that, and 100 the next. Later, they don't need it at all.
Some people think test plan document = test case document.
The service most companies sell is test documentation, not testing.
All the good testers I have met so far treat other testers as being as intelligent as they are, and don't punish humans with detailed test scripts.
Test cases don't assure repeatability of testing; at best they assure repeatability of testers getting bored.
Funny that the expected result of a test case should ideally be "Software should go kaboom", BUT it is mostly "We should see a boat sailing smoothly, as the day is bright and clear and the waters are not turbulent".
Just that, I know.
