Saturday, December 10, 2011

What is Agile testing?

Agile testing is a software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective. In other words, the emphasis is shifted from "testers as quality police" to something more like "entire project team working toward demonstrable quality."

Agile testing means testing from the customer perspective as early as possible: testing early and often, as code becomes available and stable enough from module- and unit-level testing.

Since working increments of the software are released often in agile software development, there is also a need to test often. This is commonly done by using automated acceptance testing to minimize the amount of manual labour involved. Doing only manual testing in agile development may result in either buggy software or slipping schedules because it may not be possible to test the entire build manually before each release.
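Concretely, an automated acceptance check can be as small as the sketch below. Here checkout() is a hypothetical stand-in for a real feature, and pytest conventions are assumed; the point is only that checks like these can run against every build without manual effort.

# A minimal sketch of automated acceptance tests (pytest style).
# checkout() is a hypothetical stand-in for the feature under test.
def checkout(cart, coupon=None):
    total = sum(cart)
    return total * 0.9 if coupon == "SAVE10" else total

def test_checkout_totals_the_cart():
    assert checkout([10, 20]) == 30

def test_coupon_gives_ten_percent_off():
    assert checkout([100], coupon="SAVE10") == 90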

HP Quality Center Agile Accelerator

The HP Quality Center Agile Accelerator is designed to help projects manage Agile development using HP Quality Center 10.0. It can be imported into HP Quality Center 10.0 as a base project to manage both development and testing efforts within the same HP Quality Center project. It comes with pre-built Agile user roles and related privileges, plus pre-defined Agile process workflows, configurations, and rules to help you manage projects driven by Agile methodology. It also facilitates Agile reporting, allowing you to track progress with burn-down, burn-up, and velocity charts.

Agile Accelerator Benefits

  • Supports multiple Agile practices, including Scrum/XP concepts such as sprints, backlogs, and user stories
  • Reduces calculation effort for tasks, estimation, planning, and spent hours
  • Improves information visibility across all user groups, such as product managers, project engineers, and Scrum Masters
  • Encompasses the full project lifecycle, from planning through development and testing to delivery, ensuring the application meets promised requirements
  • Reports project progress and delivered value through Burn-up charts and Burn-down charts


Friday, December 9, 2011

How Many Verifications Per Test?

Whether writing manual or automated tests you may have asked yourself how much stuff you should include in each test. Sometimes you may write tests with multiple steps that look like this…

Test #1
Step 1 - Do A. Expect B.
Step 2 - Do C. Expect D.
Step 3 - Do E. Expect F.

Or instead, you may write three separate one step tests…

Test #2
Step 1 - Do A. Expect B.

Test #3
Step 1 - Do C. Expect D.

Test #4
Step 1 - Do E. Expect F.

Finally, you may even do this…

Test #5
Step 1 - Do A. Do C. Do E. Expect F.
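To see the three structures side by side, here is a minimal sketch in Python (pytest style); do_a, do_c, do_e and the expected values are hypothetical stand-ins for whatever your product actually does.

# Hypothetical actions; each returns something we can check.
def do_a(): return "B"
def do_c(): return "D"
def do_e(): return "F"

# Test #1: one test, a verification after every step.
def test_1_verify_every_step():
    assert do_a() == "B"   # Step 1
    assert do_c() == "D"   # Step 2
    assert do_e() == "F"   # Step 3

# Tests #2-#4: three separate one-step tests.
def test_2(): assert do_a() == "B"
def test_3(): assert do_c() == "D"
def test_4(): assert do_e() == "F"

# Test #5: perform all the steps, verify only the final result.
def test_5_verify_only_the_end():
    do_a()
    do_c()
    assert do_e() == "F"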


How often do we apply blink testing?

I apply blink testing any time I can arrange to be confronted with a blizzard of data: comparing screens (I glance at a million pixels, then at another million pixels, and in an instant I see the tiny difference between them), scrolling through huge log files, or watching an extremely rapid process take place. Anything that seems overwhelming to take in triggers me to consider a blink test. By the way, when I say “blink test” I’m talking mostly about a blink oracle.

When presented with an application to test, what triggers (if any) lead you to choose model-based testing? Or do you always think through state transitions?

Everyone, always, is doing model-based testing if that term means “testing according to a model.” The only possible exception to that would be testing by accident. As soon as you test purposefully, that means you already have a model in mind.

Automatically generating tests according to a specified model is the narrower definition of model-based testing that people like Google’s Harry Robinson prefer. What triggers that for me is whenever I want to meticulously cover a test space that I can conveniently describe with a tractable handful of variables (or if I see a regular and simple notation to describe those tests, regardless of the number of variables). I then write a little program in Perl to generate the test ideas. I may also try to automate those tests. For instance, I wrote a program to generate tests for the example I used in my talk. It produced 152 state transition cases, each consisting of a start state, three actions, and an expected end state. Like this:

TRANSITION SEQUENCES:
1 launching -> (finished launching) stop start -> running
2 launching -> (finished launching) stop reset -> resetted
3 launching -> (finished launching) stop stop -> stopped
4 launching -> (finished launching) reset start -> running
5 launching -> (finished launching) reset reset -> resetted
.....
148 stopped -> stop reset start -> running
149 stopped -> stop reset reset -> resetted
150 stopped -> stop stop start -> running
151 stopped -> stop stop reset -> resetted
152 stopped -> stop stop stop -> stopped
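For readers who want the flavor of such a generator, here is a simplified sketch (in Python rather than Perl, and not my actual program). The states and the transition rule are reconstructed from the listing above, where the last action appears to determine the end state; since the real model includes details glossed over here, such as the finished-launching event, this will not reproduce the exact 152 cases.

# Sketch: enumerate every start state and every three-action sequence.
from itertools import product

ACTIONS = {"start": "running", "reset": "resetted", "stop": "stopped"}
STATES = ["running", "resetted", "stopped"]

cases = []
for start in STATES:
    for seq in product(ACTIONS, repeat=3):
        end = ACTIONS[seq[-1]]   # assumed rule: the last action forces the end state
        cases.append((start, seq, end))

for i, (start, seq, end) in enumerate(cases, 1):
    print(i, start, "->", " ".join(seq), "->", end)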


How do you suggest that you apply this technique with a complex application, something like MS Word for instance.


I’m not sure what technique you are referring to. If you are talking about using state models to describe a system, then it’s interesting that you ask that, because I asked Harry Robinson that same question (it was even about applying it to Word) after seeing an otherwise fascinating talk about state-based testing that he gave, years ago. As I recall, he wasn’t ready to answer that question. But I am. My answer is: ask Harry. Seriously, he has actually worked with model-based testing tools at Microsoft, in the years since I challenged him on this point.

I have another answer, too. I don’t apply automated model-based testing to entire applications; I apply that method opportunistically. So, if I were testing Word, I would be looking for features of Word that seemed especially like tractable state machines, or that were tractable (meaning not too many variables and complications) to test via some other kind of model. Otherwise, what I am always doing is developing models in my mind (we call it learning) and using those models to test any given product, no matter how complex it is.

Have you come across any tools for automated Model based testing?

I use Perl. I’m sure there are other kinds of tools, but I haven’t used them.

What’s too much modeling?

Too much formal modeling is when you give it more time than it is worth, or when many other interesting things don’t get done because you are obsessed with the formalisms and the cool tools. Too much attention to one kind of model will starve attention for other kinds of models that also have testing value.

Technology Notes

The technology of Webinars is still a bit immature. I used GoToWebinar, and I think the price/value arrangement is pretty good. WebEx has better features, but it’s much more expensive.

I had to wear two headsets, one to record my voice and talk to the CAMUG auditorium via Skype, the other to talk on the webinar conference line. The CAMUG people (in an auditorium) and the online people were not able to hear each other. I wasn’t able to hear the online people, but they could type questions to me. I wish the audio was all somehow integrated online. I would have wanted to record the CAMUG audio, too.

Animations happening on my screen were not smoothly displayed to the audience, but at least they were displayed.


Thursday, December 8, 2011

Philosophers of Testing

Do you know what a philosopher is? I think a philosopher is someone who develops philosophy, as opposed to someone who accepts philosophy strictly ready-made from a trusted authority. By philosophy I mean an account of what the world is (Ontology) or how I can know about the world (Epistemology) or what matters about the world (Axiology).
I’m a philosopher. Yes, I’m also other things. I call myself a tester on my immigration paperwork. But a good tester is just a particular kind of philosopher, it seems to me. If you read any of my stuff on a regular basis, you are probably also a philosopher. Otherwise, how could you stand it?
I practice philosophy because I want to understand my status and my worth in this cosmos, and I don’t trust the obvious or traditional answers. I practice testing because my clients want to understand the status and worth of their products, and they don’t trust the obvious or traditional answers. See the connection?
In these pursuits, it is easy to be fooled. Self-deception is particularly common. Rene Descartes once worried that a mischievous demon might be systematically fooling him by clouding and manipulating his senses. I, for my part, once reported to a programmer that his program had frozen my system, whereupon the programmer pointed out that I was looking at a screenshot, not a live program. See the connection?
Philosophy doesn’t find bugs for me, but it improves my ability to search for them. I have more patience for the search because philosophy has taught me tolerance for ambiguity, an appreciation for complexity, and a mistrust of appearances. Jerry Weinberg once told me “A tester is someone who knows that things can be different.”
Philosophy doesn’t read specifications, but to study philosophy involves a lot of reading and criticizing of obscure texts. I have to work through the logic of arguments and notice fallacies whether I’m finding the flaws in one of Ayn Rand’s rants against skepticism, or puzzling through a state model for a timing application.
Philosophy doesn’t evaluate or report my bugs, but it does make my evaluations and reports better. This is because a big part of philosophy is rhetoric: the art of persuasion, including real-time reasoning under pressure, for an audience.
I do hope you see the connection. Few people do, but those are pretty much the few people I find talented and fascinating in this industry. So, I guess it works out.
Everyone is a Philosopher in Context-Driven Testing

As I periodically remind my readers and clients, I am a context-driven tester. That requires me to examine the relationship between my practices and the context in which I should use those practices. I don’t know how anyone could be truly context-driven without also being comfortable with philosophy.
Today, while arguing on the software-testing forum at Yahoogroups, I thought of making a list of the philosophers who strike me as the patron thinkers of the context-driven way. I invite you to suggest your own favorites. Here’s my list:
  • Protagoras, the original humanist. Protagoras understood that arguments can be constructed for any purpose, and that only humans construct them. It was Protagoras who said “man is the measure of all things.”
  • Socrates, the original tester. He describes himself in Theaetetus like so: “The triumph of my art is in thoroughly examining whether the thought which the mind of the young man brings forth is a false idol or a noble and true birth. And like the mid-wives, I am barren, and the reproach which is often made against me, that I ask questions of others and have not the wit to answer them myself, is very just - the reason is, that the god compels me to be a midwife, but does not allow me to bring forth. And therefore I am not myself at all wise, nor have I anything to show which is the invention or birth of my own soul, but those who converse with me profit.”
  • Pyrrho, the original skeptic. A Pyrrhonian skeptic is a person who believes that, since we cannot be certain of anything, inquiry must continue in all things and all respects.
  • Miyamoto Musashi, the context-driven warrior. In his Book of Five Rings, Musashi complains about other fighting schools just as I complain about testing schools. He complains about attachments to particular weapons and strategies. “In my doctrine,” he says, “I dislike preconceived, narrow spirit.” Musashi advises: “You should not have a favourite weapon. To become over-familiar with one weapon is as much a fault as not knowing it sufficiently well. You should not copy others, but use weapons which you can handle properly.”
  • David Hume, the great skeptic. He struck the first great blows against conventional reasoning and unexamined assumptions of the then-brand-new idea of modern science.
  • C.S. Peirce, the pragmatist. Peirce is one of the founders of semiotics, which is the study of signs, symbols, signals (the testing of user interfaces benefits from that study). He questioned scientific method and coined the term “abductive inference” to describe reasoning to the best explanation for the circumstances.
  • Karl Popper, the fallibilist. Popper finished a lot of what Hume started, demonstrating a critical method for the advancement of knowledge that embraced fallibility, criticism, and problem-solving as its pillars.
  • George Polya, the modern father of heuristics. Polya wrote extensively about plausible reasoning processes for solving mathematical and engineering problems. Polya influenced computer science and also another philosopher of science, Imre Lakatos, who is famous for showing how scientific theories and terminology evolve through an often messy dialectical and heuristic process.
  • Thomas Kuhn, the father of paradigms. Kuhn argued that social factors often outweigh rational factors in guiding the development of ideas.
  • Paul Feyerabend, the philosophical iconoclast. He wrote Against Method, and its sequel Science in a Free Society. The first sentence of Against Method is (attempting to quote from memory) “Anarchy, while perhaps not a good political philosophy, is nonetheless excellent medicine for Science.” Feyerabend was arguing against “best practices” in Science. I read Feyerabend when I was 17. His zeal for questioning things that most thinkers think should not be questioned has deeply influenced my career.
  • Joseph Campbell, the syncretist. Campbell applied general systems thinking to the religions and myths of the world, drawing out commonalities and differences. Campbell’s book The Hero with a Thousand Faces helped me begin to understand how to learn from cultures that I do not belong to.
  • Richard Feynman, the practical iconoclast. Feynman’s life and work embodies the restless curiosity of a great tester.
  • Virginia Satir, the mother of family therapy. Virginia Satir’s idea of treating a family as a system strongly influenced Jerry Weinberg, who applied and expanded her ideas into a comprehensive approach to technical problem-solving.
  • Herbert Simon, the “good enough” guy. Winner of the Nobel Prize in Economics for his work on bounded rationality and heuristic reasoning in organizations. His book, The Sciences of the Artificial, is the foundation for a lot of my ideas on heuristic process improvement.
  • Richard Bach, the individualist. Richard Bach wrote Jonathan Livingston Seagull, a novel about someone trying to do something to perfection, and who formed his own community to pursue that dream. Richard Bach is my father, and I was raised on his philosophy that each individual must “find his or her true family” instead of going along with the crowd just because it pleases the crowd.
Richard was influenced by Ayn Rand, but in general he has little regard for the ideas of other philosophers. He believes our happiness requires that we each be our own philosopher.
That’s how I became a philosopher: My father believes that I must think for myself, and I always agree with my father.


Wednesday, December 7, 2011

Question: How do you stay sharp as a tester?

Shrini writes: “How does a good tester keep his testing abilities sharpened at all times? Compare it with keeping our bodies fit as we grow old (walking, jogging, going to the gym, eating healthy food, etc.). What do you suggest for keeping ‘tester health’ in ‘fit and sound’ condition?”
Testing is analysis and problem solving. Here is what I did, this past week:
  • I solved about 50 problems from the book “Lateral Logic Puzzles” with my son.
  • Paul Jorgensen sent me an exploratory testing challenge, in the form of a spreadsheet with a bug in it. I investigated the bug and wrote a play-by-play description of what I did.
  • I wrote a Perl script to generate some experimental tests.
  • I practiced Sudoku with my Nintendo DS Sudoku game.
  • I analytically solved a conditional probability problem (the taxicab problem) that is often associated with the representativeness bias; a worked note on it follows this list. This was part of working out a testing exercise based on that bias. (Then I tried the new exercise with Michael Bolton.)
  • I read some of a testing book from 1986 that Mike Kelly lent me. I’m trying to characterize the difference between “modern” testing ideas and those from 20 years ago.
  • This morning, I derived the formula for calculating the distance to the horizon based on eye level (also worked through after this list). It’s been a long time since I did trigonometry, but it was fun rediscovering sines and cosines.
  • I listened to a few hours of lectures from the Teaching Company about Neo-platonism and other philosophical trends of the dark and middle ages.
  • I skimmed several articles, including Knowledge And Software Engineering: A Methodological Framework To Symbiotic Software Process Modeling, and Blooming E-learning: Adapting Bloom’s Taxonomy into the content of e-learning course to promote life long learning through Metacognition, and Third Cybernetic Revolution: Beyond Open to Dialogic System Theories.
    It may not seem like it from the titles, but they have a lot to do with analyzing testing practices and becoming a better tester.
  • I received Pradeep Soundararajan’s startlingly incisive answer to the Wine Glass factoring exercise I gave him (“Describe all the dimensions of a wine glass that may be relevant to testing it.”), which helped me see more angles and subtleties to my question. Then I transpected with Michael Bolton as he worked through the same problem.
  • I worked on answers to testing questions submitted by my readers.
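For the record, here is the arithmetic behind two of those items; these are the standard treatments, included for reference. In the classic taxicab problem, 85% of a city’s cabs are Green and 15% are Blue, and a witness who is 80% reliable reports that a Blue cab was involved in an accident. Bayes’ rule gives P(Blue | “Blue” reported) = (0.15 × 0.8) / (0.15 × 0.8 + 0.85 × 0.2) = 0.12 / 0.29 ≈ 0.41, so despite the testimony the cab is still more likely Green, though intuition driven by representativeness insists otherwise. For the horizon: a sight line from eye height h is tangent to a sphere of radius R where, by the Pythagorean theorem, (R + h)² = R² + d², so d = √(2Rh + h²) ≈ √(2Rh). With R ≈ 6,371 km and h = 2 m, that gives d ≈ √(2 × 6,371,000 × 2) m ≈ 5 km.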
As you see, I stay sharp in testing by finding and solving problems, including testing problems; and reading or listening to philosophical ideas that I use to understand testing better; and by trying to help other testers learn, or by watching them learn; and by actually testing.
I’m not in a project, at the moment, for a paying client. If I were, I would be staying sharp by solving problems for my client. I do my best to find excuses to learn new things while working for pay.
When I worked at Apple Computer, I often stole away to the Donut Wheel, across the street, to read about software engineering. When I worked at Borland, I stayed late and worked on test methodology documents and articles. At SmartPatents, I learned Perl and formed my first thoughts about agile test automation.


Tuesday, December 6, 2011

Exploratory Testing Research

Good research on testing is hard to find. Why? One reason is that testing does not belong to the field of Computer Science. I mean, sure, some of it does. There is some value to describing and testing an algorithm to efficiently cover a directed graph. But covering directed graphs is not my problem, most of the time. Most of the time, my problem is how to work with other people to simplify a complex world. Most of the time, the testing problem is an exploration and modeling problem within a socially distributed cognitive system. Whew! Whatever that is, it ain’t Computer Science.
Therefore, I am delighted to present two excellent examples of rigorous scientific research into exploratory testing– both of them coming from the field of Cognitive Science.
  1. Jerry Weinberg’s 1965 doctoral thesis. Here, Jerry runs an experiment to determine the strategies people use when trying to comprehend a pattern of behavior in a system. In this case, the system is a set of symbols that keep changing, and the task is to predict the symbols that will come next. By observing the pattern of predictions made by his test subjects, Jerry is able to draw inferences about the evolution of their mental models of the system. The upshot is this: to some extent, it is possible to see how testers think while they are thinking. I use this principle to evaluate testers and coach them to think better.
  2. Collaborative Discovery in a Scientific Domain. This paper by Takeshi Okada and Herbert Simon is fantastic! They study how pairs of scientists, working together, design and conduct experiments to discover a scientific principle. This is EXACTLY the same thought process used by testers to investigate the behavior of systems. Notice how Okada and Simon collect information about the thought processes of their subjects. It’s very much like Weinberg’s approach, and it shows again that it is possible to draw solid inferences and make interesting distinctions about the thought processes of testers. This is important stuff, because we need to make the case that exploratory testing is a rich activity that can be observed, evaluated, and also systematically taught and improved. These two papers deal with the observation and evaluation part, but I think they also suggest ways to teach and improve.


Stress Test Demonstration

I’m experimenting with the use of BBTestAssistant to create little testing lessons. I’m starting to like BBTestAssistant quite a lot for recording my exploratory testing sessions.
Here is a six-minute demonstration of one kind of stress testing that I call “instant stress testing.” This is one of the quick test heuristics I have discussed in other blog entries.
The basic idea is to look for a function in a product that loops based on some input, then give it input that will cause the loop to go on and on and on. Essentially, you are taking advantage of the product’s ability to automate itself into oblivion.
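Here is a toy sketch of the idea in Python; expand() is a hypothetical stand-in for any product feature that loops based on its input, in this case a naive macro expander.

# "Instant stress testing" sketch: find a loop driven by input,
# then hand it input that keeps the loop going on and on.
def expand(text, macros, limit=10_000):
    passes = 0
    while any(name in text for name in macros):
        for name, body in macros.items():
            text = text.replace(name, body)
        passes += 1
        if passes > limit:
            # A robust product needs a guard like this; many don't have one.
            raise RuntimeError("runaway expansion detected")
    return text

try:
    expand("X", {"X": "X"})   # a macro whose expansion still contains itself
except RuntimeError as e:
    print(e)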


Monday, December 5, 2011

Quick Oracle: Blink Testing

Background:
  1. In testing, an “oracle” is a principle or mechanism by which we recognize a problem. This contrasts with “coverage”, which has to do with getting a problem to occur. All tests cover a product in some way. All tests must include an oracle of some kind or else you would call it a tour rather than a test. (You might also call it a test idea, test activity, or a test case fragment, but not a complete test.)
  2. A book called Blink: The Power of Thinking Without Thinking has recently been published on the subject of snap decisions. I took one look at it, flipped quickly through it, and got the point. Since the book is about making decisions based on little information, I can’t believe the author, Malcolm Gladwell, seriously expected me to sit down and read every word.
“Blink testing” represents an oracle heuristic I find quite helpful, quite often. (I used to call it “grokking”, but Michael Bolton convinced me that blink is better. The instant he suggested the name change, I felt he was right.)
What you do in blink testing is plunge yourself into an ocean of data: far too much data to comprehend. And then you comprehend it. Don’t know how to do that? Yes you do. But you may not realize that you know how.
You can do it. I can prove this to you in less than one minute. You will get “blink” in a wink.
Imagine an application that adds two numbers together. Imagine that it has two fields, one for each number, and it has a button that selects random numbers to be added. The numbers chosen are in the range -99 to 99.
Watch this application in action by looking at this movie (an interactive EXE packaged in a ZIP file) and ask yourself if you see any bugs. Once you think you have it, consider:
  • How many test cases do you think that was?
  • Did it seem like a lot of data to process?
  • How did you detect the problem(s)?
  • Isn’t it great to have a brain that notices patterns automatically?
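If you can’t run the movie, you can improvise a similar drill for yourself. This little Python sketch (the bug rate, ranges, and error sizes are arbitrary assumptions) streams random sums past your eyes with a few deliberately wrong answers planted among them:

# Blink-test drill: scroll this output fast and see how many
# planted errors your eye catches without reading each line.
import random

for _ in range(200):
    a = random.randint(-99, 99)
    b = random.randint(-99, 99)
    total = a + b
    if random.random() < 0.03:                  # plant occasional wrong answers
        total += random.choice([-10, -1, 1, 10])
    print(f"{a} + {b} = {total}")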
There are many examples of blink testing, including:
  • Page through a long file super rapidly: holding your thumb on the Page Down key, notice the pattern of blurry text on the screen, and look for strange variations in that pattern.
  • Take a 60,000-line log file, paste it into Excel, and set the zoom level to 8%. Scroll down and notice the pattern of line lengths. You can also use conditional formatting in Excel to turn lines red if they meet certain criteria, then notice the pattern of red flecks in the gray lines of text as you scroll. (A script can render the same silhouette; see the sketch after this list.)
  • Flip back and forth rapidly between two similar bitmaps. What catches your eye? Astronomers once did this routinely to detect comets.
  • Take a five hundred page printout (it could be technical documentation, database records, or anything) and flip quickly through it. Ask yourself what draws your attention most about it. Ask yourself to identify three interesting patterns in it.
  • Convert a huge mass of data to sound in some way. Listen for unusual patterns amidst the noise.
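Here is a minimal Python sketch of that log-silhouette idea; the file name and the “ERROR” criterion are placeholders for whatever your logs contain.

# Compress a huge log into a line-length silhouette you can scan in seconds.
with open("app.log") as f:
    for i, line in enumerate(f, 1):
        bar = "#" * min(len(line) // 10, 60)    # one tick per ~10 characters, capped
        flag = "!" if "ERROR" in line else " "  # mark lines meeting a criterion
        print(f"{i:6d} {flag} {bar}")

Scroll the output quickly: your eye will catch breaks in the rhythm of bar lengths, and the flagged rows, long before you could read the lines themselves.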
All of these involve pattern recognition on a grand scale. Our brains love to do this; our brains are designed to do this. Yes, you will miss some things; no, you shouldn’t care that you are missing some things. This is just one technique, and you use other techniques to find those other problems. We already have test techniques that focus on the trees; it also helps to look at the forest.


To Repeat Tests or Not to Repeat

One of the serious social diseases of the testing craft is the obsession with repetition. Is that test repeatable? Is that test process repeatable? Have we repeated those tests? These questions are often asked in a tone of worry or accusation, sometimes accompanied by rhetorical quips about the importance of a disciplined process, without explanation of how discipline requires repetition.
(Before you go on, I urge you to carefully re-read the previous paragraph, and notice that I used the word obsession. I am not arguing with repeatability, as such. Just as one can argue against an addiction to food without being against eating, what I’m trying to do is wipe out obsession. Please help me.)
There is one really good reason not to repeat a test: the value of a new test is greater than the value of an old test (all other things being equal). It’s greater because a new test can find problems that have always been in the product but have not yet been found, while an old test has a non-zero likelihood of revealing the same old thing it revealed the last time you performed it. New tests always provide new information. Old tests sometimes do.
This one powerful reason to run new tests is based on the idea that testing is a sampling process, and that running a single test, whatever the test, is to collect a tiny sample of behavior from a very large population of potential behaviors. More tests means a bigger sample. Re-running tests belabors the same sample, over and over.
Test repetition is often justified based on arguments that sound like blatant discrimination against the unborn test, as if manifested tests have some kind of special citizenship denied to mere potential tests. One reason for this bias may be a lack of appreciation for the vastness of testing possibilities. If you believe that your tests already comprise all the tests that matter, you won’t have much urgency about making new ones.
Another reason may be an inappropriate analogy to scientific experiments. We were all told in 5th grade science class about the importance of the controlled, repeatable experiment to the proper conduct of science. But what we weren’t told is that a huge amount of less controlled and less easily repeated exploratory work precedes the typical controlled experiment. Otherwise, an amazing amount of time would be wasted on well controlled, but uninteresting, experiments. Science embraces exploratory as well as confirmatory research.
One thought experiment I find useful is to take the arguments for repetition to their logical extreme and suppose that we have just one and only one test for a complex product. We run that test again and again. The absurdity of that image helps me see reasons to run more tests. No complex product with a high quality standard can be considered well tested unless a wide variety of tests have been performed against it.
(You probably have noticed that it’s important to consider what I mean by “test” and “run that test again and again”. Depending on how you think of it, it may well be that one test would be enough, but then it would have to be an extremely complex test, or one that incorporates within itself an extreme amount of variation.)

The Product is a Minefield
In order to replace obsession with informed choice, we need a way to consider a situation and decide if repetition is warranted, and how much repetition. I have found that the analogy of a minefield helps me work through those considerations.
The minefield is an evocative analogy that expresses the sampling argument: if you want to avoid stepping on a mine, walk in the footsteps of the last successful person to traverse the minefield. Repetition avoids finding a mine by limiting new contact between your feet and the ground. By the same principle, variation will increase the possibility of finding a mine.
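You can watch the sampling argument play out in a toy simulation like the one below; the grid size, mine count, and path length are arbitrary assumptions.

# Toy minefield: a repeated path touches the same cells forever,
# while varied paths keep sampling new ground.
import random

random.seed(1)
CELLS, MINES, STEPS, RUNS = 1000, 20, 50, 100
mines = set(random.sample(range(CELLS), MINES))

fixed_path = random.sample(range(CELLS), STEPS)
found_fixed = set(fixed_path) & mines           # every repetition finds the same ones

found_varied = set()
for _ in range(RUNS):
    found_varied |= set(random.sample(range(CELLS), STEPS)) & mines

print(len(found_fixed), "mines found by repeating one path", RUNS, "times")
print(len(found_varied), "mines found by", RUNS, "varied paths")

With numbers like these, the repeated path never finds more than its original handful of mines, while the varied paths eventually find nearly all of them.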
I like this analogy because it is a meaningful and valid argument that also has important flaws that help us argue in favor of repetition. The analogy helps us explore both sides of the issue.
In my classes, I make the minefield argument and then challenge students to find problems in it. Each problem is then essentially a reason why, in a certain context, repetition might be better than variation.
I won’t force you to go through that exercise, although before you click on the link below, you may want to think it through for yourself.
I know of nine interestingly distinct reasons to repeat tests. How many can you think of?


The Simplicity of Complexity

One of the experiences I share with a lot of people in this modern world is that I forget phone numbers. I never used to. The problem is that my mobile phone remembers them for me. So, phone numbers no longer stick in my own head. If I want to call a colleague, I first look for my phone. If I can’t find my phone, I don’t make the call.
Another way of looking at this is that my life has been simplified in some ways by my mobile phone, and in some ways it has been made more complicated. I would argue that it was simpler for me when I was forced to memorize phone numbers. It was simpler in that my use of many useful phone numbers was completely independent of external equipment.
Any tool that helps me also costs something. Any tool, agent, or organization that abstracts away a detail may also take away a resource that I might sometimes need, one that may atrophy if not used on a regular basis.

Test Tools Come at a Cost, Even if They Are Free
This weekend, I attended the 5th Austin Workshop on Test Automation. This is a group of test automation people who are sharing information about test tools, specifically open source test tools. It’s wonderful. I’m learning a lot about free stuff that might help me.
But I notice a pattern that concerns me: an apparent assumption by some of my helpful tool developer friends that a tool of theirs that handles something for me (so that I don’t have to do it myself) is obviously better than not having that tool.
So, let’s consider what is offered when someone offers me a tool that solves a problem that crops up in the course of doing a task:
  • Some capability I may not already have.
  • Possibly a new set of abstractions that help me think better about my task.
  • Possibly a higher standard of “good enough” quality in my task that I can attain because of the new capability and abstractions.
But what is also offered is this:
  • Problems in that tool.
  • New problems due to how the tool changes my task.
  • New problems due to how the tool interacts with my technological or social environment.
  • One more thing to install on all the platforms I use.
  • The necessity of paying the operating costs of the tool to the extent I choose to use it.
  • The necessity of investing time to learn the tool if I choose to use it (and to keep up with that learning).
  • The necessity of investing effort in using the tool (creating tool specific artifacts, for instance) that might not pay off as well as an alternative.
  • Having invested effort, the possibility of losing that investment when the tool becomes obsolete.
  • Avoidance of the learning and mastery of details that I might get by solving the problem myself.
  • A relationship with one more thing over which I have limited influence; and a potentially weaker relationship with something else that I know today.
  • Possible dependence on the owner of the tool to keep it current.
  • Possible legal entanglements from using the tool.
  • A sense of obligation to the provider of the tool.
I find it useful to explore tools. I want to learn enough to hold in my mind a useful index of possible solutions. And of course, I use many test tools, small and large. But I’m wary of claims that a new tool will make my life simpler. I appreciate a certain simplicity in the complexity of my world.
