To be a Good Tester you must think like a Scientist

It's funny how often this particular analogy keeps coming up: the comparison between Testers and Scientists, and the similarity between testing and the Scientific Method.

Most recently it occurred to me while reading one of my son's chapter books. FYI: "chapter" books are short books broken into chapters for kids just beginning to read on their own, usually around ages 7-8. These aren't Harry Potter books; most of them barely reach 80 pages.

The book that caught my attention is called "Jigsaw Jones #9: The Case of the Stinky Science Project" by James Preller. The main characters are in Grade 2, and in this particular story their teacher is giving a Science lesson:
"The world is full of mystery. Scientists try to discover the truth. They ask questions. They investigate. They try to learn facts. Scientists do this by using the scientific method."

The teacher then handed out sheets of paper which read:

THE SCIENTIFIC METHOD

1. Identify the problem. What do you want to know?
2. Gather information. What do you already know?
3. Make a prediction. What do you think will happen?
4. Test the prediction. Experiment!
5. Draw a conclusion based on what you learned. Why did the experiment work out the way it did?

Back when I taught High School Physics, I recall giving a set of steps very much like this one. I might have used the word "inferences" instead of "conclusion", but otherwise it's a pretty good list.

When you think about testing software, you generally run through the same process and set of questions. If you don't think about each of these questions, then you're probably doing something wrong.

For example, here are some questions that come to mind when I think of the Scientific Method applied to testing software (there's a small code sketch after the list showing how these steps might map onto a single automated check):

1. Identify the problem.
  • What are the risks?
  • What is the particular feature of interest?
  • What is it you want or need to test, and why?
2. Gather information.
  • What references are around to tell you how something should work? (e.g. Online Help, manuals, specifications, requirements, standards, etc.)
  • What inferences can you deduce (or guess) about how something should work? (i.e. based on your experiences testing similar apps, or other parts of the same system, etc.)
  • What can you determine by asking other people? (e.g. customers, programmers, subject-matter experts, etc.)
3. Make a prediction.
  • Design your tests.
  • What is your hypothesis?
  • What are the expected results?
  • Think about any assumptions or biases that might influence what you observe. How can you compensate for these?
4. Test the prediction.
  • Set up the environment.
  • Execute the tests.
  • Be creative! Make as many observations as you can.
  • Collect data.
5. Draw a conclusion based on what you learned.
  • Did you observe the expected result? Does this mean the test passed? Are you sure?
  • If the test didn't turn up the predicted result, does this mean the test failed? Are you sure?
  • Revise the test design and any assumptions based on what you observe.
  • Do you have a better understanding of the risks that drove the test in the first place?
  • Do you have any new questions or ideas of risks as a result of this test?
  • If you collect a lot of data, summarise it in a chart that can help demonstrate the trend or pattern of interest.
  • Write a few words to describe what these results mean to you. (You might not have all the information, but don't worry about that. Just say what you think it means.)
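
To make that mapping a little more concrete, here's a minimal sketch of how one pass through those five steps might look as an automated check written with pytest. It's just an illustration, not anyone's official method; the apply_discount() function and its loyalty rule are hypothetical, invented for this example.

    import pytest

    def apply_discount(price, customer_years):
        """Toy system under test: customers of 5+ years get 10% off."""
        return price * 0.9 if customer_years >= 5 else price

    def test_loyalty_discount():
        # 1. Identify the problem: is the loyalty discount applied correctly?
        # 2. Gather information: our (assumed) spec says 5+ years earns 10% off.
        # 3. Make a prediction: a 6-year customer paying $100.00 should owe $90.00.
        expected = 90.00

        # 4. Test the prediction: run the experiment and collect the data.
        actual = apply_discount(100.00, customer_years=6)

        # 5. Draw a conclusion: a failure here doesn't automatically mean
        #    "product bug". Question the test, the spec, and the code.
        assert actual == pytest.approx(expected)

Notice that if the assertion fails, the scientific thing to do is not to log a bug right away, but to revisit steps 2 and 3 first, the same way a scientist re-examines a hypothesis before declaring a discovery.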

In general, I find the Scientific Method to be a very good guideline for beginners and experienced testers alike. Wikipedia has some entries on the Scientific Method, as well as a Portal page; they make for good reading. I'd recommend those pages to anyone serious about becoming a good tester.

If there are things on those pages that you aren't sure about, look them up! You might just learn something new about how to think about things that will help you do your job better.

Happy Learning!

Ubi Dubium, Ibi Occasio (Opportunitas).

Where there is doubt, there is opportunity.

That's my new motto as a Software Tester. =)

It came to me when I read a comic that I borrowed from a friend recently. You see, I'm a fan of the writer J.M. Straczynski, so my friend told me about a comic that JMS had written a few years ago called "Supreme Power" (Max Comics). If you've ever read a comic, you'll know that each issue or story usually has a separate title. Issue #8 of Supreme Power has the episode title "Ubi Dubium, Ibi Libertas" which he translates for you on the last page as: "Where there is doubt, there is freedom."

That title made sense in the context of the story; however, I couldn't stop thinking about the phrase for several days afterwards. There was something about it that I liked, and yet it didn't completely fit what I feel I do as a tester.

When there is doubt, I go to work. I have fun. I explore. I discuss and test. Most of software testing is about working in the space between the vagueness of specs or requirements and their ever-changing interpretation into working software code. So really, doubt is everywhere. Doubt is the whole thing! Where there's doubt, there's opportunity.

Over the years, I have often contemplated different analogies and ways of describing what I do as a software tester so that I could explain it to others who don't really understand the role. (For some reason, if you're not a programmer and you're not doing Support or Sales, most people don't really understand what else there is.)

So now I feel that I'm really close to a good analogy. Doubt is the space where I work and play. I'm a Doubt Management Specialist, or Facilitator if you will. Someone writes up some specifications based upon what they think the customer wants, to the best of their knowledge and understanding. (There's doubt.) Someone else interprets those requirements and transforms them into mathematical algorithms that perform some function on a computer. (There's more doubt. Is that like Doubt-squared? ;-) )

Enter the Software Tester, the go-between. We see the doubt in the specs and come up with ideas (i.e. tests) to explore the meanings and possible interpretations. We see the doubt in the software when features are incomplete, don't perform as expected, are insecure in some way, or aren't usable or robust enough according to our interpretations and experiences as users of the technology.

If there were no doubt in the whole process, I don't think we'd have anything to do. We'd totally be out of jobs. Maybe we could be Project Managers or Programmers, I suppose. ;-)

So you see, where there is doubt, there is opportunity for us. Opportunity to explore, to test, to ask questions, to find bugs, to strengthen understanding, to clarify, to add value.

If you don't see the doubt, I don't believe you are adding any value. At the end of the day, I believe the best testers are the ones who add value by reducing the doubt in the development project.

It's another way of looking at the problem. I kind of like it. What do you think?