Some people feel that the tester’s job is to break software, whereas the developer’s job, when testing, is to make sure software works. This dichotomy, or disconnect, can sometimes be fun. Like yesterday…
At Mimeo.com, our ability to commit to fast printing and delivery times is driven by a huge backend infrastructure of services and LOB apps that help the production people get material printed and shipped efficiently. One of our LOB apps is a simple web app that allows an order to be reprinted (sometimes a few copies of an order can get messed up, and we need to reprint some quantity to make it right). The UI for the tool is fairly simple. You enter an order ID in the textbox, give a reason why it needs to be reprinted, and then the quantity of reprints you want. The quantity must be greater than 0 and less than or equal to the quantity in the original order.
Although the implementation details don’t really matter, I will say that the UI is an ASP.NET MVC app that’s just a façade, where the underlying controller makes calls out to other services. For this sprint, we had some work to do in one of the services responsible for actually doing the reprint logic. The UI did not change at all in this sprint, only the dependent WCF service logic changed.
The UI looks very familiar to any tester. It’s basically an interview question. Testing 101. You have a textbox and a button, how do you test this? How would you, dear reader, test this?
We’re at the end of the sprint, so our feature team decided to do a big group hug pair-coding/pair-testing session where we could all give test ideas and find/fix bugs quickly. And we started off with this reprint app. I happened to be driving the session at this point. The project lead asked, “Ok, what quantity should we try?” As the words exited his mouth, I just happened to start off with “0” in the quantity box. As soon as I hit the Reprint button, the project lead said, “Wut did you just do?! Did you put in 0?? Don’t do that!” So, of course, the tool accepted 0, made the service call, and it ended up reprinting the entire order. Bug.
We then set up the order to reprint again. The project lead was driving the session this time and again asked, “What quantity should we try this time?” I answered, “The quantity for this order is 200, right? Let’s try 201.” He typed it in, and again, the quantity was accepted and passed down to the service, which printed 201 copies. Bug.
At this point, the team was dying laughing. Simple UI validation checks weren’t performed, and this was a tool that had been in use for several months. One of the devs said something to the effect of how this is why we have testers who can break the software.
But is that what I was trying to do? Was I purposely trying to give inputs to break the app? My first instinct when I see a textbox is to explore boundaries. So something that takes a quantity (presumably an int) should be explored with at least 0, maxQuantity+1, -1, and non-numeric characters. All of these should first trigger some sort of validation in the UI so that we catch bad input prior to any underlying service calls. Once I verify that basic validation is happening at the UI level, then we can test the meat with real values. That’s just how I think. I wasn’t trying to break anything. I was trying to explore the behavior of the app by entering bad input in order to learn what sort of instructional message we return to the user, and to learn how the application itself reacts to bad input.
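The boundary probes above can be sketched as a small validation helper that runs before any service call. This is just an illustration in Python, not the actual tool’s code; the function name, messages, and tuple-return shape are all hypothetical:

```python
def validate_reprint_quantity(quantity_text: str, original_quantity: int):
    """Validate a reprint quantity before calling the reprint service.

    Returns (quantity, None) on success, or (None, error_message) so the
    UI can show an instructional message instead of invoking the backend.
    """
    try:
        quantity = int(quantity_text)
    except ValueError:
        # Non-numeric characters: reject at the UI layer.
        return None, "Quantity must be a whole number."
    if quantity <= 0:
        # Catches both 0 and -1 from the boundary list.
        return None, "Quantity must be greater than 0."
    if quantity > original_quantity:
        # Catches maxQuantity+1 (e.g. 201 on a 200-copy order).
        return None, f"Quantity cannot exceed the original order quantity ({original_quantity})."
    return quantity, None


# The boundary values from the story: 0, maxQuantity+1, -1, and characters.
for bad_input in ["0", "201", "-1", "abc"]:
    quantity, error = validate_reprint_quantity(bad_input, original_quantity=200)
    print(f"{bad_input!r} -> {error}")

# A real value passes validation and would flow on to the service.
print(validate_reprint_quantity("150", original_quantity=200))
```

Each bad input gets an error message and never reaches the service layer; only an in-range value comes back as an accepted quantity. That was the behavior the UI was missing.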
I don’t feel that testing activities are meant to “break” the software. I also don’t feel that the activities should solely be meant to verify that things work. Exploration is key in understanding how the software works (or should work), and to identify the potential gaps in expectations.