My Experiments with TDD

05/16/2011 by Vinay Krishna

I started my IT journey as a coder, although I was called a developer. I worked on small and medium-sized software projects and products, and for the first few years I put most of my effort into writing code and implementing required functionality. I tried my best, of course, but usually I faced a hard time during production and after QA releases. As a result, I started stretching my working hours—along with the rest of my team—and struggling to fix the never-ending bugs. We were all spending days, nights, and weekends at work, and output was horrible. After any release, pressure was high on the development team.

I thought of the problem as a fault of estimation and planning. I raised this concern, and on the next project received additional time that matched my estimation. To my surprise, however, I saw little improvement. Eventually I was stretching my working hours and ruining my personal life, as many of us do.

Now, I'm not trying to say here that estimation and planning don't play a major role in the success or failure of a project. But even with adequate estimation and planning, without a developer (and I don't mean a coder) we cannot achieve our goal.

Positive testing

In my early days, I was performing only positive testing after writing code. By this I mean testing each piece of functionality against expected scenarios only: I entered valid values in all required fields and checked whether the new system gave correct results. That seems funny now, as I look back on it.

In those days I wasn't able to understand why someone would enter values or use steps that weren't possible or supported by the system. As a result, I tried to spend more time providing training to users, or providing more detailed training material.

But soon I realized that this wasn't the correct approach. Too many factors can violate the rules: Users can change at the client's end; one can't always read and follow the steps in the user's manual; the actual way of working is often different than the proposed implementation (users are often more comfortable with an old and familiar application than with a new and improved one); and, last but not least, human error always lurks.
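The gap between positive-only testing and testing that also covers the "impossible" inputs described above can be sketched with a small, hypothetical example. (Python is used here purely for illustration; the `parse_age` function and its validation rules are invented, not from any project mentioned in this article.)

```python
def parse_age(value):
    """Parse a user-supplied age field; reject anything outside 0-150."""
    age = int(value)          # raises ValueError for non-numeric text
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Positive testing checks only the expected, "possible" inputs:
assert parse_age("30") == 30

# Negative testing adds the inputs real users produce sooner or later:
for bad in ("", "abc", "-5", "999"):
    try:
        parse_age(bad)
    except ValueError:
        pass                  # rejected, as it should be
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```

Positive testing alone would ship this function after the first assertion; the negative cases are exactly the "rule violations" that user turnover, skipped manuals, and plain human error guarantee in production.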

Ad hoc testing

I started using ad hoc testing, which was simply a small addition to my positive testing. I would try some negative or extra testing around a particular functionality that I found complicated to implement. This was a bit better than positive testing, but I was still struggling to integrate different modules and components and release the product to QA/production.

Monkey testing

I then added another aspect in my testing approach to cover "whole part" testing. I started navigating through various screens and checking for functionality with some dummy, unformatted, random inputs, and I found defects and bugs. Basically, I was testing here and there, evaluating the application and trying to see whether accessing different functionalities caused any abnormalities. In fact, this approach was simply jumping around to get a feel for the entire application.

Later I came to know I was doing "monkey testing." Whether I did it well or not, it was an improvement.
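The "dummy, unformatted, random inputs" idea above can even be automated. A minimal sketch of such a harness, assuming a target function that should reject bad input with a controlled error rather than crash (the function names and the choice of Python are illustrative only):

```python
import random
import string

def monkey_test(fn, trials=200, seed=42):
    """Throw short, random, unformatted strings at fn and collect crashes."""
    rng = random.Random(seed)           # fixed seed so runs are repeatable
    failures = []
    for _ in range(trials):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 20)))
        try:
            fn(junk)
        except ValueError:
            pass                        # a controlled rejection is fine
        except Exception as exc:
            failures.append((junk, exc))  # an unhandled crash is a defect
    return failures

# A robust function survives the monkey with no failures recorded:
assert monkey_test(str.strip) == []
```

A function that, say, indexes `text[0]` without checking for empty input will usually surface an `IndexError` in the failure list within a few hundred trials, which is exactly the kind of "jumping around" defect discovery described above, made repeatable.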

Pseudo unit/integration testing

In order to follow the organization's standards and best practices, I prepared unit and integration test documents, where I wrote up the test cases and gave their pass/fail status. This was a good practice, since it ensured that a particular functionality was well tested by the developer.

Here's what I experienced with this approach:

  1. Even when estimation includes enough time to write the unit and integration test document, most coders don't give it much attention.
  2. Normally, the coder prepares the document after the completion of coding.
  3. The coder uses almost all the time allocated for unit testing to coding itself.
  4. At the end, but before releasing the application, the coder starts preparing the document and by default marks all test cases as "pass," without testing.
  5. The coder writes test cases that don't cover all the scenarios.
  6. The coder uses positive, ad hoc, and monkey testing, depending upon the scenario, or sometimes skips this stage.

Transformation from coder to developer

I was continuously trying to improve and analyze outcomes and impediments. The problem, I found, was my approach. I was focusing more on coding and much less on testing, while what was really required was a balance between the two. Changing this first required changing myself. No matter how excellent my code was, if it couldn't handle all the possible scenarios, the application had no use.

I started respecting testing and treating it as essential to development. This was where I began to make the transition from coder to developer. I used a variety of sources to improve my approach. Fortunately, a person who had recently joined the organization encouraged me to learn about TDD, or test-driven development. This was totally new to me. I gathered information and presented it to my team.

My first step toward TDD

I was convinced by the TDD approach, but I wasn't sure where to start. Unfortunately, I didn't have the chance to use an xUnit family tool because of time and training needs. But I was keen to start following TDD myself, so I discussed the concept with my team and set some rules:

  1. Write the unit test cases related to any functionality in the document first, prior to writing the code.
  2. Always use track changes in the document (this helps ensure that test cases are written first and tested later).
  3. Mark the status of the test case "fail," since no code will yet have been written to implement that functionality.
  4. Write enough code to implement the functionality.
  5. Test the unit test cases written for that functionality, and update the status.
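The five rules above apply TDD's red-green cycle to a test document rather than to code, but the same discipline translates directly once a unit-testing tool is available. A sketch of that cycle, using Python's `unittest` as a stand-in xUnit tool (the shipping-fee requirement and all names here are invented for illustration):

```python
import unittest

# Rules 1-3: write the test cases first, from the requirement alone.
# Run now, they fail ("red") -- shipping_fee does not exist yet.
class ShippingFeeTest(unittest.TestCase):
    def test_free_shipping_at_or_over_threshold(self):
        self.assertEqual(shipping_fee(order_total=60.0), 0.0)

    def test_flat_fee_below_threshold(self):
        self.assertEqual(shipping_fee(order_total=20.0), 5.0)

# Rule 4: write just enough code to implement the functionality ("green").
def shipping_fee(order_total, threshold=50.0, flat_fee=5.0):
    """Hypothetical rule: orders at or above the threshold ship free."""
    return 0.0 if order_total >= threshold else flat_fee

# Rule 5: re-run the test cases and update their status.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShippingFeeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

As with the document-based rules, the point is the ordering: the tests exist, and fail, before the implementation does.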

It was tough to get the entire team to follow these rules. And, as I'd expected, I received strong resistance from everyone. One question raised by the team was how to write a test case without implementing the functionality. I myself had the same question initially, but it proved unfounded. Even the testing team writes test cases only on the basis of requirement documents. Eventually all of us agreed to follow this method, and to review it after a couple of releases to find out if it was truly helpful and sensible.

After a couple of releases, these were my findings:

  1. Developers get a better understanding of functionality and are able to visualize the behavior more appropriately. Since they must write test cases prior to development, they are able to think about functionality in a way that meets the end user's expectations.
  2. Developers are able to think about more possible test scenarios, both positive and negative, and implement them accordingly in the code.
  3. Developers gain more confidence over their implementation because they have tested it well.
  4. After one or two releases, the whole team is able to understand the gaps and fill them in over the next releases. (For example, in one case the missing element turned out to be lack of business knowledge at the team level.)

Now the only remaining pain was regression testing: retesting existing functionality after every new change. This wasn't feasible as a manual process; it simply demanded too much time. Still, the process as a whole helped us stabilize our releases to a certain extent.

Using NUnit, a step toward automated unit testing

Up to this point, I was using traditional development methods in my projects. Later, I got the chance to work on a project for which Agile development methods were the norm. I adopted NUnit for automated unit testing. Beginning to use it wasn't easy, but I'd already crossed the major hurdle: changing my mindset from coder to developer. In addition, we decided not to write NUnit test cases for all existing or old functionalities, because that would have required a great deal of extra time.

So we started writing NUnit test cases only for new changes and implementations, and gradually the suite started growing. One good thing about automated unit testing is that it's less like testing than like programming, and eventually it makes testing and code reviews much easier and faster. However, in the case of UI-related testing, or wherever automated testing has limitations, I find the approach that I first presented to my team, with its five steps of writing and testing, more suitable and effective.
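This incremental approach also eases the regression pain described earlier: the suite accumulates release by release, and one command reruns every test after each change. A sketch of the idea, again using Python's `unittest` in place of NUnit (the discount rule and names are hypothetical):

```python
import unittest

# Tests accumulated release by release; each new change adds its own cases.
class DiscountTests(unittest.TestCase):
    def test_existing_behavior(self):           # written in an earlier release
        self.assertEqual(discount(100, "GOLD"), 90)

    def test_new_change(self):                  # added with the latest change
        self.assertEqual(discount(100, "NONE"), 100)

def discount(amount, tier):
    """Hypothetical pricing rule touched by the latest change."""
    return amount * 0.9 if tier == "GOLD" else amount

# One run covers both old and new behavior -- the manual regression pass
# that used to demand extra days now takes seconds.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If the latest change had broken the earlier behavior, the old test would fail immediately, instead of surfacing in production after a manual regression pass was skipped for lack of time.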

Learning point

My development journey continues to this day. But from the standpoint of learning, I've found one appealing thing about development: It includes both coding and testing. That's exactly what TDD emphasizes. The transformation from coder to developer is necessary in all projects.