My Experiments with TDD

16 May 2011

Vinay Krishna
Cegedim Software India Pvt Ltd

I started my IT journey as a coder, although I was called a developer. I worked on small and medium-sized software projects and products, and for the first few years I put most of my effort into writing code and implementing required functionality. I tried my best, of course, but usually I faced a hard time during production and after QA releases. As a result, I started stretching my working hours—along with the rest of my team—and struggling to fix the never-ending bugs. We were all spending days, nights, and weekends at work, and output was horrible. After any release, pressure was high on the development team.

I thought of the problem as a fault of estimation and planning. I raised this concern, and on the next project received additional time that matched my estimation. To my surprise, however, I saw little improvement. Soon I was again stretching my working hours and ruining my personal life, as many of us do.

Now, I'm not trying to say here that estimation and planning don't play a major role in the success or failure of a project. But even with adequate estimation and planning, we cannot achieve our goal without a developer, as opposed to a mere coder.

Positive testing

In my early days, I was performing only positive testing after writing code. By this I mean testing functionality against valid, expected scenarios only. I entered acceptable values in all required fields and checked whether the new system gave correct results. That seems funny now, as I look back on it.
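To make this concrete, here is roughly what purely positive testing looks like when written down as code. (The DiscountCalculator class and the NUnit-style test are hypothetical illustrations; at the time, all of this was manual clicking through the UI.)

    using NUnit.Framework;

    // A hypothetical class under test, for illustration only.
    public class DiscountCalculator
    {
        public decimal Apply(decimal amount, string coupon)
        {
            // Naive first implementation: handles only the expected input.
            return coupon == "SAVE10" ? amount * 0.9m : amount;
        }
    }

    [TestFixture]
    public class DiscountPositiveTests
    {
        // The only kind of check I performed: valid input, expected output.
        [Test]
        public void ValidCoupon_ReducesTotal()
        {
            var calc = new DiscountCalculator();
            Assert.AreEqual(900m, calc.Apply(1000m, "SAVE10"));
        }
    }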

In those days I wasn't able to understand why someone would enter values or use steps that weren't possible or supported by the system. As a result, I tried to spend more time providing training to users, or providing more detailed training material.

But soon I realized that this wasn't the correct approach. Too many factors can violate the rules: Users can change at the client's end; one can't always read and follow the steps in the user's manual; the actual way of working is often different than the proposed implementation (users are often more comfortable with an old and familiar application than with a new and improved one); and, last but not least, human error always lurks.

Ad hoc testing

I started using ad hoc testing, which was simply a small addition to my positive testing. I would try some negative or extra testing around a particular functionality that I found complicated to implement. This was a bit better than positive testing alone, but I was still struggling to integrate different modules and components and release the product to QA/production.
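Written down in the same hypothetical form, that "extra" testing amounted to a few negative cases around the functionality I found tricky, for example:

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class DiscountNegativeTests
    {
        // An unknown coupon should leave the total unchanged.
        [Test]
        public void UnknownCoupon_LeavesTotalUnchanged()
        {
            var calc = new DiscountCalculator();
            Assert.AreEqual(1000m, calc.Apply(1000m, "BOGUS"));
        }

        // A negative amount should be rejected. Against the naive
        // implementation sketched earlier, this test fails: exactly the
        // kind of defect that ad hoc testing started to surface.
        [Test]
        public void NegativeAmount_IsRejected()
        {
            var calc = new DiscountCalculator();
            Assert.Throws<ArgumentException>(() => calc.Apply(-1m, "SAVE10"));
        }
    }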

Monkey testing

I then added another aspect to my testing approach, to cover the application as a whole. I started navigating through various screens and checking functionality with dummy, unformatted, random inputs, and I found defects and bugs. Basically, I was testing here and there, evaluating the application and trying to see whether accessing different functionalities caused any abnormalities. In effect, this approach was simply jumping around to get a feel for the entire application.

Later I came to know I was doing "monkey testing." Whether I did it well or not, it was an improvement.
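Expressed in the same hypothetical code form (in reality this was random typing and clicking in the UI), monkey testing boils down to throwing unformatted, random input at the system and watching for crashes:

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class DiscountMonkeyTests
    {
        [Test]
        public void RandomInput_NeverCrashes()
        {
            var calc = new DiscountCalculator();
            var rng = new Random(42); // fixed seed, so any failure is reproducible

            for (int i = 0; i < 1000; i++)
            {
                // Garbage coupon codes and arbitrary amounts, positive or negative.
                string junkCoupon = Guid.NewGuid().ToString().Substring(0, rng.Next(1, 10));
                decimal amount = rng.Next(-5000, 5000);
                Assert.DoesNotThrow(() => calc.Apply(amount, junkCoupon));
            }
        }
    }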

Pseudo unit/integration testing

In order to follow the organization standards and best practices, I prepared unit and integration test documents, where I wrote up the test cases and gave their pass/fail status. This was a good practice, since it ensured that a particular functionality was well tested by the developer.

Here's what I experienced with this approach:

  1. Even when estimation includes enough time to write the unit and integration test document, most coders don't give it much attention.
  2. Normally, the coder prepares the document after the completion of coding.
  3. The coder uses almost all the time allocated for unit testing to coding itself.
  4. At the end, but before releasing the application, the coder starts preparing the document and by default marks all test cases as "pass," without testing.
  5. The coder writes test cases that don't cover all the scenarios.
  6. The coder uses positive, ad hoc, and monkey testing, depending upon the scenario, or sometimes skips this stage.

Transformation from coder to developer

I was continuously trying to improve, analyzing outcomes and impediments. The problem, I found, was my approach. I was focusing more on coding and much less on testing, while what was really required was a balance between the two. Changing this first required changing myself. No matter how excellent my code was, if it couldn't handle all the possible scenarios, the application was of no use.

I started respecting testing and treating it as essential to development. This was where I began the transition from coder to developer. I used a variety of sources to improve my approach. Fortunately, a person who had recently joined the organization encouraged me to learn about TDD, or test-driven development. This was totally new to me. I gathered information and presented it to my team.

My first step toward TDD

I was convinced by the TDD approach, but I wasn't sure where to start. Unfortunately, I didn't have the chance to use an xUnit family tool because of time and training needs. But I was keen to start following TDD myself, so I discussed the concept with my team and set some rules:

  1. Write the unit test cases related to any functionality in the document first, prior to writing the code.
  2. Always use track changes in the document (this helps ensure that test cases are written first and tested later).
  3. Mark the status of the test case "fail," since no code will yet have been written to implement that functionality.
  4. Write enough code to implement the functionality.
  5. Test the unit test cases written for that functionality, and update the status.
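A hypothetical entry from such a test document makes the cycle visible. Written before any code exists, the test case is marked "fail"; after steps 4 and 5 its status is updated, and track changes records the order in which this happened:

    Test case:  TC-07 (hypothetical) - Apply coupon "SAVE10" to an order of 1000
    Expected:   Total becomes 900
    Status:     Fail  -> updated to Pass after the code is written and tested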

It was tough to get the entire team to follow these rules. And, as I'd expected, I received strong resistance from everyone. One question raised by the team was how to write a test case without first implementing the functionality. I myself had the same question initially, but it proved unfounded. Even the testing team writes test cases only on the basis of requirement documents. Eventually all of us agreed to follow this method, and to review it after a couple of releases to find out if it was truly helpful and sensible.

After a couple of releases, these were my findings:

  1. Developers get a better understanding of functionality and are able to visualize the behavior more appropriately. Since they must write test cases prior to development, they are able to think about functionality in a way that meets the end user's expectations.
  2. Developers are able to think about more possible test scenarios, both positive and negative, and implement them accordingly in the code.
  3. Developers gain more confidence over their implementation because they have tested it well.
  4. After one or two releases, the whole team is able to understand the gaps and fill them in over the next releases. (For example, in one case the missing element turned out to be lack of business knowledge at the team level.)

Now the only remaining pain was regression testing: retesting existing functionality whenever something new changed. This wasn't feasible manually, as it demanded far too much time. Still, the process as a whole helped us stabilize our releases to a certain extent.

Using nUnit, a step toward automated unit testing

Up to this point, I had been using traditional development methods in my projects. Later, I got the chance to work on a project for which Agile development methods were the norm, and I picked up nUnit for automated unit testing. Beginning to use it wasn't easy, but I'd already crossed the major hurdle: changing my mindset from coder to developer. In addition, we decided not to write nUnit test cases for all existing or old functionality, because that would have required a great deal of extra time.

So we started writing nUnit test cases only for new changes and implementations, and gradually the suite grew. One good thing about automated unit testing is that it feels less like testing than like programming, and eventually it makes testing and code reviews much easier and faster. However, for UI-related testing, or wherever automated testing has limitations, I find the approach I first presented to my team, with its five steps of writing and testing, more suitable and effective.
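As a sketch of what this looked like in practice (the bulk-order rule, class, and method names are hypothetical, re-declaring the calculator from the earlier sketches, not our actual product code), each new change began with a failing nUnit test, followed by just enough code to make it pass:

    using NUnit.Framework;

    [TestFixture]
    public class BulkDiscountTests
    {
        // Step 1: written before the bulk-order rule exists, so it fails (red).
        [Test]
        public void OrdersOver5000_GetAnExtraFivePercent()
        {
            var calc = new DiscountCalculator();
            // 6000 less 10% is 5400; a further 5% off gives 5130.
            Assert.AreEqual(5130m, calc.Apply(6000m, "SAVE10"));
        }
    }

    // Step 2: just enough implementation to turn the test green.
    public class DiscountCalculator
    {
        public decimal Apply(decimal amount, string coupon)
        {
            decimal total = coupon == "SAVE10" ? amount * 0.9m : amount;
            return amount > 5000m ? total * 0.95m : total;
        }
    }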

Learning point

My development journey continues to this day. But from the standpoint of learning, I've found one appealing thing about development: It includes both coding and testing. That's exactly what TDD emphasizes. The transformation from coder to developer is necessary in all projects.


Opinions represent those of the author and not of Scrum Alliance. The sharing of member-contributed content on this site does not imply endorsement of specific Scrum methods or practices beyond those taught by Scrum Alliance Certified Trainers and Coaches.




Comments

Ryne Sechrist, CSM, 5/17/2011 8:53:27 AM
Testing is a big part of development. Companies do not want to spend a lot of time on testing because it "eats time" from the project. However, the benefits of creating a quality product in the end outweigh the time spent on unit testing. TDD is a great way to have testing and feature development work together. Great post!
Marko Majkic, CSP,CSM,CSPO, 5/18/2011 4:44:15 AM
Great post! I have similar experiences struggling with and convincing my team to do testing and TDD. Do you have any resources on testing techniques, preferably for Java?
Jojo Jacob, CSM, 5/24/2011 7:11:58 AM
Excellent briefing on TDD
Vinay Krishna, CSP,CSM, 5/24/2011 9:06:48 AM
@Ryne - Good point, and I agree: it takes time for both the company and the developer to understand the benefits of TDD.
@Marko - Thanks for your feedback. For resources you can contact me at vinaykris(at)gmail(dot)com.
@Jojo - I am glad that you liked it
Siraj Mohmed, CSM, 5/24/2011 3:01:35 PM
Good summary on TDD, although I must state there is a key factor as part of TDD, which is refactoring, which all developers should take into account. This is purely to check the design of the code, and any refactoring should not have any impact on the functionality. Once refactored, all previous tests should pass as normal. Refactoring should take into account the software practice of DRY (Don't Repeat Yourself).
Vinay Krishna, CSP,CSM, 5/25/2011 12:46:47 AM
@Siraj - That's correct; I do agree with you. I will include that experience in the next installment of this series. Thanks!
Darren Ambrose CSM CSPO, CSM,CSPO, 5/27/2011 4:23:43 PM
I'm embarking on this very journey right now, so it's very timely to see this post. The team are looking to create their unit tests and automated tests up front per user story, prior to starting development. I'm hoping for good success!
Vinay Krishna, CSP,CSM, 5/28/2011 12:02:24 AM
@Darren - Nice to know this. Hope you will share your experiences with us here too. Remember, it takes some time to understand and needs a mindset change from the team. All the very best to you and your team.
Leijia Gao, CSM, 6/1/2011 8:38:37 PM
Great thanks for your post. As I see it, TDD is a lesson learned from traditional industry; another is using Six Sigma in code refactoring. Software is a new industry, and we have many things to learn to make it efficient, stable, and strong.
Sharath Satish, CSP,CSM,CSPO, 6/2/2011 10:53:55 PM
Nice post about TDD. I am going through the process of convincing my team right now to adopt this fantastic development practice. It is a challenge, but the stories we have implemented using it have turned out very well written, and I was converted.
Andreea TOMOIAGA, CSM, 6/3/2011 4:30:05 AM
Excellent article, Vinay, and also great comments so far. I would like only to add also a post-measurement dimension to this TDD approach: can we know how much code is covered by tests in fact? In the projects I participated in so far I saw that it is useful and motivating to introduce code coverage measurements e.g. using Sonar, EMMA and run this as part of the continuous integration. This motivates the team. If we start slow and then cover more and more code with the tests and see a decrease in a number of bugs after every sprint then this is great. The relationship between code coverage and number of bugs is inversely proportional and in this case the team can understand the importance of tests, as a consequence of having numbers we convince also managers of importance of testing and this way testing is not seen as an extra, unnecessary thing that eats the budget. Having tests prepared for all types of scenarios, helps also in incident solving process, since bugs can be solved quickly. On long running maintenance projects with a legacy codebase without any tests at first and a full bunch of JIRA issues, this TDD practice guarantees success.
Gaunhong Zhang, CSM, 6/25/2011 11:18:30 PM
Good post! I will use nUnit to automate our unit tests in my team now, and I will show your post to my members. Thank you!
Joseph S. Currivan, CSM, 7/2/2011 1:44:22 PM
This was a great read. Thank you for sharing your experiences.
Madhan Manivannan, CSM, 8/2/2011 8:52:27 AM
Thanks for a very detailed account of your experience. I was in fact exploring starting TDD manually. Your article has given me practical tips for implementing it. Thanks again.
Rekha Menon, CSM, 8/16/2011 8:30:49 PM
Hello Vinay,

Great write up.
I have a question. How will you propose implementing TDD for an Enterprise software product development ?

Thanks !
Vinay Krishna, CSP,CSM, 8/17/2011 6:08:50 AM
Andreea - I see your points, and we also apply this post-measurement dimension to a certain extent. It's really a good idea.

Leijia, Sharath, Gaunhong, Joseph, Madhan - Thanks for your comments. Glad to know that you like this post.


Rekha - I have used TDD in product development. In the second-to-last paragraph of this post I have already mentioned the approach that we adopted.
For further clarification write to me at vinaykris@gmail.com
Anonymous, 10/19/2011 3:09:43 PM
Vinay, good post. I've just arrived at a place that does zero CI and testing, so I'm going to have to take them all through this journey over the coming months! I might well use some of your words.
Anonymous, 11/4/2011 4:46:25 AM
Very Nice Article.
Vinay Krishna, CSP,CSM, 11/4/2011 7:19:35 AM
@Mark - Thanks and all the best!
Bruno Eduardo Andrade de Carvalho, CSM, 12/13/2011 6:29:56 PM
Very good article!
Basavaraju Kidiyappa, CSM, 1/25/2012 11:59:13 PM
Good article!
Gagan Rana, CSM,CSPO, 3/12/2013 11:12:21 AM
Great article! I am always a fan of TDD and its offerings. Hopefully developers and their managers read this article and start using TDD with their projects.
Ganesh Doddi, CSM,CSPO, 1/8/2014 7:26:07 PM
Good article. What is most beneficial, and easiest to understand for new teams, is to practice this right from the beginning. Starting midway is a bit difficult and can lead to frustration, which is one of the frequent reasons people abandon it. If you have to start midway, take help from an expert: not just advice, but having him sit with you to bring things to a reasonable position.

Once we get used to it, it is very difficult not to use it. One area where it is not recommended is technical spikes, as spikes are just meant to identify a solution and then be discarded. But for the identified solution, TDD can be used from scratch.

The biggest critics of TDD identify the additional time as the major issue. But if you use automated tests, use CI, use good tools, and take help from experts where required, this can be contained. And, as with anything else, once the team starts using TDD they will become better at it.
