
Thursday, December 25, 2008

Which color am I?

Your rainbow is shaded white and indigo.


What it says about you: You are a contemplative person. You appreciate cities, technology, and other great things people have created. People depend on you to make them feel secure. Friends count on you for being honest and insightful.

Find the colors of your rainbow at

Well I am happy :)...

Friday, December 12, 2008

Announcing FlashWatir and Silverlight-Selenium

Well, two of my open source projects have gone live. I am really happy to release them in the open, and I hope they will be useful to people who want to test applications built with these RIA technologies. I have always been an advocate of testability, and it has long been a concern of mine that whenever a new glossy technology arrives, testability goes for a toss. With these two projects I hope I have played a small part in bringing testability to Flash and Silverlight.

For those who want to use it, flash-watir is available as a gem as well as source in SVN. Please visit the FlashWatir Google Code project. Read the docs, play around with the source. If you have any questions or want to contribute, let me know.

Silverlight-Selenium is currently available in Java (I know Silverlight is .Net stuff :D but it won't take long to write it in other languages). Please visit the project Silverlight Selenium.

Let me know if you have any questions or feedback.

I thank Paulo Caroli for helping me understand Flash testability, as well as for pairing with me on Silverlight-Selenium and showing me how to do Test-Driven Development :). Thanks to Jeya for pairing with me and setting up the Silverlight environment.

P.S. Work on a Silverlight extension for Watir is on the way.

Friday, December 5, 2008

Test Drivers, Frameworks and Harness

Edited on 20 March 2009 - Added the last paragraph about JUnit

I have wanted to write about this for a long time. People tend to ask me whether Watir is a framework, whether Watir can read Excel, how to connect to a database from Watir, and similar questions. This is not a Watir-specific post, but rather my understanding of three important components of any automated testing effort. I believe people ask these questions because they don't understand the function of these three components. So here it goes...

Test Drivers - They drive the application. If you are connecting directly to the domain or service layer, you most probably will not have an external driver; the API itself will be the driver. But if you are driving tests through a GUI or a remote service, then you will have a GUI driver or something like a web service driver to drive the application. Libraries like Watir, Selenium, WebDriver or SOAP4R (to drive web services through SOAP) fall into this category. They are not testing tools by themselves, but you can very well build one on top of them.

Test Frameworks - Scaffolding to start your testing. Nobody wants to start writing tests from scratch every time. A framework acts as the base infrastructure so that we don't need to care much about it. It holds things that can be reused across our tests but are mostly not specific to the application, like connecting to a database, writing logs, or reading from and writing to Excel. Normally we start with a generic framework, and as we write tests we build abstractions of our domain; slowly, project-specific abstractions and functions grow into the framework.

For example, in JUnit the TestCase class is part of the framework. It gives you assertions. Similarly, the Taza framework for Watir gives you generators and integrates with RSpec for assertions and reporting (a framework with frameworks inside ;)).

Test Harness - They are used to execute the tests we write, like the JUnit test runner or the RSpec test runner. Most of the time a harness is coupled with a framework, but it is possible to create a generic harness not coupled to any particular framework, as long as the framework follows some kind of contract that the harness enforces. For example, in JUnit any method annotated as a test will be picked up by the test runner and executed as a test.

What I have written is a view I developed when working with these components over years. You may have a different perspective. If so let me know. Your comments and views are welcome.

One more thing... A common question I get is why I use JUnit or RSpec, which are meant for unit testing. JUnit and RSpec are test frameworks; you can run pretty much any code inside them. It is incidental that they are used mostly for unit testing, because they are simple and lightweight.

Saturday, November 29, 2008

Disabling Annoying Point Stick Mouse in Dell Laptop

In Thoughtworks we normally use Dell laptops running Windows XP for .Net development (some use a Mac with a Windows partition, but I feel it becomes extremely slow). One problem with Dell laptops, though, is the stick mouse. Whenever we type fast, the stick mouse moves the cursor to an irrelevant location, creating nonsensical sentences, and it becomes very annoying. So the quest is for disabling it...

I found a way to disable it, and this is a reference for people who need it, as well as for myself. I tried to disable it in the mouse settings in the Control Panel, but I don't think there is anything there for it. Mine is a Dell Latitude D620, so I checked the Dell support website and found a driver called ALPS Driver which applies to the stick pointer. I downloaded and ran the installer; once the installation completed I think I restarted (I am not sure, but if it does not ask, do not restart). After installation a touchpad controller was available in the system tray and also in the Control Panel. I used this to disable the point stick device as well as its buttons. Now the cursor no longer moves randomly. I hope this reduces my annoyance and helps someone who has the same problem...

For other laptop models, corresponding drivers are available on the Dell support site. Look for a driver for the stick pointer or pointing stick.

Tuesday, November 25, 2008

Running Selenium RC as Windows Service

The story starts at a time when I was really tired of starting FitNesse every time I booted my office laptop. I wanted a way to run it perpetually, and in my quest during a sleepless night I found that FitNesse can indeed be run as a Windows service. After a long battle (actually a very short one) I am successfully running it now :) (for those who need that info, check out FitNesse as service).

After this I got a crazy thought, which is normal as far as I am concerned. Sometimes I use Selenium, and Selenium RC also uses a Jetty web server. Why can't I run Selenium as a service so that I don't need to start it every time I log on (which I remember only after I see a test throwing an exception :))? Well, the steps are pretty simple: the same as how you make FitNesse run as a service, so if you need them, check out those instructions and translate the appropriate locations to Selenium.

One thing to note: if the Selenium service runs under an account you specify, or under the local system account without permission to interact with the desktop, the tests will run and produce results, but you will not be able to see them running. So if you need to watch the tests run visually in a browser, run the service under a local account with permission to interact with the desktop. The catch is that when you run with access to the desktop, you will see all the messages Selenium normally prints on the command line, i.e. it will not run in the background.

One more thing: I think most people would have explored this already, but for those looking for a way to test an application in Google Chrome, you should be able to use Selenium RC with a custom launcher. For more info check out the Selenium website.

Running Selenium as a service is an experiment I did at 1 at night because I was sleepless, so I hope people get the big red warning label I am trying to put here. I haven't thought clearly about the advantages or disadvantages of this method, which I may do in a later post. Till then, happy hacking and get some sleep :).

Sunday, October 26, 2008

Time for a change to better

This is the first personal post I am writing on this blog. It's only partly personal :).
I should have written this some time back, actually, but I was a bit busy, lazy, and didn't have the resources (my internet connection was broken)... That's why I haven't been blogging for quite some time even though I had topics piling up.

Well, the news is that last month was my last at Cognizant. I have now joined Thoughtworks as a developer in test. It has been four days since I started and the experience has been exhilarating (I have even participated in an XP Lego game). People working together in collaboration, an open culture, Agile, lots of geeks, a flat organization... I think I am finally at home. I don't think one can ask for more, but there is lots more here.

As an Agile enthusiast and developer in test, I think I will enjoy working here. I will also be using this blog as a place to share my thoughts, experiences and learnings...

Lots of things are happening on the open source front. I am working on bringing JRuby support to FireWatir, and a patch has been submitted for that. I have to start working on porting schnell to WebDriver, as this has been pending for quite some time. Also, a spike to drive FireWatir through native code rather than JSSH will be happening.

My project also starts on Monday, and there is a week of induction... I hope it turns out interesting :).

Lots of things happening this year. Hoping for the best ;).

And here is me (giving a thumbs up) in the XP Lego session we had... Successfully completed.

Friday, September 26, 2008

Things I learned in the past 3 years

I started this blog more as a personal notebook to write down my learnings, and till now it has served its purpose as well as helped me share my views with the world.

I have this habit of revisiting wherever I started and seeing if the things I learned have changed or evolved as a result of my work and increased understanding. For the past three years I have been in the field of testing, and here are some of the things whose definitions or my understanding of them have evolved as a result of my work, the things I read, the people I talk with, and my most precious thoughts :). This is not a complete list, just what came to mind as I was thinking about this. I will try to keep this list updated whenever I find something has changed.

1) Testing is an experiment conducted in a controlled environment, under controlled conditions, to gather information about the application.
2) Acceptance tests are the expectations of the customer.
3) Regression tests, or tests created by recorders, are tests for invariance.
4) Agile is more about people and their communication than about techniques and tools. (Something which dawned upon me after years of seeing agile in terms of techniques and process)
5) The domain is the core of any project. The project exists to solve a problem in the domain.
6) Speak the language of the customer. Using technical language with the customer will lead to complications in communication. (Personal experience)
7) Don't try to solve people problems by putting up a process. (Everyday experience working with the managers in my place)
8) Testers working closely with developers and customers is the most effective way. (Understood this even before I started agile)
9) Every tool or language has a legacy, and the usage of a tool depends on its legacy. (An interesting point I understood; will write about this in more detail later)
10) Automation can never replace manual testing. (Exploratory testing is invaluable)
11) GUI testing is unreliable and difficult. (Learned this as a bitter lesson)

Well, that's what I can think of now. There are lots more things I will remember, especially when I speak with people. I will keep a note of them and update this as and when I can.

Wednesday, September 24, 2008

Are table structured tests good for regression testing?

I have been seeing cases lately where people try to use FitNesse, Fit-like infrastructures, or a custom Excel-based fixture where the tests are written in the form of tables. The reason behind this may be that they have seen the success of Fit or FitNesse and want to extrapolate it everywhere. The problem I have with this approach is whether the table structure is enough to meet the conditions needed for regression testing.

I am a big fan of Fit and FitNesse, and I advocate using them as a way to write executable specifications. They are really good when you work with the customer to etch out concrete examples about business domain conditions. But using them as a framework for regression testing is something I am skeptical about. I prefer to use a language like Ruby, Python or Java to write my tests, using drivers like Watir, WebDriver or schnell. The reason is that, more than acceptance tests, regression tests cover many more conditions in the form of data-driven tests, and they need constructs for iteration, conditionals, variable storage and passing, and code reuse through modules. Having these constructs helps us simplify the code we write. Also, regression tests don't have the very close customer involvement that acceptance tests do, and in this case it is easier for a tester or automation engineer to use a language than a table structure.

This doesn't mean that we can't do or have these things in a table-driven test. But for that we need to put extra infrastructure in place, which we then have to maintain, and these constructs will not look elegant or natural in a table.

Implementing the regression tests in a developer language also doesn't mean they will be just raw code driving the flow of user actions through Watir, WebDriver or schnell. These tests should also be written so that they reflect the domain language, by building proper abstractions, maybe as an internal domain-specific language. But those abstractions need not be structured as a table, as explained above.

This is my opinion and experience. Different people will have varied experiences and thoughts. I would be happy if you shared your thoughts on this, as it will help in understanding this problem more, or maybe from a different angle...

Thursday, September 18, 2008

Controllability, Observability and Testability

I was an electrical engineering major in college, and control systems was one of my favorite subjects. The fundamental principle I understood from it was that to control any system you need controllability (to effectively direct it to a particular state) and observability (to find out the system's behavior or state at any point of time).

Now, working in the field of testing, automation comes into play most of the time. To automate any application, testability is important. So what does this testability mean? What does the test interface for an application represent? Testability is the ability to control and observe the application's behavior in order to validate it, and a test interface facilitates this by helping make calls to the application and observe the behavior or state of the application at any point of time.

Based on my understanding of control systems, the testability of an application is actually the combination of controllability and observability. A test interface is a facade to the application which helps us control it and observe its behavior and state.

But do controllability and observability assure us of testability? This is something I need to think about. An application can have multiple interfaces through which you can control and observe its behavior, but not all interfaces are appropriate for testing; for example, the user interface is a bad candidate for a test interface.

So a testable interface has controllability and observability, but not every controllable and observable interface is testable. For an interface to be useful for testing, it must be easy to work with programmatically, and it must be stable to a certain extent.

Testability and test interfaces are an important aspect of automating an application. They are also an interesting topic, and I hope to explore them more in the future.

Saturday, September 13, 2008

Wordle's Perspective

Wordle is all the rage now, and it wants to tell everyone what it thinks about your blog and web pages. I wanted to give it a try, and here is the word list that, according to Wordle, my blog concentrates on. I expected bigger "testing" and "examples" and other words, but let me try it again after a few more months to see what it finds ;-).

Monday, September 8, 2008

Why do GUI tests fail a lot? (From a tools perspective)

I was in a discussion today which, as usual, involved brittle automated tests driven through the GUI. The problem statement: the GUI changes continuously, more specifically the properties of the objects change continuously. The reason is that the pages are generated dynamically at build time, so the properties used for identifying the objects are not constant. Their whole test suite is GUI-based, and they don't want to throw it away, but neither do they want to rework it every time a new build comes. The only advice I could give was to work with the developers to find a way to fix one of the properties, most probably the HTML id, so that it gives them a stable interface for automation.

But I see this pattern of problems repeatedly. Sometimes it is with generated HTML, and at other times with manually created pages where developers make changes when they want to name something better or change an object's property (maybe refactoring the HTML). I know that concentrating a lot of automated testing on the GUI is not a good choice, although some of it is needed, and I know the reason: the GUI changes continuously. But I needed a better way of expressing this. So I thought about why we still go for GUI automation and why we fail at it.

Why do we go for GUI automation?

The evil lies in the availability of easy-to-use tools which entice us to create tests quickly through record and playback, producing brittle test code. They also promise to test the application end to end, which is needed to test the complete integrated application.

But why does code that depends on the GUI fail? (GUI testing from the tools' perspective)

One of the compelling answers I found is that the GUI is a human interface, used by people like you and me, while any other interface is a programmatic one, used to consume your application as a service. Published programmatic interfaces change less, because any change to their contract will break the external programs which consume them. But a GUI, being used by humans, can change freely, for two reasons.

1) If internal HTML properties like id or class change, the user is not directly affected.

2) If the appearance, location, text or description of an object changes, the user may not identify it immediately, but human beings have the ability to adapt to change, understand, and act accordingly.

The problem comes when a GUI automation tool tries to use the application's GUI as a programmatic interface. The tool expects a constant interface contract (i.e. that objects and their properties will not change from build to build) through which it can consume the application's GUI as a service. But this is not possible, because a GUI is not meant to be consumed by programs in the first place, and when the GUI changes, the contract between the automation tool and the application is broken. GUI testing tools also don't have the ability to understand and adapt, unless they have AI built into them :).

So the better solution is to concentrate your testing on a programmatic interface which can work with domain objects directly. This also helps you create a cleaner application design by keeping the GUI layer lean, with no complex domain logic. Testing is also faster when we bypass the GUI layer, and the test code becomes more domain-oriented, since testing concentrates on the actual functionality of the application rather than on the GUI layer. Tools like FitNesse and Concordion are suitable in this context.

But is testing through GUI not necessary?

The answer is that we do need it. We need a set of tests which run against the GUI to test it, as well as to give us feedback at the end-to-end level. The GUI also needs to be tested when we have client-side components like Ajax, applets or others. A plus side of GUI testing tools enforcing a constant interface contract is better naming of HTML objects, like <HtmlInput id='txtUserName'> instead of an absurd one like <HtmlInput id='txt0012345'>.

So this again brings us to the idea of having a small set of tests which work on the GUI, using tools like Watir or Selenium, but concentrating most of the tests on a programmatic interface to test the domain layer directly.

Saturday, August 30, 2008

Announcing the release of rrd-rb

I am happy and thrilled to announce the release of my second open source project, rrd-rb. rrd-rb is a simple round robin database binding written in Ruby. I normally use it to visualize the response times of the websites I monitor using schnell.

A Round Robin Database is an open source industry standard for graphing solutions and can be used to store large amounts of time-series data in minimal space. A good tutorial about Round Robin Databases is available in

Currently, as the code is still being improved, I have not made a packaged release. The source is available in SVN and can be downloaded, and the wiki has installation instructions. The code comes with an example usage of the API; more examples are on the way.

Sample graph generated from random numbers -

Please let me know your views and comments. If you wish to contribute, please let me know. Till then, happy coding.

Thank you, Corey Goldberg, for creating rrdpy (the Python binding), which gave me the idea to do this Ruby binding.

Wednesday, August 27, 2008

Building a Google Reader API in Ruby

I was getting a little bored with the usual work and was trying to pep it up with some hacking. I have always wanted an API for Google Reader so that I can integrate it with something, or maybe build a console app to read the posts :). So I started my journey into the mysterious land of APIs and service interfaces.

I found that Google has not published an API for Reader. So I thought maybe I could put together a prototype to read my subscriptions from Google Reader and print them. I used Mechanize for this, and the following is the code I wrote.

require "rubygems"
require "mechanize"

# The URL values were blank in the original post; fill in the Google
# login and Reader URLs before running.
base_url = ""
login_url = ""
login_done_url = ""
reader_subscriptions_url = ""

agent =
base_page = agent.get base_url
login_page = agent.get login_url

login_form = login_page.forms.first
login_form.Email = "your google id"
login_form.Passwd = "your google password"

begin
  agent.submit login_form
rescue WWW::Mechanize::ResponseCodeError
  # Login succeeds but the response comes back as HTTP 501,
  # so the exception is swallowed here.
end

agent.get login_done_url

subscriptions_page = agent.get reader_subscriptions_url

subscription_names = []
feed_urls = []

names ="//div[@class='subscription-title']")
names.each { |name| subscription_names << name.innerText }

urls ="//input[@class='chkbox']")
urls.each { |feed| feed_urls << feed['value'].sub!('feed/', '') unless feed['value'].grep(/feed/).empty? }

subscription_names.each_with_index { |feed_name, index| puts "#{feed_name} --- #{feed_urls[index]}" }

This code shows the subscription names and the URL of each subscription. I was planning to add labels to it, but that needs more work. There is also some problem with the login: it logs in, but fails with an HTTP 501 exception, which is why the code is wrapped in exception handling (I still need to figure that out). Maybe in the future, when I find time, I will try to turn this into a complete Google Reader API.


Tuesday, August 26, 2008

Do we need to test hidden fields?

I got into this moral dilemma when I was writing the schnell driver's hidden field (input type="hidden") tests. When I wrote schnell with HtmlUnit I used something close to the Watir API, so I wrote the hidden field tests, and in particular had the hidden set-value tests as part of the test suite. But while writing the WebDriver port, I suddenly questioned the need for tests that set values on hidden fields.

In normal programming terms, when writing unit tests we do not test the private members of a class, because we exercise them through the available public interface. So there is no need to test the private members, and if we feel we need to, it means we have some hidden complexity in the private methods' code that needs to be refactored. Along the same lines, a hidden field can be considered a private member, or more appropriately a private variable, of the form. The value that goes into a hidden field may be filled in directly to create the query string, or it may be manipulated by JavaScript before being used in the form post. If we want to manipulate hidden fields, we need to know all these implementation details and make our testing code perform them, which is not a good idea.

This leads me to a question: do we need to test, or more specifically set values in, hidden fields when we test an application? In my opinion we should not, as hidden fields are the private members of HTML. It is better to test their working through the form post or the JavaScript than to deal with them directly and test them individually.

Sunday, August 10, 2008

schnell moved to jruby 1.1.3 and htmlunit 2.2

Update - Available on 13/8/2008, as I have an issue to fix :)

I have been busy for the past two weeks with a few things, so no major work was done. But I found that HtmlUnit 2.2 and JRuby 1.1.3 had been released, so I moved schnell to the new versions and ran the unit tests... Everything ran fine (tests are the source of my happy life :)). I have also added support for hidden elements. The new version of schnell, 0.2.2, is available in the Google Code project.

I also moved my laptop OS from Mandriva 2007 to OpenSUSE 11.0. I will write about it separately... I may stick with it, as I have everything I need except a few quirky issues (separate post 8-)).

Download the latest version of schnell and enjoy hacking...

Thursday, August 7, 2008

Ruby: A Simple Web Site Monitoring Using Schnell

Update - A small correction in the notify method. Please refer to the code in the blog post for the change.

I have been working on schnell for quite some time and am moving towards another release. But schnell did not begin as an automation testing tool. I started writing schnell as a wrapper around WWW::Mechanize to help my team do automated monitoring of websites. We had the requirement of monitoring a set of production websites, checking their status every few minutes. But the condition was not just to see whether the websites were accessible, but to run a quick smoke test on them and see whether they worked.

A typical scenario would be to check that the website is available, log into it, and see if it works. Here is sample code for the site monitor we used to have. It has since matured into a version which I am planning to release as open source.

You need mechanize, rubyful_soup, rufus-scheduler and log4r for this code to run. You can also do this with the latest JRuby version of schnell, with a few changes in the notify method (exceptions are handled differently in the JRuby version). The advantage of the Mechanize version is that it can run on native Ruby. If you are running this code on the latest version, use JRuby instead of Ruby.


require "../hui"
require "log4r"
require "rufus/scheduler"
include Log4r

class TestSite1
  URL = "http://localhost:9999"
  def self.monitor
    browser = # Hui::Browser is assumed; the class name was lost from the original post
    browser.goto URL, "buttons1.html").click
    browser.button(:value, "Click Me").click
    raise "Not able to get PASS" unless browser.text.include?("PASS")
  end
end

class TestSite2
  URL = "http://localhost:8888"
  def self.monitor
    browser =
    browser.goto URL, "links1.html").click, "test1").click
    raise "Not able to get Links2-Pass" unless browser.text.include?("Links2-Pass")
  end
end

$sitelog ='sitelog')
$sitelog.outputters = Outputter.stdout

scheduler =
scheduler.start

trap("INT") do
  $ "stopping SiteMon..."
  exit
end

monitered_websites = [TestSite1, TestSite2]

def notify(website, message)
  if message.include? "Unable to navigate"
    $sitelog.error "#{website.const_get('URL')} #{message}"
  else
    $sitelog.warn "#{website.const_get('URL')} #{message}"
  end
end

$ "starting SiteMon..."

scheduler.schedule_every "60s", :first_in => "5s" do
  monitered_websites.each do |site|
    begin
      site.monitor
    rescue => ex
      notify site, ex.message
    end
  end
end

# keep the process alive; Ctrl-C (INT) stops the monitor
loop { sleep 3600 }

There are a few things I am exploring now: parallelism in monitoring, notification through Microsoft Office Communicator (we use mail as of now), and a better modular structure.

I have published this code, as well as the Mechanize version of schnell (I used to call it hui), under the GPL license as a download here. It is a very basic version of schnell, and not all conditions and exceptions are covered. If you want better functionality, you can use the latest version of schnell on JRuby. Please let me know if you have any suggestions or queries.


Thursday, July 31, 2008

Ruby: A simple tracer code to track method calls

I was working on a small piece of code and needed a tracer which could find out which methods were being called, in what order, with which arguments, and what value each method returned. The methods were public instance methods and I did not need a heavy or fully functional tracer. So I spent some time writing a small tracer for myself, as a small exercise in metaprogramming. Below is the code which came out of it... There may be better ways to do it, and I am learning them as I go along my journey of software development and testing... a pretty long journey, I suppose :).

module Tracer
  def self.included(klass)
    instance_methods = klass.public_instance_methods(false)
    instance_methods.each do |method|
      hook_method(klass, method)
    end
  end

  def self.hook_method(klass, method)
    klass.class_eval do
      alias_method "old_#{method}", "#{method}"
      define_method(method) do |*args|
        message = "#{method} called"
        message << " with #{args.join(',')}" unless args.empty?
        puts message
        value = self.send("old_#{method}", *args)
        puts "#{method} returned #{value}" unless value.nil?
        variables = []
        self.instance_variables.each do |variable|
          variables << "#{variable} = #{self.instance_variable_get(variable)}"
        end
        puts "instance variables - #{variables.join(',')}"
        return value
      end
    end
  end
end

class Person
  def greet(name)
    @name = name
    @greeting = "hello"
    return "#{@greeting} #{@name}"
  end
  include Tracer
end

Monday, July 28, 2008

Tools in Agile projects

Last week I read a paper written by Kent Beck for Microsoft about the usage and impact of tools on Agile development projects. On reading that paper I realized how ubiquitous tools have become in Agile development; we don't notice them as tools any more. Tools used in Agile projects are not heavy and bloated; they are lightweight, they support frequent change, and they support transitions across different activities in a short time. These tools do not hinder developers from doing the core work, yet still assist them effectively in their day-to-day work.

Agile projects use simple tools which, when seen alone without the corresponding process and principles, can seem extremely simple. JUnit, standing alone, is extremely simple, something any good developer could write, but when combined with Test-Driven Development it gives the developer the power to feel confident about the code they have written. Continuous Integration has no meaning alone, without tests, but in short iterations with tests and verifications it gives us the health of the project as a whole on an hourly basis.

All the tools used in Agile are like that: cards, JUnit, burn-down charts, open source tools. Tools chosen and available in a common area, tools which can be put together easily and also changed easily. But when combined and used with the process, they give enormous power to guide a complex and dynamic project towards success. Using bloated commercial tools in an Agile project is a sure recipe for disaster, as these tools resist change and transition and can't be redesigned for a purpose different from the one for which they were built. Tools which are simple to use and allow change and quick transition are the ones we need to seek. They may have disadvantages, but they also help us stay focused on the job at hand rather than spending time fighting with a tool over its configuration.

Sunday, July 27, 2008

Announcing Schnell release 0.2.0

It has been a busy week for me. Despite that I managed to pull together the next release of Schnell. This release has a lot of new features and a lot of refactoring. There is still plenty of work to be done, but I believe in moving step by step, or as in Agile, release by release towards the goal.
With the latest release you should be able to run roughly 1000 tests in a minute (all the developers are staring at me for not letting them go on a coffee break during the build :0).

New features in this release

1) Addition of collections like images, links, text_fields, buttons, etc...

2) Addition of non control elements like span, div, pre, etc...

3) Addition of text area (oh my god!! I forgot to add hidden... No worries, it will be there in the next release)

Major refactorings were done in this release, but there is still a lot more to do, so I have added a Todo file to keep track. I am also planning to add these to the issue tracker to make it effective. A few non-project issues as well: as the code base grows, deployment is becoming an issue, so I am planning to automate it and maybe share my experience with others.

Planned for next release

1) Support for nested elements (frame, div, span, area...)

2) Support for tables, rows, cells, elements inside tables.

3) Support for hidden (Left out in this release :( )

4) Support for xpath for identification needs to be worked on.

5) Lots of refactoring, especially in the locator... (I feel some design pattern, probably Strategy, may fit there... Any suggestions?)

6) One known defect, which I found myself :) (rdoc is displaying each file twice... No time to fix it now, so it's in the backlog. Issue added.)

Lots and lots more to be worked on... Happy hacking. Expect the next release in the next few weeks.

Friday, July 25, 2008

Thoughts on truthfulness as agile value

I read this nice post on InfoQ... actually I had to dig it up from somewhere as it was already lost from my reading list. It discusses truthfulness as an agile value. I have a lot of respect for the agile methodology, as it is the only methodology which considers values one of the main ingredients of software development (the others being practices and principles). Practices like TDD and continuous integration don't mean much when they stand alone; they need to be based on values. Values decide why we do something and why we don't. We have a daily standup meeting as a practice because we value communication; we use test driven development and continuous integration because we value immediate feedback. But values and practices are two distinct things. Because we value communication, we can't add writing a 1000 page document as a practice when the message can be conveyed face to face in a meeting. To avoid such unnecessary practices, and to help us add new practices when we feel something is lacking, we have principles. Kent Beck gives an analogy of principles acting as a bridge between values and practices.

Now coming to the post's actual message: do we need truthfulness as an agile value? If we look at the agile practices, onsite customer, daily standup meeting, continuous integration, collective ownership, the planning game, all have truthfulness implicit in them. If we are not truthful about the current status of the project, the planning game and the whole-team approach will fail, and we can't have collective ownership. If we are not truthful about the estimates, release planning and iteration planning won't get done properly. If we don't value truthfulness, we can't work in pairs, as we have to admit when we make a mistake and also accept when the other person has a better design decision than ours. But making truthfulness an explicit value helps us keep it in context when we make decisions.

People may argue that truthfulness is needed for every development project whether we follow agile or not. So are the other values of Agile (simplicity, feedback, communication, and courage). But keeping these values explicit helps us keep them up front when we make decisions and add practices, and keeps those practices aligned with our basic principles. If we keep them implicit, we may not take notice of them as we go along using the methodology.

In a complex process like software development it is easy to lose perspective of our values while working. In my opinion, having the values and principles as explicit as possible is always beneficial, so that the next time we make a decision we take all of them into consideration. Truthfulness always has to be present for agile to work successfully, but we can make it explicit to help people use it as a guiding value in their day to day work.

Friday, July 18, 2008

Oh my God!! Estimations do change

It is sad to see so many projects get the shocking news that their schedule is not going according to what was planned and estimated. I work as a consultant on site at client locations. At times I study the feasibility of automating an application, and at the end the manager who interacts with the client from my side usually urges me to estimate the project accurately so that after we get the project it won't lead to any issues once the project starts. There are a lot of things fundamentally wrong with that expectation. The first is that a software development project is a dynamic system, and with so many unknown factors it is never possible to predict anything remotely accurately. The second is that the only thing we know for sure about a project plan is that it is bound to change. Finally, and most important of all, an estimate is a prediction of the future, and as far as I know nobody can predict the future accurately unless they have extremely high levels of ESP.

Are there ways in which we can make estimations more accurate? What mistakes are we making currently? I write this in relation to testing projects, as I work mainly on them, but these are general principles of estimation. I use this analogy because I can draw it easily from the environment I am currently in, but feel free to switch it to any other scenario with similar circumstances.

Looking at current projects, estimation is done up front for all the requirements as one big task and never seems to be updated at later stages of the project. Typically the estimates come to months of effort, or worse (to the delight of the manager), years. At a later stage of the project, changing an estimate is considered a sin in a lot of companies, as it will reduce your reputation in the eyes of the customer. This is quite understandable, but it has to be taken into account that estimates are visions of the future; they are bound to be inaccurate if we make them once and follow the plan until the end. As I have been working on these for some time, I try my own experiments and gather my thoughts on what works and what won't. I write my observations here. They are in no way exhaustive, just a small set of practices I found useful.

1) Estimate in short iterations. If you try to estimate over a longer period, the mistakes we make add up and the estimate becomes more inaccurate. When working in iterations, make the estimate concrete for the iteration which is starting, plan for the next iteration, and treat anything after that as a vision. It is not a problem to have a vision of what should happen, but don't expect it to work out exactly as planned.

2) It is okay to create a release plan by taking all the functionalities into consideration, but it should be noted that the release plan is bound to change. After the project starts, the project's current status should be used across the iterations to change the release plan for the better, rather than keeping it constant and forcing people to work to it.

3) It is better to estimate a small amount of work at a time. The more work you take on, the longer it will take to complete, and a longer time period leads to inaccurate estimates. Otherwise we are back to square one with a bulky estimate.

4) Estimates are typically made by some consultant and then handed to the project manager, who will use a team of developers to complete the work. Unfortunately this won't work most of the time, for the reason that each developer works differently. It is better to get the developers together and ask them to estimate, as they know their own capacity. Forcing developers to accept what you think is best won't be a good choice; discuss with them, and if you don't feel comfortable you can ask for a justification.

5) Estimating in iterations will also help you understand the speed of the project as well as the pain points hindering it. Using the previous iteration's experience helps you estimate better for the next one. You can also use this to update the release plan to make it more realistic for the moment.

Don't strive for a more accurate algorithm; estimation is always a guess. Putting a lot of effort into making it slightly more accurate won't be worth the effort spent in calculating it.
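As a rough sketch of point 5, here is what using previous iterations' experience might look like in code. The numbers and the simple average are my own illustration, not a prescribed formula:

```ruby
# Hypothetical iteration history: story points actually completed
# in the last three iterations.
completed = [18, 22, 20]

# Velocity as a simple average of past iterations.
velocity = completed.sum / completed.size.to_f   # => 20.0

# Project the remaining work using that velocity; the result is a
# guess to update each iteration, not a promise.
remaining_points = 120
iterations_left  = (remaining_points / velocity).ceil   # => 6

puts "velocity: #{velocity} points/iteration"
puts "roughly #{iterations_left} iterations to go"
```

Recomputing this every iteration is exactly the feedback loop the post describes: the plan stays realistic because the numbers feeding it are fresh.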

A lot of these points correspond to practices in agile development. But I tried these methods when I faced problems with estimation while working on site with a client, and only incidentally learned that they are also used in Agile. The woes I faced when we were fighting over an unrealistic estimate made me rethink the way estimation has to be approached, and the above points are the result. I am not advocating that you follow any agile methodology, even though it would be better and I am a big fan and follower of it; these methods can be used without adopting agile and still work together to make your estimates more accurate. Finally, I urge you to understand and approach estimation in a humane way rather than thinking of it as a mere part of a big process.

Monday, July 14, 2008

Integrate early and soon...Please....

I have been talking with various leads and managers across my practice to understand the reasons behind delays in project deliveries. I work in a testing practice as a developer in test, and I wanted to understand why there seems to be an ever increasing number of delivery failures and delays. Well, there are lots of reasons:

1) Not following an iterative model

2) Not working closely and getting immediate feedback from the clients.

3) Too much emphasis on unnecessary and bulky process instead of delivering value

4) Too much emphasis on fancy frameworks

5) But an oft-missed point is the final dash to integration.

It is normal for me to work on a testing-only project, where the team builds the regression test scripts for an application. This post is for teams working on such projects, though the general principle applies to the whole development community.

Whenever I meet the lead of a team confident of his project's success, the first question I ask is how often they integrate and run their scripts. The answer always seems to be: we have a well defined process, a time tested framework and coding standards; let the boys concentrate on coding now, we don't need to worry about integration, we will do it before we deliver. But every time, I see them struggling to glue the code base together to make sense out of it. Haven't we all learned that even with the best process in the world, differences, however subtle, can disrupt it and send us scurrying to find the cause? Haven't we learned that if we don't do something, we can't be confident of it?

Even if we follow all the best practices in the world within the team (personally, I don't believe practices are context dependent), there are differences in the way each person works. And without constant feedback that what we are doing individually works together as a single entity, there is no way we can march towards the deadline. Integrating early and often is the only option we have on projects, and not only for development teams but also for teams creating test code. It eliminates the doubt and risk of hassles at delivery and helps us deliver on demand. Whenever a piece of work is completed during the day, integrate it with the existing test code base and run it against the application. Most of the existing continuous integration servers help you with this task. Even on purely automated testing projects, which I sometimes work on, opt for continuous integration to see whether the scripts or code you are writing work together, and not just as individual modules on a developer's machine. Continuous integration also lets us run the scripts in a standardized environment rather than individual ones, and it helps catch the failures that happen when someone's change breaks another person's work, which last minute integration cannot do.

There is the overhead of an integration machine and build scripts, which a testing team would normally not prefer, but it comes with the advantage of giving us feedback about the project's health, eliminating the big bang integration at delivery, and helping us eliminate errors and differences as soon as possible. When you make integration a non-event, it is surprising how easy it becomes to deliver the code without hassles.

If you want to know more about continuous integration, see the article by Martin Fowler.

There is also the book Continuous Integration: Improving Software Quality and Reducing Risk by Paul Duvall.

Saturday, July 12, 2008

Announcing release of schnell

Tests are a valuable source of feedback in any product development; the faster the feedback, the better the product quality. Typically, the major tools in the market and in the open source space which test the UI for functional and regression testing are dreaded on agile projects because of their slowness.

Schnell was born out of the need for fast automation tools. Built on JRuby and HtmlUnit, it uses a Watir-like API for scripting. The installation guide, user guide and examples are available on the project site schnell-jruby.

Initial benchmarks show that a test suite with 54 tests and 340 assertions takes around 4 seconds to run. Watir runs the same suite in around 150 seconds (not bad :)). I wonder how long commercial tools like QTP or WinRunner would take to run the same.

Check out the project and let me know your feedback. Hope it will be useful for you.
If you face any problems, please log them in the issue tracker and I will try to work on them as soon as possible.

Finally, more volunteers are needed to improve the code base and add new functionality. Some major refactorings and new features are planned for the upcoming releases, so if you want to join, drop me a mail. Happy hacking.

Wednesday, July 2, 2008

What are unit tests anyway?

Well, I haven't posted for a long time... It is almost 3 AM in Shanghai and I am still not asleep. What keeps me awake is the question "What are unit tests anyway?" :).

I am an ardent fan of test first programming and have been doing XP for the past 1 1/2 years. I am writing this for people who are new to XP or TDD. These concepts have been around for a long time; I am elucidating them here to show my understanding.

Well, let's take off... Unit tests are:

1) Obviously tests, and also a source of feedback, a safety net, a confidence builder, the saviour of the project, with many other gallant titles to follow. I am not going to concentrate on this because it is already covered a lot elsewhere.

2) When you are testing a class, they provide the client's view of the usage of the class, or of the interaction of the class with other classes (via mocks). This is the more interesting one. It helps us in two different ways.

One is that it shows whether the class, its members and its parameters glue together properly. In other words, by using the class in a unit test we can see if we have structured the class properly. Things like proper naming of classes and methods come out during this.
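A tiny illustration (the Account class is invented for this post, not from any real project): the unit test is the first client of the class, so clumsy names or parameter orders show up the moment we try to use it.

```ruby
# Invented example class; the "test" below is its first client.
class Account
  attr_reader :balance

  def initialize(opening_balance = 0)
    @opening_balance = opening_balance
    @balance = opening_balance
  end

  def deposit(amount)
    @balance += amount
  end
end

# Reading this as a client tells us whether the pieces glue together
# naturally: the constructor argument, the method name, the reader.
account = Account.new(100)
account.deposit(50)
raise "expected 150" unless account.balance == 150
```

If this snippet felt awkward to write, that awkwardness is exactly the design feedback the post is talking about.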

The other is that it shows the distribution of responsibility in a class. This is a difficult part of object orientation. When we see a method being called on an object, we see that the method is fulfilling one of its responsibilities. It also makes us think about whether the object should have that responsibility at all. Do we need to move it to a new class or to an existing one?

These questions lead to a lot of refactoring, and then again the unit tests make that refactoring safer. This second point leads us to Test Driven Design: the unit tests are actually shaping the classes and driving their design. Thinking about responsibility at an early stage in unit tests also leads to better responsibility distribution, to the Single Responsibility Principle and all such good things.

I am a big fan of tests and welcome them whenever possible. But it is the point about tests driving the design that makes test first the more favourable approach for me.

Well, there are a lot of good things about unit tests... but with sleep fast approaching I think I have covered the most important ones.

If you want to agree with me or contradict me, please add it to the comments.

Thursday, May 8, 2008

Renewed thoughts about DRY in Testing

This is in reference to my previous post on DRY and testing, in which I made the point that DRY is a principle which must be upheld not only in development but also in testing. But there is something special about tests which distinguishes them from development code: tests are actually examples which everybody uses as guidelines for development. They are executable specifications and help people understand the software. Being used as examples, they demand expressiveness and ease of understanding.

DRY can make code terse and compact for maintenance, but when it comes to test code we must compromise some of that for understandability. I realized this when I remembered that tests are not only code but also serve as examples. Brian Marick has an interesting post on this subject in his blog.

I think this is something important I learned. But it is necessary to keep the balance between not letting repetition into the code and letting the tests serve as examples.
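A small illustration of that balance (the discount rule is invented for this post): a fully DRY, table-driven test is compact, but as an example it makes the reader unfold the loop to learn the rule; a couple of literal cases read more like a specification.

```ruby
# Invented rule under test: orders of 100 or more get 10% off.
def discount(total)
  total >= 100 ? total * 0.9 : total
end

# Fully DRY, table-driven: compact, but the rule is hidden in the data.
[[100, 90.0], [50, 50]].each do |total, expected|
  raise "failed for #{total}" unless discount(total) == expected
end

# Slightly repetitive, but each line reads as an example of the rule:
raise unless discount(100) == 90.0   # 100 or more: 10% off
raise unless discount(50)  == 50     # below 100: full price
```

Neither style is wrong; the point of the post is that for tests, the second form's value as a readable example can outweigh the duplication.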

Wednesday, May 7, 2008

Acceptance Test Driven Development of DSL

Domain Specific Languages have become very popular lately as a way of abstraction, and there has been a lot of discussion about the different forms of DSLs and their role in communicating intent more clearly. For a quick introduction to DSLs, take a look at the wiki entry on DSLs or the entry by Martin Fowler in his bliki.

This is my experience of designing a DSL, as I have been involved in working on a fluent interface. It is based on Watir, intended to let testers write more expressive scripts, and we are planning to make it open source soon. The first problem my friend and I ran into when we started developing it was being unable to decide on the best way to represent the interface. So we thought we would create a basic set of interfaces, start developing from that initial design, and refactor the initial set if we felt we could do better. But we still wanted to make it more effective by involving more people, so we thought it would be fun to try an acceptance test driven approach to the development of this DSL.

We got a set of interested people and, with them as customers, worked out the initial set of fluent interfaces as acceptance tests. Then we started developing the interfaces. Whenever people wanted to improve the interfaces or add anything new, we had a discussion, and once we reached a satisfactory design we updated the acceptance tests. As the acceptance tests guided us through the process, helped us visualize the usage of the DSL, and were written from the customer's perspective, the DSL was a success within our company.

The most valuable lesson we learned was that for DSL development, using the customers, i.e. the domain experts, as the designers is most important, and that coding the usage of the DSL as acceptance tests acts as a guide map and a set of examples for the development of the DSL. As the acceptance tests model the usage, we could also improve the interfaces when needed by seeing them in the tests.
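As an illustration of the approach (the interface below is invented for this post, not our actual DSL): the agreed usage is captured as an acceptance test first, against a stub that only records calls, so the test can run before the real implementation exists.

```ruby
# Stub that records calls made through the fluent interface, so the
# acceptance test below can run before the real code is written.
class LoginFlow
  attr_reader :calls

  def initialize
    @calls = []
  end

  def username(name)
    @calls << [:username, name]
    self   # returning self is what makes the interface fluent
  end

  def password(secret)
    @calls << [:password, secret]
    self
  end

  def login
    @calls << [:login]
    self
  end
end

# The acceptance test: this is the exact usage the "customers" agreed on.
flow = LoginFlow.new.username("sai").password("secret").login
raise "flow broken" unless flow.calls ==
  [[:username, "sai"], [:password, "secret"], [:login]]
```

When the interface changes in a design discussion, this test changes first, and the implementation follows it.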

If you have any thoughts related to the design of DSL and ATDD I welcome your comments.

Friday, April 25, 2008

Ruby: Builder Pattern

This morning I was reading through blogs when I found this interesting post from Venkat on Groovy fluency. It is actually a builder pattern for constructing emails. I found the example intriguing, and as I am a Ruby guy (not a geek or a hacker; I am not sure I am qualified to call myself that :-) ) I wondered if I could write it in Ruby.

The problem is to create a mailer class in which you can set from, to, subject and body and send the mail. The from, to, subject and body need to be passed through a block to the send method of the mailer. Venkat has refactored to a stage where the block you pass to the mailer reads like this:

from "Sai"
to "matz"
subject "Ruby"
body "I love ruby"

I knew the solution was in using instance_eval. The problem was how to delegate each line of code in the passed block to be executed in the context of the mailer. First I tried to yield inside the instance_eval block; unfortunately that did not work out. Later I figured out that if I pass the block itself to instance_eval, it gets executed with respect to the mailer instance. So here it goes:
class Mailer
  def from(fromAddress)
    puts "from : #{fromAddress}"
  end

  def to(toAddress)
    puts "to : #{toAddress}"
  end

  def subject(theSubject)
    puts "subject : #{theSubject}"
  end

  def body(theBody)
    puts "body : #{theBody}"
  end

  def self.send(&blk)
    m = Mailer.new
    m.instance_eval(&blk)   # run the block with the mailer instance as self
    puts "sending the mail..."
  end
end

Mailer.send do
  from "Sai"
  to "matz"
  subject "Ruby"
  body "I love ruby"
end

The pattern used here is the Builder pattern, normally employed to construct complex objects out of different components. Take a look at the GoF patterns book or Ruby design patterns for reference.
Known usages, as far as I know, are Markaby, XML Builder, and the Shoes GUI toolkit. Maybe more are out there.
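The XML Builder idea can be sketched in a few lines with method_missing; this is a toy version of my own, nothing like the real library's feature set:

```ruby
# Toy XML builder: unknown method names become tags, blocks nest,
# and everything builds into a single output buffer.
class TinyXml
  def initialize
    @out = ""
  end

  def method_missing(tag, content = nil, &blk)
    @out << "<#{tag}>"
    if blk
      instance_eval(&blk)    # nested tags append to the same buffer
    else
      @out << content.to_s
    end
    @out << "</#{tag}>"
    @out
  end
end

TinyXml.new.html { body { h1 "Hello" } }
# => "<html><body><h1>Hello</h1></body></html>"
```

It is the same trick as the mailer above: instance_eval makes the builder instance the implicit receiver inside each nested block.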
If you know of any other usage in Ruby, please add it to the comments.

Hope this post was as useful for you as it was for me.

Tuesday, April 22, 2008

DRY and Testing projects

As an agile developer, a lot of practices figure into my way of working, like pairing, refactoring and TDD (Test Driven Design or Development; I prefer Design). Of all these practices, the one which is the most important, is often mistaken to be the simplest, and has the most definitive effect is DRY, short for Don't Repeat Yourself.

The basic principle is to remove duplication in any kind of knowledge. The catch which most people fail to get is that this applies not only to code: we should not have duplication in code, data, tests, configuration, environment setup and teardown, or even documentation. It is a well known fact that duplication increases the maintenance effort of the code and leads to logical inconsistencies and poor factoring.

In my role as a developer on an automation testing team, I get to see a lot of people writing a lot of code. One of the sad things I notice most often is that when writing test code, people tend to leave out practices like DRY or refactoring. The reason may partly be that they are not familiar with them, or that they think of test code as write-and-throw-away code, a trend partially encouraged by GUI recorders and code generators. I got the idea for this article from a pairing session with my friend while writing some code in Watir.

We have a hybrid driven framework for Watir used internally within the company. One of the aspects of the framework is its ability to create a suite by selecting test cases from an Excel sheet. It used three different workbooks for this purpose: one for creating the business flow from individual business components, one for suite creation, and one for the test data. The problem is that the test cases are duplicated between the business flow sheet and the suite selection sheet. When I pointed this out to my friend, his instant reply was that it was not in the code and so it was not a problem. This seemed pretty odd to me, and I explained that even though there are few test cases now, it is twice the job whenever we need to update something, and if the test cases increase, the work becomes even more difficult. So we moved the suite control as well as the business flow into a single sheet. This removed the duplication we had, as well as the maintenance attached to it.

The duplication here is subtle, but these are the places we usually miss until they grow into a larger problem. Most of the time we concentrate on code and forget to make our test data, environment, configuration, build scripts and test code DRY. I see a lot of testers write code for testing but fail to understand that tests are also code which needs to be maintained. This leads to brittle, throw-away test code with hard-coded data, duplicated logic and no refactoring. Such code is difficult to maintain, and throwing it away is an overhead.
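A common case from test code (the URL and steps are invented for this post): the same login sequence pasted at the top of every script, versus extracting it once so a changed login page means a one-line fix.

```ruby
# Before: these Watir-style lines appeared, copy-pasted, in every script:
#
#   browser.goto LOGIN_URL
#   browser.text_field(:name, "user").set "admin"
#   browser.button(:name, "login").click
#
# After: the flow is defined once and shared. The helper below is stubbed
# (it returns the steps instead of driving a browser) so the shape of the
# refactoring is visible without a browser.
LOGIN_URL = "http://example.test/login"   # invented URL

def login_as(user)
  [
    [:goto, LOGIN_URL],
    [:set, :user, user],
    [:click, :login],
  ]
end

steps = login_as("admin")
raise "login helper broken" unless steps.first == [:goto, LOGIN_URL]
```

Whether the duplication lives in code or in a spreadsheet, the refactoring is the same: give the knowledge one home and reference it from everywhere else.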

DRY is one of the most important principles of Agile because it helps create a design with maintainability built into it. It is important to realize this and to refactor our work at regular intervals to reduce duplication wherever it is, whatever code it may be in.

Monday, April 21, 2008

Meta programming and documentation

I have been working with Ruby for some time, and one of the things which attracted me to Ruby is its metaprogramming capability. I work with DSLs a lot in Ruby and am currently working on a fluent interface for Watir. Naturally, metaprogramming comes into the picture in a lot of places in this implementation.

There are two problems I faced while using metaprogramming. One is that it hinders auto-generated documentation like rdoc; the other is that without proper documentation it is difficult to understand what you have written after some time.

The first problem concerns the end user documentation. When I code an API, I normally write comments on the methods and classes and at a later stage use rdoc to generate the documents. But when I use metaprogramming, the code I am writing becomes concise but loses its meaning.

For example, in one of my projects I wrote a geocoding API wrapper for the Yahoo geocoding service. I used a class called Location on which I can set street, city, zip code or address as attributes. Initially I wrote these as individual accessors, but I found that too verbose and repetitive, so I used method_missing to accomplish it.

class Location
  def method_missing(method_name, *args)
    attr_name = method_name.to_s.chomp("=").to_sym
    if %w[street city state zip address].include? attr_name.to_s
      (@messages ||= []) << attr_name.to_s
      # define a real accessor on this instance's singleton class
      instance_eval %(
        class << self
          attr_accessor :#{attr_name}
        end
      )
      send(method_name, *args) if respond_to?(method_name)
    else
      raise NoMethodError, "No method of name #{method_name}"
    end
  end
end
The job is accomplished easily, but some of its meaning is lost. I can still write the comment there, and rdoc will pick it up and use it, but it will not be as effective as normal documentation. The only (ineffective) solution I can think of is to use dummy methods which mock the actual methods, with comments, during document generation. I am not able to think of a better way.
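The dummy-method workaround I mean would look roughly like this. The GENERATING_DOCS flag is invented for the sketch (rdoc has no such built-in switch); a real setup would have the doc build task set it before parsing the sources:

```ruby
# Stub definitions that exist only so rdoc sees ordinary, commentable
# methods; the real accessors are created dynamically via method_missing.
GENERATING_DOCS = true   # would be set by a (hypothetical) doc build task

if GENERATING_DOCS
  class Location
    # The street part of the address.
    attr_accessor :street
    # The city part of the address.
    attr_accessor :city
  end
end
```

The stubs are harmless at runtime (they behave like the dynamic accessors), but maintaining them by hand is exactly the duplication method_missing was meant to remove, which is why I call this solution ineffective.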

The second problem is more from the developer's perspective. When using metaprogramming like this, most of us feel happy that the work is done in fewer lines. Unfortunately, the code's expressiveness is lost, so when you read it after some time it is difficult to understand why you actually used it. I normally follow TDD, and even with the tests expressing my intent, I find it difficult to understand metaprogrammed code just a few days after writing it.

One of the important things when using metaprogramming is to write a comment conveying your intent along with the code. This goes a long way towards helping you at a later stage when you want to change something.

If you have anything related to this or want to discuss more in this regard please drop me a mail or add it to the comments.

Wednesday, February 27, 2008

Introduction to Rubinius

I have been working with Ruby and JRuby for quite some time. MRI is the interpreter I have been using from the beginning for Ruby development, and now that Ruby 1.9 has come out, things have become really interesting.
Rubinius has caught my attention for a while now: it is known to be fast, has some of the features Ruby needs, and removes some which Ruby doesn't need.

Though I have not worked with Rubinius extensively, these are the features I found interesting or felt were needed in Ruby:

1) Rubinius compiles Ruby code to .rbc (Ruby compiled) files, which are used for execution thereafter, so execution speed is better in Rubinius. Having first worked with Python, this is one of the features I expected from Ruby by default.
2) Rubinius can archive the source files as Ruby archives, similar to Java jar files, which is a very compact way of deployment.
3) Support for concurrency in the Erlang actor model. Ruby 1.9 also has something like this in the form of fibers.
4) FFI, the Foreign Function Interface. Not sure exactly what it involves; I need to explore this more.
5) Additional metaprogramming capability through MetaClass, a subclass of Class. Again, I need to explore more. This feature seems interesting as I work a lot with metaprogramming.
6) It also offers some runtime introspection capabilities.

The Rubinius VM is not entirely compatible with MRI code.
For example, the Object#freeze method has been removed. Also, multiple assignments like a, b = 1, 2 return true and not an array of the RHS arguments; this removes the need to create an array.
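The MRI behaviour being contrasted here can be seen directly. In MRI, the value of a parallel assignment expression is the array of right-hand-side values, which the method below returns as its last expression; the post notes that Rubinius (at the time) returned true instead.

```ruby
# Under MRI, a parallel assignment evaluates to the array of RHS values,
# so this method returns [1, 2]; Rubinius (at the time) returned true.
def parallel_assignment_value
  a, b = 1, 2
end

parallel_assignment_value  # => [1, 2] under MRI
```

Building that array has a small cost on every multiple assignment, which is the saving Rubinius was after.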

So this is what I have read about Rubinius so far. I will write a lot more about the practical aspects when I start using it more. Till then, happy hacking.

Monday, February 25, 2008


This blog will serve as a learner's log. I work as a developer in test and work on Ruby, Rails, Scala, DSLs and Model Based Testing.

I will post at regular intervals about my learning, depending upon the time I get away from the job. Most of it will be about Ruby, Testing and Agile. Hope it will be useful for someone besides me.