Importance Of Releasing Often

One practice that I have come to believe is important is to have regular releases of software that is reviewed by business stakeholders. By regular I mean every 2 – 6 weeks.  These do not have to be full production releases, but they should be to at least a staging or beta platform that business stakeholders can use as if it were the final product.

Regular releases are important for two reasons.

  1. Business stakeholders get to see and provide feedback on the product long before it’s complete. Anything that is going off track can be identified and fixed earlier, at inherently less cost than if it were found later.
  2. The development team becomes practiced at releasing user-ready software. If business stakeholders use the released version, it must meet a certain level of usability and quality. If a team only releases once a year, or even less frequently, they are performing a task they do not do often, and most likely do not do well.

We have been releasing every 2 – 4 weeks on my current project and it is working very well.  Through this process we have been able to react very quickly to the requests of our business stakeholders, and we have also become very good at releasing updated versions of our software.

Regular releases also help teach a team to stick to a schedule, since the releases themselves become the backbone of the schedule.  In short, I believe it’s helpful to release early and often to those business stakeholders who will eventually decide whether what you’re working on was worth it to them.

There are some caveats to the above.  A post from another blog on this same topic has a comment pointing out that it might be a bad idea to release early and often for software that has life-and-death implications.  I would agree with this when it comes to shipping products for market use in those fields.  I do wonder, though, whether it still makes sense internally to do frequent releases to QA or internal business stakeholders as a matter of process.  For medical or navigation software I completely agree that final releases should be thoroughly tested and ready.

Here’s another post from a different blog about releasing early that talks about the pros and cons if you are trying to market your product in a stealth-like manner.  It makes the point that getting feedback from people who are not your real potential users may not be helpful.  I would agree; many times QA or internal business stakeholders are not the real end users, and their feedback may lead to features and changes the eventual end users won’t like.  Still, I think releasing to internal folks as a proxy for real end users is better than not having any interim releases at all; at least you know you are making measurable progress towards a final release.  The post brings up some really interesting marketing aspects I’d never really thought of, since I am mostly in the code these days.

The Rhino Mock Magic Of GetArgumentsForCallsMadeOn

One of the things that occasionally came in handy in Rhino Mocks, prior to its AAA LINQ-based syntax, was the ability to see whether operations on a mock were performed in a particular order. We could very easily set expectations inside an ‘Ordered’ block, and if things happened out of order, voila! you received an exception. It looked like the below:

[Test]
public void CompanySaveTest()
{
    Company comp = new Company();
    comp.Name = "CompanyName";
    comp.Address = "Add1";
    comp.Phone = "222*111*3333";
    comp.User.Name = "Bill";

    int newCompID = 4;
    int newUserID = 2;

    MockRepository repo = new MockRepository();
    IDbManager dbMock = repo.DynamicMock<IDbManager>();

    //--Record the expectations in the order they must occur
    using (repo.Ordered())
    {
        Expect.Call(dbMock.SaveCompany(comp.Name, 0)).Return(newCompID);
        Expect.Call(dbMock.SaveUser(comp.User.Name, newCompID)).Return(newUserID);
        Expect.Call(dbMock.SaveCompany(comp.Name, newUserID)).Return(newCompID);
    }
    repo.ReplayAll();

    //--Call method under test
    CompanyManager.SaveNewCompany(comp);

    repo.VerifyAll();
}

When we switched to the AAA-based syntax there was no way to reproduce the above, at least no obvious way. Luckily we did not need to check ordering too often, but when we did, a co-op student on my current project came up with a very effective alternative using the AAA syntax. The approach only supports checking the order of operations called against a single mock object, but at least that’s something. It’s not the full ordered ability of the past, but I’ll take it when it’s handy. At least we can ensure the save calls on the company are in the right order.

The method ‘GetArgumentsForCallsMadeOn’ can be called on a mock after it has been used, and it returns a list of object arrays. Each call to the specified method on the mock results in a new object[] in the list holding the parameters used for that call. More importantly, the object arrays are added in the order the calls were made, giving us a way to determine whether calls were made on the mock in the appropriate order. A simple example looks like the below:

[Test]
public void CompanySaveTest()
{
    Company comp = new Company();
    comp.Name = "CompanyName";
    comp.Address = "Add1";
    comp.Phone = "222*111*3333";
    comp.User.Name = "Bill";

    int newCompID = 4;
    int newUserID = 2;

    IDbManager dbMock = MockRepository.GenerateMock<IDbManager>();

    dbMock.Expect(db => db.SaveCompany(Arg<string>.Is.Anything, Arg<int>.Is.Anything)).Return(newCompID).Repeat.Any();
    dbMock.Stub(db => db.SaveUser(Arg<string>.Is.Anything, Arg<int>.Is.Anything)).Return(newUserID);

    //--Call method under test
    CompanyManager.SaveNewCompany(comp);

    IList<object[]> args = dbMock.GetArgumentsForCallsMadeOn(
        db => db.SaveCompany(Arg<string>.Is.Anything, Arg<int>.Is.Anything));

    //--Make sure SaveCompany was called twice
    Assert.AreEqual(2, args.Count);

    //--Check the calls were made in the right order by checking the updatingUserID of each call
    Assert.AreEqual(0, (int)args[0][1]);
    Assert.AreEqual(newUserID, (int)args[1][1]);
}

Now we can check the order of calls on the mock object in question. Another nice thing is that checks which were very cumbersome and not very readable as constraints and Arg<> checks in ‘AssertWasCalled’ methods can be done in a much cleaner way:

//--Here the mocked call being examined takes a Company as its first argument
Company companySaved = args[0][0] as Company;

//--Check to make sure the argument really is a Company
Assert.IsNotNull(companySaved);

//--Now check whatever we like in a cleaner way than constraint checking
Assert.AreEqual(newUserID, companySaved.LastModifiedUserID);

One thing to remember, however, is that when a mock records calls it captures references to the arguments. When you do comparisons on reference arguments you are comparing the data as it is on the object at the time of the assert, not at the time of the call. This means that if you have a ‘Customer’ object whose ‘Name’ property is ‘Larry’ when it is passed to ‘UpdateCustomer’, and the property is later changed to ‘Bill’, then the ‘Name’ property will report ‘Bill’, its state at the end of the test, whenever it is interrogated in an ‘AssertWasCalled’, ‘AssertWasNotCalled’ or ‘GetArgumentsForCallsMadeOn’, regardless of what the value was when the method was actually called and recorded. This can be a pain when trying to do asserts, but such is life. In these cases you have to do the argument check in an expectation set up before the mock is used.
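
A minimal sketch of that pitfall, assuming hypothetical ‘ICustomerService’ and ‘Customer’ types (not our real interfaces):

ICustomerService serviceMock = MockRepository.GenerateMock<ICustomerService>();

Customer customer = new Customer();
customer.Name = "Larry";

//--The mock records a reference to this same Customer instance
serviceMock.UpdateCustomer(customer);

//--Mutate the object after the recorded call
customer.Name = "Bill";

IList<object[]> args = serviceMock.GetArgumentsForCallsMadeOn(
    s => s.UpdateCustomer(Arg<Customer>.Is.Anything));

Customer recorded = (Customer)args[0][0];

//--This passes: the recorded reference reports "Bill", its end-of-test state,
//  even though "Larry" was the value when UpdateCustomer was called.
Assert.AreEqual("Bill", recorded.Name);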

Handling Dependent Expected Results

Recently the question came up of how to create expected results for a unit test when those results rely on a call to an outside helper function. As an example, in our current project we have a “Name” class that has a method for creating a string formatted as first name then last name, and another for creating a string formatted as last name comma first name. The Name class looks a little bit like the below example:

public class Name
{
    public string FirstName
    { get; set; }

    public string LastName
    { get; set; }

    public string GetFirstNameLastName()
    {
        return this.FirstName + " " + this.LastName;
    }

    public string GetLastNameFirstName()
    {
        return this.LastName + ", " + this.FirstName;
    }
   
}

The question that arose was what to do in tests where some expected result had been manipulated by a helper method, such as one of the name formatting methods in our “Name” class. For instance, let’s say we had a report class that created one row per customer with some information about that customer. The first field of the report is supposed to be the customer’s name formatted as last name comma first name. In the code we use the ‘GetLastNameFirstName()’ method of the Name object to get the string the report class prints. Let’s say the report has a ‘ReportDataSource’ that takes a Customer object with a Name property called ‘CustomerName’ and sets a string ‘Name’ like so (the below is not real, just for example’s sake):

public class ReportDataSource
{
    public string Name;

    public void FillItem(Customer customer)
    {
        this.Name = customer.CustomerName.GetLastNameFirstName();
    }
}

Let’s say we have a unit test for this that looks like the below; the gist of the question is what we should set as the ‘expectedName’. (The below test would itself be overkill in reality; again, it’s just for explanatory purposes.)

[Test]
public void GetCustomerName()
{
    //--Arrange
    string expectedName = "";

    Customer cust = new Customer();
    cust.CustomerName.FirstName = "First";
    cust.CustomerName.LastName = "Last";

    ReportDataSource reportData = new ReportDataSource();

    //--Act
    reportData.FillItem(cust);

    //--Assert
    Assert.AreEqual(expectedName, reportData.Name);
}

We came up with three possibilities. The first was to hard code the value, so we’d have a line that looked like the below:

string expectedName = "Last, First";

Hard coding seemed fine and makes the test very explicit. The worry for some was what would happen if we changed the behavior of the ‘GetLastNameFirstName()’ method? Then we would have to manually change every place where we’d hard coded what we expected it to do.

I agreed this was a concern, but my thinking was that from a testing perspective we weren’t concerned with how the method did its work; we wanted to see that exactly that hard coded string was returned given the input we’d provided. Someone pointed out that this wasn’t necessarily the case: we did not really want to test for our hard coded string, since we wanted to rely on the Name class, and we wanted changes there not to break our test. The entire reason for centralizing name formatting was so changes in name formatting would only have to be made in one place. I had to agree.

The next thought, if we wanted to make sure the Name object was involved, was to mock the Name object.  This sounded like a good idea: we wanted to rely on the Name object and wanted our test not to break if the Name object changed its behavior.  However, as I thought about it, it started to seem like overkill. To easily mock the Name object we’d have to extract an interface and go through the extra effort of creating and programming a mock, all to isolate the behavior of a method that just reformats a name.
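
For context, a rough sketch of what that mocking approach would have required (the ‘IName’ interface is hypothetical, and it assumes ‘Customer.CustomerName’ could be typed against it):

//--Hypothetical interface extracted just so the formatting call can be stubbed
public interface IName
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string GetFirstNameLastName();
    string GetLastNameFirstName();
}

//--In the test, assuming Customer.CustomerName accepts an IName:
//  IName nameStub = MockRepository.GenerateStub<IName>();
//  nameStub.Stub(n => n.GetLastNameFirstName()).Return("Last, First");
//  cust.CustomerName = nameStub;
//  string expectedName = "Last, First";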

I could see this being necessary if the method did something that wasn’t repeatable.  Let’s say our method created a GUID; we’d have no idea what each new GUID would be, so we’d have to mock the call that creates the GUID if we wanted to test any output containing it (a rough sketch of that case is below).  Every call to ‘GetLastNameFirstName()’, however, will produce the exact same output given the same input, which led me to the solution I liked best.
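
Here is that sketch of the non-repeatable case; the ‘IGuidGenerator’ interface and ‘ConfirmationBuilder’ class are hypothetical names, not code from our project:

using System;

//--The GUID source is pulled behind an interface so a test can control it
public interface IGuidGenerator
{
    Guid NewGuid();
}

public class ConfirmationBuilder
{
    private readonly IGuidGenerator _guids;

    public ConfirmationBuilder(IGuidGenerator guids)
    {
        this._guids = guids;
    }

    public string BuildConfirmationCode(string lastName)
    {
        //--Output depends on a value we cannot predict ahead of time
        return lastName + "-" + this._guids.NewGuid();
    }
}

In a test, a stub makes the unpredictable part known in advance:

IGuidGenerator guidStub = MockRepository.GenerateStub<IGuidGenerator>();
Guid known = Guid.NewGuid();
guidStub.Stub(g => g.NewGuid()).Return(known);

string code = new ConfirmationBuilder(guidStub).BuildConfirmationCode("Last");
Assert.AreEqual("Last-" + known, code);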

Given that the ‘GetLastNameFirstName()’ method will produce the expected result we want, and that if its behavior changes we want to expect the new behavior, why not use the method itself to create our expected result?  Something like this:

[Test]
public void GetCustomerName()
{
    //--Arrange
    Customer cust = new Customer();
    cust.CustomerName.FirstName = "First";
    cust.CustomerName.LastName = "Last";

    //--Use the dependent method to create the expected data, since we don't want
    //  to test the dependent method, just make sure we stay consistent with it
    string expectedName = cust.CustomerName.GetLastNameFirstName();

    ReportDataSource reportData = new ReportDataSource();

    //--Act
    reportData.FillItem(cust);

    //--Assert
    Assert.AreEqual(expectedName, reportData.Name);
}

Some still preferred the idea of mocking, but this worked best for me. If the method is deterministic and doesn’t rely on anything outside the immediate application domain, why not let it act as its own mock? This keeps us from depending on the exact behavior of the methods in the Name class, since any changes there also change our expected results, and we avoid the extra effort of mocking. I like it!

Modeling a Persistent Command Structure

The team I work with came up with a great way of encapsulating business transactions into commands that require no data persistence, commands that require data persistence, and commands that require transactional data persistence. We were able to create a fairly simple structure by using generics to allow our return types to vary, and using class constructors to pass in parameters. In this fashion we were able to create a business command framework we can use in a predictable and testable manner in our business layer. The basic structure is defined below:

(Diagram: Command class structure)

To use the CommandBase class, an implementation simply overrides Execute and does its business; Execute is in fact marked as abstract, so an implementation must do this. A constructor is also created taking whatever data the command will need in order to do its work. To use the PersistenceCommandBase class, an implementation overrides the method ‘Execute(IDataContext dataContext)’.  PersistenceCommandBase itself implements the ‘Execute()’ method defined in CommandBase: it handles the creation and disposal of the data context in ‘Execute()’ and then calls the abstract ‘Execute(dataContext)’ method.
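
Since the class diagram doesn’t reproduce well here, a minimal sketch of the base pieces described above; the ‘ICommand<T>’ interface name is an assumption, while ‘IPersistenceCommand<T>’ and ‘IDataContext’ come from the code below:

//--Assumed shape of the base command types
public interface ICommand<T>
{
    T Execute();
}

public interface IPersistenceCommand<T> : ICommand<T>
{
    T Execute(IDataContext dataContext);
}

public abstract class CommandBase<T> : ICommand<T>
{
    //--Concrete commands take their inputs through their constructors
    //  and put their business logic in Execute
    public abstract T Execute();
}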

The logic needing persistence uses the passed-in dataContext.  If the ‘Execute()’ method is called on the implementation, the base class handles creating and cleaning up the dataContext, so the implementer can ignore all the messy details of creating and disposing it.  The ‘Execute(IDataContext dataContext)’ method can also be invoked directly, passing in an already-in-use dataContext; again, the concrete implementation does not have to worry about where the IDataContext came from.  Below is what the PersistenceCommandBase itself looks like:

public abstract class PersistenceCommandBase<T> : CommandBase<T>, IPersistenceCommand<T>
{
    public override T Execute()
    {
        //--The base class owns creating and disposing the data context
        using (IDataContext data = DataContextFactory.MakeDataContext())
        {
            return this.Execute(data);
        }
    }

    public abstract T Execute(IDataContext dataContext);
}

Essentially we are using a template-method approach to centralize the creation and release of persistence contexts in our application. We took the same approach and created a TransactionalCommandBase, which looks a lot like the PersistenceCommandBase except that it also handles the details of starting and then committing or rolling back a transaction. It looks like the below:

public abstract class TransactionalCommandBase<T> : PersistenceCommandBase<T>
{
    public override T Execute()
    {
        using (IDataContext data = DataContextFactory.MakeDataContext())
        {
            try
            {
                data.StartTransaction();
                T result = this.Execute(data);
                data.Commit();
                return result;
            }
            catch
            {
                data.Rollback();
                throw;
            }
        }
    }

    public override T Execute(IDataContext dataContext)
    {
        throw new NotImplementedException("The method or operation is not implemented.");
    }
}

The thing I love about this setup is that the creator of a business transaction only has to decide what level of data persistence is required and then create a class extending the appropriate base class. Developers can focus on creating and testing their business logic and let the base class handle the data context details.

Another bonus of our structure is that we can create aggregate commands, that is, commands that use other commands. Once we have a data context, we can simply pass it to the ‘Execute(dataContext)’ method of another command. In this manner we can create transactional commands that wrap non-transactional commands and still have them enlisted in our transaction. Below is an example:

/// <summary>
/// Non transactional save command
/// </summary>
public class SaveCustomer : PersistenceCommandBase<Customer>
{
    private Customer _customer = null;

    public SaveCustomer(Customer customer)
    {
        this._customer = customer;
    }

    public override Customer Execute(IDataContext dataContext)
    {
        dataContext.Save(this._customer);
        return this._customer;
    }
}

/// <summary>
/// Class using the non transactional command inside a transaction
/// </summary>
public class SaveManyCustomers : TransactionalCommandBase<List<Customer>>
{
    private List<Customer> _customers = null;

    public SaveManyCustomers(List<Customer> customers)
    {
        this._customers = customers;
    }

    public override List<Customer> Execute(IDataContext dataContext)
    {
        for (int i = 0; i < this._customers.Count; i++)
        {
            //--Reuse the existing data context so the inner command joins the transaction
            SaveCustomer saveCommand = new SaveCustomer(this._customers[i]);
            this._customers[i] = saveCommand.Execute(dataContext);
        }

        return this._customers;
    }
}
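
To round it out, hypothetical calling code only ever sees the parameterless ‘Execute()’; the transactional base class creates the data context and wraps the whole loop in a single transaction:

List<Customer> customers = GetCustomersToSave();   //--hypothetical helper supplying the data

SaveManyCustomers saveAll = new SaveManyCustomers(customers);
List<Customer> saved = saveAll.Execute();           //--context and transaction handled by the base classes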

We’ve run into some issues and have made some adjustments that I’ll address in subsequent posts. The first issue is testing. Since we’re passing what are essentially parameters to our commands in their constructors, it makes for an interesting testing situation. We’ve adopted a command factory approach so we can abstract the creation of the commands: only the factory creates commands, so we can test that the factory calls are made correctly.
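
A rough sketch of that factory idea (the ‘ICommandFactory’ interface below is an assumption for illustration, not our actual code):

//--Callers ask the factory for commands instead of new-ing them up directly,
//  so a test can substitute a mock factory and verify the correct command was
//  requested with the correct inputs, without executing the command itself.
public interface ICommandFactory
{
    SaveCustomer CreateSaveCustomer(Customer customer);
    SaveManyCustomers CreateSaveManyCustomers(List<Customer> customers);
}

//--In a test:
//  ICommandFactory factoryMock = MockRepository.GenerateMock<ICommandFactory>();
//  ...run the code under test...
//  factoryMock.AssertWasCalled(f => f.CreateSaveManyCustomers(Arg<List<Customer>>.Is.Anything));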

Another issue we’ve run into is how to handle rollbacks in transactions that are not triggered by an exception: what happens if we want our command simply to roll back? We’ve also had to decide what to do when a transactional command is called directly with a dataContext that is not in a transaction; it seems a transactional command should always run in a transaction, even if the dataContext it is passed is not in one. Interesting issues I look forward to addressing here soon.

More or Less Types

One debate that seems to arise in many of the projects I work on is at what level to create types.  For instance, if you have a Customer object, does the Customer object look like this?

    public class Customer
    {
        public string CompanyName
        { get; set; }

        public string FirstName
        { get; set; }

        public string LastName
        { get; set; }

        public string Address1
        { get; set; }

        public string Address2
        { get; set; }
        
        public string City
        { get; set; }

        public string State
        { get; set; }

        public string ZipCode
        { get; set; }

        public string BusinessPhone
        { get; set; }

        public string HomePhone
        { get; set; }

        public string CellPhone
        { get; set; }
   }

or like this?

   public class Customer
    {
        public string CompanyName
        { get; set; }

        public Name CustomerName
        { get; set; }
        
        public Address BusinessAddress
        { get; set; }
        
        public Phone BusinessPhone
        { get; set; }

        public Phone HomePhone
        { get; set; }

        public Phone CellPhone
        { get; set; }
   }

I am a fan of having more types rather than fewer. The second Customer implementation lets me think in terms of Name, Address and Phone objects and guides developers to use the same structure for these objects throughout the system. Without these smaller types you can have addresses with no Address2 field in some places but not others, or Names with middle initials on some classes but not others. This is all fine until these objects need to share data and their data doesn’t conform.

I suppose the downside is that you have more class files to maintain, which really isn’t much of a downside at all. To use the smaller objects you have to conform to their rules, which is only a downside if the system doesn’t have any consistent rules, and if that’s the case I’d question the system design. Some would argue you should never have any primitive types on your business objects, that everything should be its own class. This would nicely abstract and encapsulate, but is it overkill?

At some point a class has to have primitives. In our Customer class the Address object will be made up of string properties; should we create types for Address1, Address2, etc.? Some would argue you should, but at some point the data will be stored in a primitive. So the real question in my mind is where to stop turning everything into its own type. What I tend to advise is to have top-level business objects that rarely have primitives, but to allow their supporting objects to be made up of them. This forces system-wide structure on the supporting objects without descending into objectify-everything overkill, at least in my mind. I’d be open to using more types, but would be uncomfortable with a less type-driven approach.

Another plus of a more type-driven approach is that you can make use of operator overloads. This allows casting of one type to another, with the adaptation logic implemented in the casting operator. For instance, you can cast a string to a Phone type and have the validation logic live in the casting operator; consumers can then easily assign a string to a Phone variable.
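
As a rough sketch of what that might look like (the Phone shape below is hypothetical, not our actual class):

using System;
using System.Linq;

public class Phone
{
    public string Number { get; private set; }

    //--Explicit conversion from string; validation and normalization live in one place
    public static explicit operator Phone(string value)
    {
        string digits = new string(value.Where(char.IsDigit).ToArray());

        if (digits.Length != 10)
            throw new ArgumentException("A phone number must contain ten digits.");

        return new Phone { Number = digits };
    }
}

Consumers then just cast:

Phone homePhone = (Phone)"222-111-3333";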