To setup or not to setup

Posted by Chris on October 20, 2006

Recently I have been trying the “BDD”:http://behaviour-driven.org/
(Behavior-Driven Development) approach to developing software. Normally, when I am doing TDD there are a couple of “house rules” that I like to follow. These have developed over time, often based on advice from a colleague or some other resource.

One such “rule” that has developed over time is that I tend not to use the SetUp and TearDown methods that the xUnit tools provide. These are used to execute some piece of setup and/or teardown code before/after every test method is executed. The reason to use them is of course that you might have some code that is needed to set up the system under test, and this setup code tends to be the same for all test methods in a fixture. To keep things “DRY”:http://en.wikipedia.org/wiki/Don%27t_repeat_yourself you will naturally want to put this code in a single place and have it executed along with each test. The xUnit tools have different ways of accomplishing this. NUnit, for instance, uses reflection to find methods marked with the SetUpAttribute in a class and executes them before each test method.

When I started out writing unit tests these methods seemed like a great idea. As soon as I learned about them all my test fixtures had them; in fact, the first thing I did when creating a new fixture was to add these methods from a “copy-paste template” (or using snippets or similar where available). However, after a while a feeling that something was wrong started to creep in. Discussing the matter with “others”:http://www.taylor.se/blog and doing some “online reading”:http://www.agileprogrammer.com/dotnetguy/articles/SetupTeardown.aspx led me to the conclusion that SetUp/TearDown was *evil incarnate*.

One of the best things about using TDD is that you almost never need to do any debugging. One reason for this is that when you modify your code and then run the tests, you instantly get feedback from a red test if anything is wrong. You read the name and location (fixture) of the test and take a look at it, and if you have done things right the test will tell you exactly what it does. With that you will hopefully be able to figure out more or less immediately what you did wrong, and fix it. However, if the test is not well written and does not quickly and easily tell you what it does, then you lose this feedback, or at least part of it.

So what has this got to do with SetUp/TearDown? Well, when the test called Foo in fixture Bar blows up on you and you take a look at it, you want to quickly see what it does. If you are using a SetUp method you will not get the full picture by simply looking at the test. You also need to look at the SetUp, and possibly the TearDown (and then TestFixtureSetUp/TestFixtureTearDown if it is really bad). And of course we must also know that our xUnit tool works this way, since the test method code shows no evidence of a setup method being called. So, instead of using these facilities that xUnit gives us, we should refactor the common setup code into a separate method and call that method explicitly from each test method. That way, what the test method does is clear just from looking at it.

Here is an example of a test fixture written this way (in C# using NUnit):

using System;
using NUnit.Framework;
using SystemUnderTest;

namespace SystemUnderTest.Tests
{
	[TestFixture]
	public class AccountWithBalance100_WithoutSetup {
		private Account account;

		private void Init() {
			account = new Account(100);
		}

		[Test]
		public void Depositing50LeavesBalanceOf150() {
			Init();
			account.Deposit(50);
			Assert.AreEqual(150, account.Balance);
		}

		[Test]
		public void Withdrawing50LeavesBalanceOf50() {
			Init();
			account.Withdraw(50);
			Assert.AreEqual(50, account.Balance);
		}

		[Test]
		public void Withdrawing100LeavesBalanceOf0() {
			Init();
			account.Withdraw(100);
			Assert.AreEqual(0, account.Balance);
		}

		[Test]
		[ExpectedException(typeof(ArgumentException))]
		public void Withdrawing101ThrowsException() {
			Init();
			account.Withdraw(101);
		}
	}
}

So, that is the end of that story then, right? Well, I started out this blog entry writing about BDD, not about SetUp/TearDown, so I guess I need to tie this together now. As I said, I have been trying BDD for a while now. Apart from calling tests specifications and fixtures contexts, there is not a whole lot of difference between TDD and BDD. At least on the surface, that is. The whole reason to change the terminology is to “force” people into doing TDD the right way. This means using the tests (specifications) to specify behavior, not to test for bugs. This means that you will think a bit differently, depending on how you are used to thinking with TDD. It might not be a huge step for everyone, but for me it has made me reflect a bit.

I did not notice it until after a while, but one interesting reflection I make now is that I do not follow some of my old house-rules when specifying in BDD. Take this example, in “Boo”:http://boo.codehaus.org/ using “Specter”:http://specter.sourceforge.net/:

import System
import Specter
import SystemUnderTest

context "An account with a balance of 100":
	account as Account

	setup:
		account = Account(100)

	specify "Depositing 50 should leave a balance of 150":
		account.Deposit(50)
		account.Balance.Must.Equal(150)

	specify "Withdrawing 50 should leave a balance of 50":
		account.Withdraw(50)
		account.Balance.Must.Equal(50)

	specify "Withdrawing 100 should leave a balance of 0":
		account.Withdraw(100)
		account.Balance.Must.Equal(0)

	specify "Withdrawing 101 should throw an exception":
		{ account.Withdraw(101) }.Must.Throw(typeof(ArgumentException))

This code example shows a typical BDD context and specifications the way I have been writing them. Note the setup part. Specter, the “xUnit tool” I am using here, sees this and executes the setup code before each specification is executed. I used it without even thinking about it. The way the specs are written, following the *Given* _an account with a balance of 100_, *when* _a withdrawal of 50 is made_ *then* _there should be 50 left_ style, it seems so natural to set up the context in this way. I suppose it is also largely due to the way specifications are so often written as a single line, or at least kept very short.

So, having made this reflection, I thought: if you write unit tests following the one-assert-per-test-method rule, keep the test methods short, and (maybe most important of all) create a test fixture per “situation” (or context…), then why should it not feel just as natural to use setup there? Here is an example of the same tests as above, but this time with the init code moved into a SetUp method.

using System;
using NUnit.Framework;
using SystemUnderTest;

namespace SystemUnderTest.Tests
{
	[TestFixture]
	public class AccountWithBalance100_WithSetup {
		private Account account;

		[SetUp]
		public void Init() {
			account = new Account(100);
		}

		[Test]
		public void Depositing50LeavesBalanceOf150() {
			account.Deposit(50);
			Assert.AreEqual(150, account.Balance);
		}

		[Test]
		public void Withdrawing50LeavesBalanceOf50() {
			account.Withdraw(50);
			Assert.AreEqual(50, account.Balance);
		}

		[Test]
		public void Withdrawing100LeavesBalanceOf0() {
			account.Withdraw(100);
			Assert.AreEqual(0, account.Balance);
		}

		[Test]
		[ExpectedException(typeof(ArgumentException))]
		public void Withdrawing101ThrowsException() {
			account.Withdraw(101);
		}
	}
}

I am not quite finished with my thinking about this, so I am not sure if I think this is better. But I do not think that one of these tests, when blowing up in the test runner, would give me less information than the ones in the example without the SetUp method. Since I know that the failing test is in the “AccountWithBalance100” fixture I can easily guess what the variable account holds. But I guess if the setup is more complex then it might not be as easy to name the fixture and/or understand the code.

Comments? Anyone else using BDD who finds themselves using setup differently from when doing TDD?

FOR XML EXPLICIT

Posted by Chris on October 09, 2006

A couple of weeks ago I was engaged as a trainer for the course “2779: Implementing a Microsoft SQL Server 2005 Database”:http://www.microsoft.com/learning/syllabi/en-us/2779afinal.mspx. The module that was by far the most difficult according to the attendees was the one on xml, since none of them had any practical experience with xml. One of the things that was especially difficult to understand was the @FOR XML EXPLICIT@ clause of the @SELECT@ statement.

The EXPLICIT mode of the FOR XML clause is to be used when you need to create XML of a specific format that cannot be produced with the AUTO or RAW modes. You can use EXPLICIT to generate xml of more or less any format you wish. It is also the most complex mode to use. The AUTO and RAW modes are normally used to transform the result of an existing query from a tabular resultset into an xml stream. The key word in that sentence is existing, by which I mean that whether you want the results as xml or not, you still use the same query. Just add the FOR XML clause and you’re good.
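For example (a minimal sketch, assuming the AdventureWorks sample database that is used later in this post), turning an ordinary query into xml is just a matter of appending the clause:

SELECT Employee.EmployeeID, Employee.LoginID
FROM HumanResources.Employee Employee
WHERE Employee.EmployeeID = 280
FOR XML AUTO

This returns a single <Employee> element with EmployeeID and LoginID as attributes; the query itself is unchanged.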

With the EXPLICIT mode it is not that easy. The transformation engine that creates the xml stream from the result of a query requires that the resultset is designed specifically for this task. The concept you must understand is what is called a universal table. This table will have all the information that is needed for the transformation engine to generate xml of the format you require. So what is a universal table then? I think it is easiest to start with an example:

Tag | Parent | Employee!1!Id
 1  | null   | 280

So what does the above mean? First thing to note is the column names. This metadata is used by the transformation engine to create the nodes in the resulting xml output. We must make sure that the query we run (the one we use the FOR XML EXPLICIT clause with) returns a resultset like the one above.

The first two columns must always exist with those names. They are used to describe the hierarchy of elements in the xml output. Every element that should exist in the output needs to be represented by a row in the universal table. Every unique “type” of element (i.e. an element name at a specific level in the hierarchy of the xml fragment) needs to be uniquely identified by an arbitrary tag number, which is specified in the first column. If the element is nested inside a parent element, then the tag number of the parent element goes in the Parent column.

The rest of the column names (in this case only one) specify the names of the nodes in the resulting xml fragment. In our example we have Employee!1!Id. This cryptic combination says that the element tagged with number 1 should be called Employee, and it should have an attribute called Id. Then, for each row in the table, the value in this column ends up in the Id attribute of an Employee element. So the very simple xml fragment we get from the universal table above would look like this:

<Employee Id="280" />
This is about as much as the course documentation says about FOR XML EXPLICIT and universal tables. Well, it contains a little more but it is quite difficult to see how to use it for more complex examples than what you can easily create with the other modes. So I decided to create an example that does a little more, but still should be quite simple to understand.

Let’s say that we have a requirement to produce an xml fragment like the one below.

<Employee Id="280" Login="..." SalesLastYear="...">
        <Customers>
                <Store CustomerId="..." Name="..." Account="..." />
                <Store CustomerId="..." Name="..." Account="..." />
                ...
        </Customers>
</Employee>

What we want is an xml fragment describing a specific employee (id=280 in this case) and her customers. The data we need for this can be returned by the following query (run in the AdventureWorks database):

SELECT Employee.EmployeeId Id
	, Employee.LoginID [Login]
	, SalesPerson.SalesLastYear
	, Store.CustomerID CustomerId
	, Store.Name
	, Customer.AccountNumber Account
FROM HumanResources.Employee Employee
INNER JOIN Sales.SalesPerson SalesPerson
	ON Employee.EmployeeId = SalesPerson.SalesPersonId
INNER JOIN Sales.Store Store
	ON SalesPerson.SalesPersonId = Store.SalesPersonId
INNER JOIN Sales.Customer Customer
	ON Store.CustomerId = Customer.CustomerId
WHERE Employee.EmployeeId = 280
ORDER BY EmployeeId, CustomerID

This result includes all the data that should go into the resulting xml fragment. However, it does not tell the transformation engine how to create it. If we simply added FOR XML AUTO or RAW we would not get the result we want at all. What we need to do is create a query that returns a universal table: the data above, but in a resultset that includes the metadata needed to create the xml. The following query will do the trick:

SELECT 1 AS Tag
	, NULL AS Parent
	, Employee.EmployeeId AS [Employee!1!Id]
	, Employee.LoginID AS [Employee!1!Login]
	, SalesPerson.SalesLastYear AS [Employee!1!SalesLastYear]
	, NULL AS [Customers!2]
	, NULL AS [Store!3!CustomerId]
	, NULL [Store!3!Name]
	, NULL [Store!3!Account]
FROM HumanResources.Employee Employee
INNER JOIN Sales.SalesPerson SalesPerson
	ON Employee.EmployeeId = SalesPerson.SalesPersonId
WHERE Employee.EmployeeId = 280
UNION ALL
SELECT 2
	, 1
	, NULL
	, NULL
	, NULL
	, NULL
	, NULL
	, NULL
	, NULL
UNION ALL
SELECT 3
	, 2
	, NULL
	, NULL
	, NULL
	, NULL
	, Store.CustomerID
	, Store.Name
	, Customer.AccountNumber
FROM Sales.Store Store
INNER JOIN Sales.Customer Customer
	ON Store.CustomerId = Customer.CustomerId
WHERE Store.SalesPersonID = 280
FOR XML EXPLICIT

As you can see, what we have is a query that is really built up of several queries (three in our case) unioned together. Each query must include all the columns that it needs itself as well as the ones necessary for the other queries. The queries assign arbitrary tag numbers to the three different kinds of elements in our expected output (remember we had Employee, Customers and Store) and also hierarchically arrange them under their direct parent element.

Each of the queries returns the data that is necessary for the elements at a specific depth in the xml tree. For the columns that belong to elements at other depths it simply returns null. Note that all the column names are aliased in the first query, since the union operator will use those names for the unioned resultset. Also note how the second query always returns exactly one row and is only used to add the Customers element (which is why its column name has only two parts and no attribute name) to serve as the parent of the Store elements.
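One caveat worth adding (hedged, since the query above happens to come out right as written): EXPLICIT mode processes the rowset from top to bottom and expects child rows to follow the row of their parent element, but UNION ALL on its own guarantees nothing about row order. To be safe you can order the universal table explicitly; in this simple case, where each tag level has a single parent, ordering by Tag is enough, so the tail of the query would become:

WHERE Store.SalesPersonID = 280
ORDER BY Tag
FOR XML EXPLICIT

With several employees in the result, each child row would also need to carry its parent’s key so that the ORDER BY could keep each family of rows together.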

Unit testing internals

Posted by Chris on October 04, 2006

One of the cool things about Dotway is that we have these “competence weekends” three or four times a year, where we go away to some hotel and geek out about something of a more or less technical nature. As “Andrés”:http://www.taylor.se/blog/ mentions, we recently spent a weekend “learning about TDD”:http://www.taylor.se/blog/2005/11/24/talking-about-agile/. One of the things that we discussed was how to test code that is not public. Normally you place your tests in a separate assembly and reference the production code. However, that means you need to make types and members public to be able to test them. So if you want to keep the implementation details hidden (with internal access level) you need to have the tests in the same assembly as the production code. That brings further challenges in setting up the build environment to create different builds, since you naturally do not want to include the tests in the released code.

Today I stumbled upon a .NET 2.0 feature that helps solve this. I was setting up an assembly with attributes and noted one I had not seen before:

@InternalsVisibleToAttribute(string assemblyName)@

That sounded like something interesting, so I decided to try it. I created a new solution and added a class library project called Code to it. I then added a second class library that I named Tests, which referenced Code (and NUnit.Framework). In the AssemblyInfo.cs for Code I added the following:

@[assembly: InternalsVisibleTo("Tests")]@

Now I was ready to start testing. In Code I created two classes, publicclass.cs and internalclass.cs as shown below.

// publicclass.cs
namespace Code {
  public class publicclass {
    internal bool internalmethod() {
      return true;
    }
    public bool publicmethod() {
      return true;
    }
  }
}

// internalclass.cs
namespace Code {
  internal class internalclass {
    public bool publicmethod() {
      return true;
    }

    internal bool internalmethod() {
      return true;
    }
  }
}

In the Tests project I then added InternalsVisibleToTests.cs, shown below:

// InternalsVisibleToTests.cs
using NUnit.Framework;
namespace Tests {
  [TestFixture]
  public class InternalsVisibleToTests {
    [Test]
    public void AccessPublicClassMembers() {
      Code.publicclass foo = new Code.publicclass();
      bool condition1 = foo.publicmethod();
      bool condition2 = foo.internalmethod();

      Assert.IsTrue(condition1);
      Assert.IsTrue(condition2);
    }

    [Test]
    public void AccessInternalClassMembers() {
      Code.internalclass foo = new Code.internalclass();
      bool condition1 = foo.publicmethod();
      bool condition2 = foo.internalmethod();

      Assert.IsTrue(condition1);
      Assert.IsTrue(condition2);
    }
  }
}

As you can see these tests refer to both internal types and members in the Code assembly. The solution compiles and the tests are green!

Even though my libraries often do not need this (most of the stuff is public anyway), it is a very useful technique to have access to. It should of course be noted that this means anyone can create an assembly named Tests and access the internals in my Code assembly, but there are ways around that.
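One such way, sketched here as a hypothetical example: if the Code assembly is signed with a strong name, the friend assembly must be identified by its full public key, so a rogue assembly that merely happens to be named Tests no longer qualifies:

@[assembly: InternalsVisibleTo("Tests, PublicKey=0024000004800000...")]@

(The key is shortened here as a placeholder; the real attribute takes the complete public key of the signed Tests assembly, which @sn.exe -Tp@ will show you.)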

Coming user group meetings

Posted by Chris on October 04, 2006

This Thursday I will be in Stockholm for the “Swedish SQL Server User Group’s (SQLUG)”:http://www.sqlug.se/ final meeting of 2005. The two presentations will be given by none other than Dr. Michael Rys of Microsoft (and a member of the ISO SQL standard committee as well as the XQuery committee at the W3C)! I saw him at -PDC- PASS and know he is a good speaker, so this will be a great meeting.

Then on Monday 12/12 the local .NET user group that I co-founded, the “Skånsk .NET User Group (SNUG)”:http://www.snug.se/, will have its last meeting of 2005. We have some perhaps not as famous speakers, but still very interesting presentations to offer: Marcus Widerberg from Dotway will talk about Developing with Generics in .NET 2.0, and then Kim Gräsman from TAC (and C++ MVP) will talk about Continuous Integration and CruiseControl.NET. If you are in the Malmö/Lund/Helsingborg/Köbenhavn area and interested, then “take a look at the forums”:http://cs.snug.se/forums/86/ShowPost.aspx#86.

To aggregate or not to aggregera

Posted by Chris on October 04, 2006

I do not think I have yet posted a message that is only meant for Swedish-speaking readers. This post is probably mostly interesting to them, but I will keep it in English anyway (except where not possible, as will be obvious from the following discussion).

A couple of months ago (well, about half a year ago actually; I don’t know why I haven’t thought about this again until now) I was preparing a presentation on SQL Server 2005 that I was to give at a breakfast seminar at Dotway. Since I was going to speak Swedish I wanted to try to include as little Swenglish as possible (otherwise very common in the Swedish IT community, though usually called ‘Svengelska’, which would be the Swedish ‘translation’ of the ‘English’ word Swenglish). So I was looking to translate as many of the terms as I could. But one term that completely stumped me was _aggregate functions_.

*What do you call Aggregate Functions in Swedish?*

Or more specifically, I was going to describe user-defined aggregate functions in SQL Server 2005, and wanted to avoid Swenglish. To make sure we have the same English definition here, what I mean by an aggregate function is a function that calculates a single value based on a set of values as input. I guess more or less as it is defined in “Princeton Wordnet”:http://wordnet.princeton.edu/perl/webwn?s=aggregate, but with a little more database context to it.
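To put that in SQL terms (a minimal example, assuming the AdventureWorks sample database): an aggregate function like AVG collapses a whole set of rows into one value,

SELECT AVG(ListPrice)
FROM Production.Product

and a user-defined aggregate in SQL Server 2005 lets you write your own function that is used in exactly the same way.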

The first one that came to mind was the probably very Swenglish ‘*Aggregatfunktioner*’. To someone familiar with SQL Server it sounds very correct, but I guess that is precisely because they (and I) recognise the English word that is actually in there. And the only definition of the word ‘aggregat’ I have found in Swedish does not provide a lot of help here:

*aggregat* [agreg'a:t] aggregatet aggregat aggregaten subst.
grupp av sammanbyggda maskiner (“a group of machines built together”)

A typical use of this is a word like ‘värmeaggregat’ (something like a heater). And that is nowhere near what I am looking for..

Next try was ‘*Aggregeringsfunktioner*’ and immediately after that came ‘*Aggregerande funktioner*’. These feel more Swedish, and the first one is actually used in the “IT word dictionary of PC World”:http://pcforalla.idg.se/tjanster/dataordboken/ (look up ‘aggregera’), for whatever that is worth as a source.. But they still do not sound quite correct (and fully Swedish), and they definitely do not feel right to say. So I actually went ahead and used the Swenglish ‘*aggregatfunktioner*’ in my presentation, which I think worked very well; everyone understood what I meant (in typically Swenglish fashion).

Today, when I somehow came to think about this again, I contacted “Jesper Holmberg”:http://blogs.msdn.com/jesperh/ at Microsoft and asked him what he thought about it. Jesper works with the Swedish localization of Windows and other Microsoft products (and possibly more, sorry if I missed something) and has a very interesting blog as well. He was really quick to answer, and his suggestion was ‘*mängdfunktioner*’, apparently the recommended translation at Microsoft (remember that other products, such as Excel, have aggregates as well). This is a great translation that really conveys the meaning the way I wanted, but unfortunately it feels like no one ever uses that word in Swedish. So I fear half the audience (if I were to give the presentation again) would not realize what I was talking about, at least not immediately. But I do think I will actually use ‘*mängdfunktioner*’ next time, unless I get a better suggestion.

So, anyone with other ideas, please post a comment…

SQLUG – Michael Rys

Posted by Chris on October 04, 2006

So I was at the “Swedish SQL Server User Group”:http://www.sqlug.se/ meeting in Stockholm yesterday. It’s sad that there is never anyone but me from Skåne (the part of Sweden where I live) who comes there, but of course it is an hour’s flight and the meeting ends at around 21.30, so it does get pretty late. It was great to chat with “Tobias Thernström”:http://www.rbam.se/?page=pages/tobias.htm and “Tibor Karaszi”:http://www.solidqualitylearning.com/blogs/Tibor/ (the founders of SQLUG) as well as “André Henriksson”:http://blogs.msdn.com/ahenrik/ and Maria Johansson from Microsoft Sweden.

As I mentioned in the “preview post”:http://www.hedgate.net/blog/2005/12/06/coming-user-group-meetings/, both presentations at the meeting were given by Dr. Michael Rys from Microsoft, a member of the ISO SQL committee and the W3C XQuery committee. Michael knows a lot about XML and his presentation on XQuery was very interesting. Personally, I do not see myself storing XML in SQL Server even with the new support in SQL Server 2005. Michael mentioned scenarios where you want to store “semi-structured data” as one example where it would be a good idea to store it in an XML datatype column. But to me the term “semi-structured data” has as little meaning as “denormalization”. It does not have a specific meaning, and the general explanation “data that is not relational” is no good, since in my view all data is best managed in a relational management system. Using XML and XQuery in stored procedures is a different case, where I have many thoughts with lots of further sidetracks, but that part of my brain is too incoherent right now to write anything about it. Look for more on this in the future.

The other presentation Michael gave was a general overview of SQL Server 2005, actually the same presentation that he gave at the recent SQL/VS2005 launch events. So there was not a lot of interesting news for me there, but again it was interesting to listen to Michael, since he gave his personal view on the different items. I asked him about the possibility of row-value constructors finally appearing in the next version of SQL Server. He agreed that it is definitely one of the important parts of the SQL standard missing from SQL Server and a top candidate for being implemented, but he also recommended letting Microsoft know about it by sending an email to “sqlwish@microsoft.com”:mailto:sqlwish@microsoft.com. I guess another request for row-value constructors won’t hurt.. :)
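For readers unfamiliar with the term: a row-value constructor lets you treat several values as one row value, as in this bit of ISO SQL (purely illustrative, with made-up table and column names; SQL Server 2005 does not accept it):

SELECT *
FROM Orders
WHERE (CustomerId, OrderDate) = (42, '2005-12-01')

Today you have to expand the comparison yourself into CustomerId = 42 AND OrderDate = '2005-12-01'.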

And one last thing, regarding the next version of SQL Server. Although Bill Gates has already mentioned it in an interview and I have seen it in a couple of blogs (Microsoft and non-Microsoft), this was the first time I heard a Microsoft employee use the codename “Katmai” for the next version of SQL Server in speech. I wonder if all of the attendees understood what he meant when he said “will be fixed in Katmai”. :)

All in all it was a great meeting and, as always, well worth the time and effort to go there. Now I am looking forward to the local SNUG meeting this Monday.

Driving a car or riding the bus

Posted by Chris on October 04, 2006

Here is an interesting question. You are at point A and want to get to point B. The problem is that you do not know exactly where point B is. To help you find it you have a device that always shows you the current direction to point B. You now have two alternatives for getting to point B. You can take your car and drive there, letting the device show you the direction. Naturally you will not be able to just fix the steering wheel in the correct position and take a nap, since there will be obstacles in the way, but you will be able to quickly adapt to any detours encountered. The second alternative is to get on a bus that is headed more or less in the correct direction. When it starts to deviate too much you can get off and change to another bus, and if you keep doing that you will eventually get to point B (or close enough to walk).

Disregarding any environmental and economical factors, which alternative would you choose? Most of us, I think, would choose the car. Now compare this to software design. Which one of the two alternatives best resembles the way most people design software? Unfortunately the bus is a lot more common than the car. Requirements are ’set in stone’ initially even though they are not fully known (and they can’t be of course). The developers then try and implement some of the requirements and when they are done they leave it for QA (and/or the customer) to test and give them feedback on the direction. Sure, there are lots of processes and ideas on how to refine the bus trip to make it as smooth as possible, but the fact is you are on that bus and you are not driving. You can only get feedback (or at least react to it) when the bus stops, and as soon as you get on the next bus you are once again working in unknown land until the next stop.

The way we take the car instead of the bus in software design is by increasing the frequency of feedback. By testing first, continuously integrating, evolving the design and ‘requirements’, and always communicating with the customer (in fact putting the customer in the passenger seat of our car), we get feedback as often as possible, which lets us adapt to the current direction we need to be moving in.

Dynamic Management Objects in SQL Server 2005

Posted by Chris on October 04, 2006

Performance tuning and troubleshooting in SQL Server has always been something of a black art. To be effective at it you need to know how to use a large set of tools, including Profiler, Perfmon, DBCC commands and stored procedures. Sometimes it can seem almost random which tool you should use for a specific issue. They will often affect performance themselves, so you might not always be able to use them. [...] SQL Server 2000 can be seen as a black box that can be quite difficult to penetrate. SQL Server 2005 changes all this by introducing the new Dynamic Management Objects.

I have written a short “article on Dynamic Management Objects (DMVs and DMFs) in SQL Server 2005”:/writings/dynamic-management-objects for the “Quest Pipelines newsletter”:http://www.quest-pipelines.com/newsletter-v6/newsletter_1205.htm.
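As a small taste of what the article covers (a minimal sketch, assuming SQL Server 2005): the currently executing requests, including what they are waiting on, are one dynamic management view away:

SELECT session_id, status, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE session_id > 50

No Profiler trace or DBCC incantation needed; the filter on session_id simply skips most of the system sessions.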

Agile advice from P&P

Posted by Chris on October 04, 2006

I have been listening to an excellent webcast called “Lessons Learned from the Warroom”:http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032286000&EventCategory=5&culture=en-US&CountryCode=US. It features “Peter Provost”:http://www.peterprovost.org/, “Brian Button”:http://www.agileprogrammer.com/oneagilecoder, “Brad Wilson”:http://www.agileprogrammer.com/dotnetguy and “Darrell Snow”:http://blogs.msdn.com/darrellsnow (all working in the Patterns & Practices group at Microsoft) discussing what they have learned from working with agile methodologies. Among the projects they have worked on following agile principles are the “Enterprise Library”:http://www.gotdotnet.com/codegallery/codegallery.aspx?id=295a464a-6072-4e25-94e2-91be63527327 and the “Composite UI Application Block (CAB)”:http://www.gotdotnet.com/codegallery/codegallery.aspx?id=22f72167-af95-44ce-a6ca-f2eafbf2653c, so they have acquired quite a lot of experience.

If you have not listened to the webcast I definitely recommend you do so; it is filled with great advice. Below are some notes from it, posted here mostly so that I will remember them myself. But I have also added some comments where appropriate.

* Do not forget to do reflection when you are planning. As Peter Provost comments, this is simply following the process since the process tells us to reflect on the past iteration while we are planning the current iteration. But somehow this part is often forgotten. To add to that, you should of course not only reflect while planning the current iteration, reflection should be done as often as possible.
* Manager approval and sponsorship is very important to agile projects. For instance, a common problem is that people are used to being judged by their individual achievements. But agile advocates that the whole team, and in particular the results it produces, is what is important. Therefore it is a manager’s job to make sure that the importance of setting aside your ego and instead working for the team is communicated to everyone.
* One specific area is pair programming. Developers (who have not tried it) can be reluctant to do it. A similar problem is pairs that never change, i.e. some or all of the pairs are always made up of the same developers. Peter suggested a pairing chart to help with these issues. List all the names of the developers in a matrix and note down when two developers pair. The goal is to have all the columns in the chart filled. Someone added that a pairing session should not be more than a couple of hours long, spanning a “single coherent logical thought”. After that you should move on to another partner and problem.
* A particularly interesting part (to me) was remote pairing. The P&P team includes a number of consultants who are not present in the warroom at Microsoft at all times; some of the work was done by people in South America. All the time, though, people were pair programming. They tried a number of different setups, including Skype, VNC and LiveMeeting, similar to what “Andrés and I”:/articles/2005/11/22/distributed-pair-programming/ tried. Although this seems to have worked very well for them, they were also quick to note that when they were actually pairing at the same physical desk they felt much more productive and focused.
* The final thing I want to mention was the discussion on how to get people (developers, management, customers etc.) interested in working with agile principles. This was interesting since Andrés recently did a presentation with similar ideas called Guerilla Agile at Dotway’s latest competence weekend (actually I ended up presenting his material since he got sick the night before). The most obvious way is to simply show them how well it works. For instance, write your code using TDD even if you are the only one; it should soon be obvious that your code is better. :) When someone presents a design as a diagram, ask if you could state it as a test to document it. Pair program whenever you can, and soon pairing will be the default instead of the other way round. And my favorite quote from Peter Provost: “If you do not have a warroom, steal one”. Book a conference room for weeks, days, hours or whatever it takes, and share the reservations between the team members; eventually management will understand that you really do need a dedicated room.

This is just a small set of all the great advice from the webcast. I hope you are not satisfied with just my notes and will go listen to the whole thing ASAP.

Inverted wisdom

Posted by Chris on October 04, 2006

The agile community is full of words of wisdom. These often say a lot more about the process than an article (or even a book), at least if you understand the meaning behind them. The funny thing is that many of them are a kind of inverted wisdom, negations if you wish. Here are some examples (in my own wording):

bq. If your user story does not fit on a card, get a smaller card![1]

The reasoning for this is of course that you are not really writing user stories; you are writing something that is bigger (too big). The story should not include a lot of detail, since that part is for the conversation [about the story]. By making sure that the stories are not too large, you also make sure that you are not spending too much time gathering requirements and thinking too much about details.

bq. If a project is finished on time with all the specified requirements implemented, then chances are that it will not be considered a successful project after some time![2]

If a project finishes all the specified requirements on time, then chances are you did not really ‘find’ all the requirements that the customer actually wants. Project management is all about deciding which requirements are to be implemented now, which later, and which left behind. You should always have more stories than you have time for.

bq. If a project/team/company is completely dependent on one programmer, get rid of him![3]

The longer you wait, the more dependent you become and the more of a bottleneck, or constraint, this programmer becomes. Note to ‘trigger-happy’ management: you do not really need to fire the programmer, there are “other ways to remove the constraint”:http://www.nayima.be/about/TheoryOfConstraints.html.

If you have similar wisdoms then please post them in the comments.

References:
fn1. This one I picked up in Mike Cohn’s book “User Stories Applied: For Agile Software Development”:http://www.amazon.com/gp/product/0321205685/, but he further attributes it to Tom Poppendieck.

fn2. I am not quite sure where I first heard this, but I know Ron Jeffries has written about “similar issues”:http://www.xprogramming.com/xpmag/jatmakingthedate.htm.

fn3. I am not sure whether I have read this one stated this way or just made it up myself, but in any case it is such a common idea that no single source can be credited.