More C# Partial Class Testing Strategies

I can’t take credit for this approach, and even if I could, I probably wouldn’t, because it makes me feel kind of icky.

Anyway, I recently heard about a legacy code testing strategy where you mark your class as “partial”, and in another file, you add whatever public properties/methods you need for your tests. You make the contents of the second (testable) file conditionally compiled (the classic #if DEBUG) so the encapsulation is still there for any release builds.

It’s kind of like endo-testing, but you’re extending the class “sideways” instead of “downwards”.

Basically, it’s breaking encapsulation in a controlled way, and for the most part, I think it’s a bad idea if you’re working with a new design. If, however, you’re trying to get some meaningful coverage for your legacy code (which wasn’t designed for testability), it can be a good stop-gap for dealing with the legacy code refactoring catch-22: you don’t want to make changes without tests, but you can’t write tests without making changes.
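A minimal sketch of the idea (all of the names here are hypothetical):

```csharp
// OrderProcessor.cs -- the legacy class, now marked partial
public partial class OrderProcessor
{
    private decimal _runningTotal;

    public void Process(decimal amount)
    {
        _runningTotal += amount;
    }
}

// OrderProcessor.Testable.cs -- test-only accessors, compiled out of release builds
#if DEBUG
public partial class OrderProcessor
{
    // Exposes private state so tests can make assertions against it.
    public decimal RunningTotalForTest
    {
        get { return _runningTotal; }
    }
}
#endif
```

A release build never even compiles RunningTotalForTest, so the encapsulation survives everywhere but your debug/test configuration.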

Any strategy, such as this one, which allows you to get the first layer of tests down before further refactoring, should be embraced as a good thing. If you find that you need to do this for any new/original classes, my guess is that your class is too big and needs to be decomposed further into more cohesive and testable classes.

Testability in Isolation: Not just for automated testing

One of the code qualities that I’ve been advocating is “Testability in Isolation”. I’ve expanded the name from the NetObjectives-blessed quality of just “Testability” because I find it more explicit and useful. Everyone can say “sure, my code is testable, I test it all the time”, and be done with it. If you ask, “Can you test your business logic in isolation from your other layers?” you get a more meaningful understanding of the quality of their code.

Usually, I discuss testability in the context of unit tests, using the strict definition of unit tests: automated, developer-authored, testing one thing in isolation without external dependencies. I’ve found recently that testability in isolation is a virtue when doing manual testing too.

I’ve been working on a Windows Forms application recently, and I found myself drilling down to the same part of the UI over and over while I tweaked one of the user controls. I felt the feedback loop growing, so I created a separate winforms project which I called “Playground” and made a way to get to that particular control I was testing with just one click.

In the playground UI, I use the same test doubles for things like persistence that I use in my unit tests (I’ve got a really nice in-memory-db fake to replace my file-based persistence layer). When I need to, I have the test harness pre-load enough fake data to work with, so when I click the “one button”, it’s set up to exercise what I want it to exercise. Tightens my feedback loop and speeds me up immensely.
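For illustration, the kind of fake I mean looks something like this (the interface and names are made up for this sketch; my real persistence layer is file-based):

```csharp
using System.Collections.Generic;

// Hypothetical persistence abstraction that the application codes against.
public interface IWidgetStore
{
    void Save(string id, string data);
    string Load(string id);
}

// In-memory fake: the same double serves the unit tests and the Playground
// harness, and the harness can pre-load it with fake data before showing
// the control under test.
public class InMemoryWidgetStore : IWidgetStore
{
    private readonly Dictionary<string, string> _rows =
        new Dictionary<string, string>();

    public void Save(string id, string data)
    {
        _rows[id] = data;
    }

    public string Load(string id)
    {
        string data;
        return _rows.TryGetValue(id, out data) ? data : null;
    }
}
```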

It’s weird to think that I haven’t done this explicitly before (although in some web apps, it’s easy enough to jump directly to the right page). I’m sure I’ll do it again, especially when working with testers: it gives them a way to test a single interaction/user interface component by itself, without feeling blocked because other parts of the app aren’t ready yet.

Program Launcher/Auto-Updater

I remember the first desktop app that I developed for a client. All of my coding experience until that point had been as a web developer. Sure, I had done QA and technical support for a shrink-wrapped/installed program before, but I left that organization to take a web developer job.

Even though I hadn’t yet heard of Agile or Scrum, I had grown accustomed to the notion of frequent incremental releases of value. The web made that easy to do. You didn’t have to worry about installers or versioning, just push the bits out to the server and you were good.

So, the first thing I did for the client app was make a program launcher: when you clicked the icon to start the app, you didn’t actually start the app itself, you started a little shell .exe that displayed a splash screen. It checked a web site for a newer version of the software. If there was one, it would pull it down, copy over the bits, and then start it.

When the currently installed app was the same version as the app on the server, it would just launch the app directly. This process made startup take only a few seconds longer, and it gave me a chance to display the really cool-looking splash screen I created.

It had to be a distinct .exe because I couldn’t over-write the one that was running, obviously.
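The core of that launcher logic, sketched from memory (the URLs and file names here are invented for illustration):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class Launcher
{
    // The update decision itself is trivial: any version mismatch means download.
    public static bool NeedsUpdate(string installedVersion, string serverVersion)
    {
        return !string.Equals(installedVersion.Trim(), serverVersion.Trim());
    }

    static void Main()
    {
        // ShowSplashScreen();  // the really cool-looking part, omitted here

        string installed = File.Exists("app.version")
            ? File.ReadAllText("app.version")
            : "0.0";

        using (var web = new WebClient())
        {
            string server = web.DownloadString("http://example.com/app.version");

            if (NeedsUpdate(installed, server))
            {
                // Safe to overwrite app.exe: the launcher is a separate process.
                web.DownloadFile("http://example.com/app.exe", "app.exe");
                File.WriteAllText("app.version", server.Trim());
            }
        }

        Process.Start("app.exe"); // hand off to the real application
    }
}
```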

It worked like a charm, of course, and I could easily make updates to the client app as well as the server apps.  I never had to worry about supporting multiple versions of the software in production. The app was small enough, and the users all had LAN access, so that even on the days where you did have to download the new version, it always took less than a minute.

I remember thinking, at the time, “This is how all software is going to be in the future. A tiny little launcher app which will keep the main app in sync.”

If nothing else, this demonstrates my ability to predict the future, as I haven’t seen anyone else use this approach. On the Microsoft technology side, with .NET xcopy deployment, web services, and xml-based (vs. registry) configuration API, this should be easier than ever.

I wonder why this idea never took off. Is it like coding with fixed-width fonts, where people are simply stuck doing things the way they’ve always done them?

Autogenerating C# classes? Add the “partial” keyword

I’m not a big fan of code generation, but I recently thought of a way to make it a little more manageable. Here’s the scenario: let’s say you’ve got some tool that generates a lot of C# code for you. Perhaps it’s a “clever” db-to-CRUD code translator; they exist out there. You can’t really change those classes, as your changes can be overwritten by whatever tool is generating the code.

Using those types hard-codes a database dependency into your domain classes, so you can’t really test in automation anymore.

There are two strategies for working around this problem.

1. The usual wrap+fake maneuver. You can encapsulate the auto-generated class into some abstraction (interface or abstract class) and have your domain objects couple to that wrapper.

2. Change the tool to add the “partial” keyword to the class (or change the output of the tool, realizing that you may have to change it again when it gets overwritten, but it’s what, seven characters?)

Now that this auto-generated type is partial, you can extend it without changing the file. It’s open-closed at the file level. The original example I had was to add just the following in a distinct file.

public partial class AutoGeneratedType : SomeInterface { }

public interface SomeInterface
{
}

Now you can use ReSharper’s “Pull Members Up” automated refactoring to build the abstraction, couple your domain objects to the interface, and make fake versions for testing in isolation. You are using ReSharper, aren’t you?
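A sketch of where you end up (the types here are hypothetical):

```csharp
// The interface built by pulling members up from the generated class.
public interface ICustomerGateway
{
    string GetCustomerName(int id);
}

// In its own file: the generated class picks up the interface without the
// generated file itself ever being touched.
// public partial class CustomerGateway : ICustomerGateway { }

// A hand-rolled fake, so domain-object tests never hit the database.
public class FakeCustomerGateway : ICustomerGateway
{
    public string GetCustomerName(int id)
    {
        return "Test Customer " + id;
    }
}
```

Your domain objects take an ICustomerGateway, and the tests hand them the fake.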

And, of course, just like any time that you break a concrete type dependency, you can do things like substitute a different version, add a proxy, etc.

Another decompression artifact example: Design Patterns

I just finished writing about decompression artifacts and while I still had Comic Life open, I thought I would come up with another example.

Design Patterns Decompression Artifact Example
This is largely in response to the post about design patterns on the Coding Horror blog, which seems to get design patterns wrong in the usual way.

More anti-region sentiments.

Around this time last year, while I was working with Kent Skinner at thePlatform and we were ripping out the #region sections wherever we found them in legacy code, we Googled for the phrase “Regions are Evil” and didn’t find any matches.

Now I’m delighted to say that in addition to my rant from a few months ago, I’ve recently found a few other anti-region arguments, including the specific “Regions are Evil” phrase we predicted.

Mind you, in the comments of one blog, someone calls the anti-region sentiment “a load of elitist horsecrap”. I think I can live with that, though. I’ve been called worse.

Links:

http://blog.jayfields.com/2005/05/do-youregion.html

http://www.oobaloo.co.uk/articles/2007/6/6/visual-studio-regions-are-evil

Happy Independence Day

Alan Shalloway, the NetObjectives chief and one of the world’s most prominent Lean Software Development evangelists, created a Yahoo! group around “lean-agile-scrum” and invited the NetObjectives/VelocityPartners people (me included) to participate. So I do, sometimes.

I’ve been haunted (haunted I tell you!) about a question that came up a while ago. I’m too lazy to look it up, so I’ll just paraphrase.

“I’m interested in putting up some big visible charts/information radiators, but we don’t have a suitable scrum room/whiteboard and we’re prohibited from putting things up on the cubicle walls. Any advice on a technological solution?”

I responded that he had a deeper problem with the core lean principle of “respect for people” if the office furniture is more important than the people. Most people ignored my comment and actually answered the poor guy’s question. Use a wiki, cardmeeting.com, whatever. Nice, polite, helpful comments, totally missing the point of this guy’s problem.

But I’ve been thinking about that, and I wondered what I was prohibited from doing at my current place of employment. Here’s what I came up with (in no particular order).

Harassment
Assault
Theft
Vandalism
Indecent Exposure

Notice anything in common on that list? Those are all things that are prohibited by law in our free society.

I talk about “values” a lot, and how you can look at intellectual frameworks such as Lean and Agile as pre-packaged sets of values. I don’t think that any of them are definitive, and different groups need to come up with the values that matter to them.

So now, on the eve of the anniversary of the founding of this great nation (and trust me, it’s hard to be very patriotic right now, with the Scooter Libby thing and all), I’m thinking of an important value for companies/collaborative teams: Liberty.

This is similar to Lean Software’s “Respect for people/Empower the team” value, but I think it’s more than that. People should only be prohibited from doing what harms other people or destroys property. Think John Stuart Mill’s “harm principle”.

And no, tacking up a chart on a cubicle wall doesn’t count as vandalism.

The Problem With ScrumWorks

For the last few projects at thePlatform (a truly great place in almost every respect) we’ve been using the basic version of ScrumWorks. For the projects before that, we used the sticky-note-on-whiteboard approach (not without its own set of problems, of course) and the customized Excel spreadsheet approach (again, with problems).

Usually, for sprint planning, one of the product owners/solution architects will fill in the uncommitted backlog ahead of time, and we just drag things around and refine estimates during the meeting.

Today, with our regular product owner gone on vacation, I put the majority of things in ScrumWorks, and it was painful.

The most recurring problem was that I had to estimate both stories and tasks before I could put them in the sprint. It totally breaks the creative flow of “oh, we’re going to need to do this, put that in there” when (and there’s no other word for it) rude dialog boxes pop up telling me that I can’t do what I just tried to do until I’ve done something else first.

On top of that, there’s this rigid two-level hierarchy in place. If I accidentally enter something that should be a task as a story or vice-versa, too bad, I’ve got to re-enter things, and re-estimate things (in different units of measurement, no less) to appease the software.

At that point, I’m working to serve the needs of the system, and not the other way around.

The biggest revelation I had today (which I shouldn’t complain about too much, as I only spent around 30 minutes on it) was that this isn’t the way I would model a software development project.

Besides the rude UI around entering stories/tasks, there’s no logic for collective ownership or load-leveling (as in, “Gee… it looks like those are all things that Kevin’s going to do and Mo has nothing”). I prefer to estimate both stories and tasks as small/medium/large rather than days/hours (sometimes with a translation table that says small = 8 hours on average, etc.)

I’ve been wanting to write my own software project management tool for years, the design of which changes as I learn more about how different teams approach software tools. I’m eager to try Mingle, from ThoughtWorks, as I trust that they know how to develop a less offensive user interface.

My hope is that the people developing/using these tools realize that they are essentially modeling tools. The “state of the world” in ScrumWorks is an incomplete model of the actual state of the project. Like any model, it can only approach reality, and after a particular point, you get diminishing returns with marginal verisimilitude. Also, just like I’m starting to see the Scrum process as too prescriptive (that is, “what” driven instead of “why” driven), ScrumWorks is way too prescriptive. I don’t want people telling me how to make software, and I sure as hell don’t want software to tell me how to make software.

Even though it means that I’ll have to give up a whiteboard, I might just start lobbying for the sticky notes again.