Category Archives: Agile

The solution here is not process dogma

The other day I was discussing a process/dev workflow problem with one of my friends. I managed to get a basic understanding of what the problem was (team disagreement about the importance and sequence of sprint reviews, retrospectives, and planning) but we were both too pressed for time to brainstorm for a solution. He had a meeting to attend, and I had a demo to give.

The only advice I gave was, “The solution here is not process dogma. You can’t just fall back on the Scrum rulebook and say ‘we’re supposed to do it this way’, you have to get to the value of why you should do the previous review/retrospective before the next planning.”

The value, of course, is that you should be taking what you learned from the last sprint and using it to inform your actions in the next sprint. It’s Scrum’s larger-scale feedback loop (the small feedback loop is the daily meeting). Inside or outside of Scrum, feedback loops are important for making software well.

When I was thinking about it later, I realized that process dogma is never the solution. Software development is intellectual work, and to make a persuasive case with skeptical people, you have to do better than “because the book says so.”

Code Smells, Correlations, and Poisoned Coffee

Winston Churchill had a much faster and sharper wit than I have. Consider this* famous exchange:

Lady Astor: “If you were my husband, I would put poison in your coffee.”

Churchill: “If I were your husband, I would drink it.”

I, on the other hand, always think of the correct thing to say in an argument a few days later. It’s not just for ex-post-facto arguments, either. I’ve been doing some technical trainings lately, and I try to have a more conversational style than just reading lots of bullet points from slides. I figure that if I don’t understand a topic well enough to speak about it extemporaneously, then I have no business talking about it.

In the last presentation I gave, I talked about the practice of refactoring and the concept of code smells. I gave examples of a few high-value smells and then gave this little wishy-washy disclaimer:

“When you encounter one of these code smells, it doesn’t mean that you have to change the code, it just means that you should look at it closely, as you may have problems.”

It’s not so bad; it’s pretty much what everyone says about refactoring. What I should have said was this:

Code smells correlate with quality problems. Heavily commented code blocks aren’t necessarily bad, but lots of comments correlate very strongly with readability problems. You don’t fix it by deleting the comments. You fix it by making your code readable enough to stand without the “how does it work” comments.
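As a quick sketch of that point (in Java rather than the C# I usually work in, and with invented names): the fix isn’t deleting the comment, it’s extracting and naming the logic so the comment has nothing left to explain.

```java
// Hypothetical example. Before the refactoring, this was one expression
// plus a comment explaining it:
//   // total = price * qty, plus a 15% surcharge for rush orders
//   double total() { return price * quantity * (rush ? 1.15 : 1.0); }
class Order {
    private final double price;
    private final int quantity;
    private final boolean rush;

    Order(double price, int quantity, boolean rush) {
        this.price = price;
        this.quantity = quantity;
        this.rush = rush;
    }

    // After: the method names carry what the comment used to.
    double total() {
        return subtotal() * rushSurchargeFactor();
    }

    private double subtotal() {
        return price * quantity;
    }

    private double rushSurchargeFactor() {
        return rush ? 1.15 : 1.0;
    }
}
```

Same behavior, but now the “how does it work” comment is redundant and can go.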

Long methods aren’t necessarily bad, but long methods correlate very strongly with cohesion problems. It’s possible, and sometimes required, to have a long method that’s perfectly cohesive, but it’s outside the norm.

And, of course, you don’t fix the problem based on the correlation; you fix the actual underlying problem. Breaking a long ProcessThings() method into three arbitrary methods called StepOne(), StepTwo(), and StepThree() doesn’t actually make the code any better.
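Sticking with the hypothetical ProcessThings() example (sketched in Java, all names invented): the improvement comes from extracting methods that each own one cohesive responsibility, not from drawing arbitrary step boundaries.

```java
import java.util.ArrayList;
import java.util.List;

class ThingProcessor {
    // An arbitrary decomposition into stepOne()/stepTwo() would just
    // hide the same tangle behind meaningless names. Instead, each
    // extracted method owns one responsibility you can name.
    List<String> processThings(List<String> rawThings) {
        List<String> valid = discardBlanks(rawThings);
        return normalize(valid);
    }

    // Responsibility 1: filtering out unusable input.
    private List<String> discardBlanks(List<String> things) {
        List<String> result = new ArrayList<>();
        for (String t : things) {
            if (t != null && !t.trim().isEmpty()) {
                result.add(t);
            }
        }
        return result;
    }

    // Responsibility 2: putting the survivors into canonical form.
    private List<String> normalize(List<String> things) {
        List<String> result = new ArrayList<>();
        for (String t : things) {
            result.add(t.trim().toLowerCase());
        }
        return result;
    }
}
```

The long method gets shorter as a side effect; the real win is that each piece now has a name that tells you what it’s for.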

You see, that’s not wishy-washy at all, and it appeals to the distinction between correlation and causation. It’s not as funny as poisoned coffee, but it has some concreteness to it.

*After looking up the Astor/Churchill exchange, I found that it’s very possibly an apocryphal story. Oh well, it’s still funny.

Testing Abstract Classes In Isolation

In my post about guilt as a code smell, a comrade pointed out that it’s perfectly possible to test abstract classes in isolation: you just make a concrete derived class as part of the test (thanks, Craig!). Having a distinct concrete subtype for testing is something I’m already doing a lot with endo-testing, so it’s not even totally without precedent.

It does still bother me, though, and I’m not 100% sure why. Some thoughts:

Let’s say your abstract class is using the Template Method pattern, where it has a public method which just delegates “downwards” to abstract methods with different overridden implementations. This is a perfectly good use of an abstract class, yet it seems kind of pointless to test it in isolation, as you’re going to be testing each concrete implementation, and those tests will exercise the base class anyway.
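A minimal Template Method sketch (in Java rather than C#, with invented names) shows both points at once: the abstract class delegates downward, and the only way any test can touch it is through some concrete subclass, which is exactly what each real implementation already provides.

```java
// Template Method: the base class owns the algorithm's skeleton...
abstract class PriceCalculator {
    double withTax(double net) {
        return net * (1.0 + taxRate());
    }

    // ...and delegates this hole downward for subclasses to fill.
    protected abstract double taxRate();
}

// A throwaway concrete subclass, declared as part of the test, is all
// it takes to make the abstract class instantiable and testable.
class FixedRateCalculator extends PriceCalculator {
    @Override
    protected double taxRate() {
        return 0.10;
    }
}
```

Any test of FixedRateCalculator exercises withTax() in the base class for free, which is why the dedicated test subclass can feel redundant here.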

The scenario that I had in the other post involved a different kind of abstract class, with no public methods, just protected ones. I was simply achieving zero redundancy by moving common logic upwards in the tree. Testing here becomes trickier, as there’s no publicly exposed design surface. Should I just make one up? The right solution, for me, at that time, was not to test the abstract class, but to move that logic into a distinct service class so I could work with it more directly. It’s textbook “favor composition over inheritance” design patterns stuff.

In any case, I stand corrected. It’s absolutely possible to test an abstract class in isolation. It is impossible, however, to test a concrete derived class in isolation without also exercising its abstract base class, which could make testing the abstract class in isolation kind of pointless.

Narcissism

I was reading the post on Narcissism over at the (always excellent) Coding Horror Blog and I was reminded of something that happened to me.

Many years ago, I was disappointed by my job. I was working at a small consulting company which had been mostly a software development outfit and was transitioning to being more of a general marketing consultancy. I didn’t mind that as much; I have a greater love (or tolerance, depending on your perspective) for marketing than the average developer. Actually, I was driven out by a top-down mandate to “improve our process,” which manifested as trying to adopt development methodologies that had been state-of-the-art thirty years earlier. It felt as if they were saying to me, “All of those things you did to make the last project you ran a big success (tight feedback loops, close developer-to-customer collaboration, design for change, light up-front documentation)… well, don’t do any of them again.”

So I quit.

In retrospect, I think it would have been possible to improve the situation, but I didn’t yet have a solid handle on why I was doing the things I was doing and how to help a software development team be more effective. I was still essentially “in the closet,” embarrassed that I didn’t want to write 200-page functional and technical spec documents before doing any actual coding. This was before the Agile/Scrum concepts were as mainstream as they are now.

When I was interviewing for another job, I found myself absolutely enthralled by one of the interviewers. He seemed like the single most brilliant developer I had ever met. I was so eager to work with this guy, that I let myself overlook a bunch of warning signs about the organization (legacy code base, previous-generation languages/tools, uninteresting problem domain, deplorable office space, disrespectful management, etc.). I wound up taking the job.

Many months later, as I was trying to figure out how I came to be in the horrible situation I was in, I came to the shameful realization that I thought this guy was brilliant because he reminded me of myself. It was narcissism, plain and simple. Ever since then, I’ve been careful to question why I think someone is so amazingly smart.

The same phenomenon came up again last summer, when I did a “Pragmatic TDD” seminar presentation for a handful of development companies. After one presentation, a guy came up to me and said, “Your presentation was great. Just brilliant. This is exactly what I’ve been advocating we do forever.”

Of course he thought I was brilliant; I was just like him.

Now, I’m trying to do a better job of being honest with myself, challenging myself, and listening intently to those I immediately disagree with.

Guilt as a Code Smell

One of the best things about working for Velocity Partners is that I get a chance to do presentations/brown bag seminars for their other clients and prospective clients. I like to think that as a hands-on developer who actually works with this stuff every day, I have a different kind of credibility from the full-time trainers/presenters/coaches/pundits. Note: I don’t want to disparage the full-time trainers that I know and respect; their credibility comes from the fact that they were hands-on coders for a long time, and they have more time to do the reading/research/writing that someone working on deadlines may or may not have.

I’m currently putting together a presentation on Refactoring, and how that relates to the other agile tools and techniques (where “agile” simply means “modern development practices that work”). While looking for examples, I’ve been re-reading Martin Fowler’s great Refactoring book.  One thing that struck me was the focus on “getting the inheritance hierarchy right” which, after living in the design patterns (favor composition over inheritance) realm for the last few years, felt kind of odd to me.

Meanwhile, in my “day job” I’ve been working on a well encapsulated, generics-based .NET client for a family of REST-y XML services.  After making one major component, I found that I had to make another major component that does much the same thing. So, I created a new abstract base class and made both my existing component and the new component concrete derived classes of the base class. As I needed functionality for the new class, I generalized it and moved it up to the abstract class (using the protected modifier, of course) so I could use it in the other derived class.

It was working pretty well. I had very little code duplication, and ReSharper is particularly good at the “Pull Members Up” refactoring, but I was starting to feel a little guilt about not doing things in a “pattern-oriented way.” Sure, you could never use the two concrete components interchangeably, so there was a violation of LSP, but I could be cool with that. The abstract class didn’t have any public methods on it, so there was no danger of someone trying to couple to its publicly exposed interface.

Afterwards, I figured out where my guilt was coming from. It’s a coupling/testability problem. Because the concrete types are tightly coupled to their base types, I couldn’t ever substitute a different type to handle that functionality. This is particularly important because the base type was all about making external (HTTP) service calls, so it was impossible for me to test any of the types in isolation.

The solution: it’s pretty obvious, but I just moved the functionality of the abstract base class into a service class with an interface. Now my two components (formerly derived classes) just have an instance typed to that interface. I can test all three parts in isolation (Q: How do you test an abstract class in isolation? A: You can’t) and I can re-use the same functionality across additional components in the project, further reducing duplication.
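Here’s a rough sketch of the shape of that refactoring (in Java rather than the actual C#, and with invented names, since I can’t share the client code): the shared logic moves behind an interface, the former derived classes hold a reference to it, and a hand-rolled fake stands in for the HTTP layer during tests.

```java
// The service interface that replaced the abstract base class.
interface XmlServiceClient {
    String get(String resourcePath);
}

// The production implementation would make the real HTTP call;
// it's elided here:
//   class HttpXmlServiceClient implements XmlServiceClient { ... }

// A former "derived class", now composed with the service instead.
class CustomerComponent {
    private final XmlServiceClient client;

    CustomerComponent(XmlServiceClient client) {
        this.client = client;
    }

    boolean customerExists(String id) {
        return client.get("/customers/" + id).contains("<customer");
    }
}

// In a test, a fake stands in for the HTTP layer, so the component
// can be exercised in isolation, with no network in sight.
class FakeXmlServiceClient implements XmlServiceClient {
    public String get(String resourcePath) {
        return resourcePath.contains("42")
                ? "<customer id=\"42\"/>"
                : "<empty/>";
    }
}
```

The same structure lets any number of additional components share the service without inheriting from anything.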

So, what I’m saying is: take your guilt seriously. If you have a bad gut feel about a design, it might very well be bad.  It’s just like any of the code smells in the original Refactoring book. It doesn’t necessarily mean that there’s a problem, but it’s worth looking at.

But, at the same time, I’m not going to beat myself up over my interim design. It was a good, easy stepping stone to a better design. This is the sort of thing that Scott Bain’s new book, Emergent Design: The Evolutionary Nature of Professional Software Development, is about. I’ve only skimmed it so far (I’m working on a new presentation, after all), but I know Scott, and I know his approach to the subject. From what I’ve read so far, it seems to be the right text for the professional programmer who wants to move beyond “just getting it working” to the level of “getting it working well.” I wish that I could have read it years ago.

More C# Partial Class Testing Strategies

I can’t take credit for this approach, and even if I could, I probably wouldn’t, because it makes me feel kind of icky.

Anyway, I recently heard about a legacy code testing strategy where you mark your class as “partial”, and in another file, you add whatever public properties/methods you need for your tests. You make the contents of the second (testable) file conditionally compiled (the classic #if DEBUG) so the encapsulation is still there for any release builds.

It’s kind of like endo-testing, but you’re extending the class “sideways” instead of “downwards”.

Basically, it’s breaking encapsulation in a controlled way, and for the most part, I think it’s a bad idea if you’re working with a new design. If, however, you’re trying to get some meaningful coverage for your legacy code (which wasn’t designed for testability), it can be a good stop-gap for dealing with the legacy code refactoring catch-22: you don’t want to make changes without tests, but you can’t make tests without making changes.

Any strategy, like this one, that allows you to get the first layer of tests down before further refactoring should be embraced as a good thing. If you find that you need to do this for new/original classes, my guess is that your class is too big and needs to be decomposed further into smaller, more cohesive, and more testable classes.

Testability in Isolation: Not just for automated testing

One of the code qualities that I’ve been advocating is “Testability in Isolation”. I’ve expanded the name from the NetObjectives-blessed quality of just “Testability” because I find it more explicit and useful. Everyone can say, “Sure, my code is testable, I test it all the time,” and be done with it. If you ask, “Can you test your business logic in isolation from your other layers?” you get a more meaningful understanding of the quality of their code.

Usually, I discuss testability in the context of unit tests,  using the strict definition of unit tests: automated, developer-authored, testing one thing in isolation without external dependencies. I’ve found recently that testability in isolation is a virtue when doing manual testing too.

I’ve been working on a Windows Forms application recently, and I found myself drilling down to the same part of the UI over and over while I tweaked one of the user controls. I could feel the feedback loop getting longer, so I created a separate Windows Forms project, which I called “Playground”, and made a way to get to that particular control with just one click.

In the playground UI, I use the same test doubles for things like persistence that I use in my unit tests (I’ve got a really nice in-memory-db fake to replace my file-based persistence layer). When I need to, I have the test harness pre-load enough fake data to work with, so when I click the “one button”, it’s set up to exercise what I want it to exercise. Tightens my feedback loop and speeds me up immensely.
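The pattern, roughly (sketched here in Java with invented names; the real app is C#/WinForms): persistence hides behind an interface, and the same in-memory fake serves both the unit tests and the playground harness, which pre-loads it with data before showing the control.

```java
import java.util.HashMap;
import java.util.Map;

// The persistence seam. The production implementation is file-based;
// nothing above this interface knows or cares.
interface NoteStore {
    void save(String id, String text);
    String load(String id);
}

// The in-memory fake, shared by the unit tests and the playground UI.
class InMemoryNoteStore implements NoteStore {
    private final Map<String, String> notes = new HashMap<>();

    public void save(String id, String text) {
        notes.put(id, text);
    }

    public String load(String id) {
        return notes.get(id);
    }
}
```

The playground’s “one button” just builds an InMemoryNoteStore, pre-loads it with whatever fake data the scenario needs, and hands it to the control under test, so no files, databases, or half-finished parts of the app are involved.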

It’s weird to think that I haven’t done this explicitly before (although in some web apps, it’s easy to just jump directly to the right page). I’m sure that I’m going to do it again, especially when working with testers: it can give them a way to test just one interaction or user interface component by itself without feeling blocked because parts of the app aren’t ready yet.

Program Launcher/Auto-Updater

I remember the first desktop app that I developed for a client. All of my coding experience until that point had been as  a web developer. Sure, I had done QA and technical support for a  shrink-wrapped/installed program before, but I left that organization to take a web developer job.

Even though I hadn’t yet heard of Agile or Scrum, I had grown accustomed to the notion of frequent incremental releases of value. The web made that easy to do. You didn’t have to worry about installers or versioning, just push the bits out to the server and you were good.

So, the first thing I did for the client app was make a program launcher: when you clicked the icon to start the app, you didn’t actually start the app itself, you started a little shell .exe that displayed a splash screen. It checked a web site for a newer version of the software. If there was one, it would pull it down, copy over the bits, and then start it.

When the currently installed app was the same version as the app on the server, it would just launch the app directly. This process made startup take only a few seconds longer, and it gave me a chance to display the really cool-looking splash screen I created.

It had to be a distinct .exe because I couldn’t overwrite the one that was running, obviously.
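The launcher’s decision logic was about as simple as it sounds. A sketch (in Java with invented names; the real launcher was a desktop .exe, and the download/copy step is elided here): compare the installed version against the one advertised on the server, component by component.

```java
class UpdateChecker {
    // Compares dotted version strings numerically, so that
    // "1.2.10" counts as newer than "1.2.9". Missing components
    // are treated as zero ("2.0" equals "2.0.0").
    static boolean remoteIsNewer(String installed, String remote) {
        String[] a = installed.split("\\.");
        String[] b = remote.split("\\.");
        int n = Math.max(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int ai = i < a.length ? Integer.parseInt(a[i]) : 0;
            int bi = i < b.length ? Integer.parseInt(b[i]) : 0;
            if (ai != bi) {
                return bi > ai;
            }
        }
        return false; // same version: launch the installed app directly
    }
}
```

If remoteIsNewer(...) returns true, the launcher downloads and copies the new build before starting it; otherwise it starts the installed app straight away.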

It worked like a charm, of course, and I could easily make updates to the client app as well as the server apps.  I never had to worry about supporting multiple versions of the software in production. The app was small enough, and the users all had LAN access, so that even on the days where you did have to download the new version, it always took less than a minute.

I remember thinking, at the time, “This is how all software is going to be in the future. A tiny little launcher app which will keep the main app in sync.”

If nothing else, this demonstrates my ability to predict the future, as I haven’t seen anyone else use this approach. On the Microsoft technology side, with .NET xcopy deployment, web services, and XML-based (vs. registry) configuration APIs, this should be easier than ever.

I wonder why this idea never took off. Is it like coding with fixed-width fonts, where people are just so stuck in the same way of doing things?