Wednesday, November 3, 2010

Test features, not implementations

I used to try to test my code. I would fire up some JUnit classes and go to town, trying to think of what parts of my code were testable. Having read a bit online about unit tests I would try to do all the things I was reading about - test public method signatures, parameter validations, and even do some wiring tests making sure my code was calling my other code in the ways I expected.

However, at the end of the day, I just didn't see all that much value in writing these tests. They did help a bit, and often made writing the actual code easier if I was dedicated enough to test drive. But it came at a high cost - not only did I have to spend time writing the tests, I had to spend time maintaining them as well. Every time I changed code I had written before, I had a horde of broken tests to deal with.

I really wanted to figure out this whole testing thing, but it just wasn't making sense.

Enter Behavior Driven Development (BDD). It was like a veil was lifted and I could see what I had been missing before. You still wrote tests, and wrote them first so you were still test driving your code, but instead of testing all the little details of your code, you tested features! It made so much sense.

Now, what is a feature, you might ask? I define a feature as anything your application is supposed to do. If you click a button and an overlay is supposed to open, that's a feature. If the overlay is supposed to display an image, that's a feature. If you click a close button and the overlay closes, that's a feature.

Let's assume you have a jQuery plugin named "overlay" that, when called, opens an overlay. Let's also assume you're using Jasmine for your JavaScript tests because it's awesome and BDD oriented.

Here's the wiring test for opening the overlay:

it('should call the overlay plugin', function() {
  spyOn($.fn, 'overlay');
  $('button').click();
  expect($.fn.overlay).toHaveBeenCalled();
});

Here's the feature test for opening the overlay:

it('should open an overlay', function() {
  $('button').click();
  expect('.overlay:visible').toExist();
});

They might not look all that different, but the difference is very significant.

If you wrote the wiring test, it would make sure that your plugin was indeed called. However, what if you wanted to refactor your code later? What if you needed to change the overlay plugin to only configure the overlay but not open it? Your wiring test would need to be rewritten. What if you had ten of these wiring tests? Hundreds? Any refactoring you do would cause a ton of test maintenance!

Now look at the feature driven test. What is it really testing? The user clicks a button, and then an overlay should be visible. There's your exact requirement from your customer and there's the test that makes sure it works. Refactor your heart out, the test can stay the same because you're testing your feature, not your implementation.
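
For concreteness, here's a minimal sketch of the kind of code both tests could be exercising, assuming a fixture with a button and a hidden overlay element:

// Hypothetical wiring: clicking the button asks the overlay plugin to show the overlay.
// The feature test doesn't care if this later becomes something else entirely,
// as long as .overlay ends up visible.
$('button').click(function() {
  $('.overlay').overlay();
});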

This is a very simple example, but the concept only gets more important as the features get more complicated. Your tests should always work for you instead of against you, and letting you refactor without needing to change your tests is a huge win.



P.S. I'm not saying you shouldn't ever write wiring tests, but I would treat them as plan B. They can still come in handy (e.g. when integrating with libraries that you can't control), but I would recommend against writing them for everything.

Sunday, August 1, 2010

Craft

A while back it occurred to me that not every software developer strives to be the best at their craft that they can be. Many don't even regard it as a craft at all, and see it simply as a way to make some money.

Now, I'm not talking about people who put in the 8-5 and go home because, admittedly, that's what I do. I mean people who are too content with the status quo, or unwilling to change it if they're not. People who aren't opposed to learning new things, as long as they can do it by accident.

I'm not sure why this revelation surprised me (it seems incredibly naive in hindsight), but it did. I guess it's because to me, the interesting part of writing software has always been learning new things, pushing the boundaries of what's been done, and trying to do better than you did last time.

We're not always presented with new, groundbreaking opportunities every day. For example, my current project is an Android app where our biggest goal is to "make it like the iPhone." Not the most exciting thing in the world.

However, the interesting parts are in the details. We're using some great libraries to make testing the app easier, and challenging ourselves to make a cleaner app with better patterns than the last one we did. Everyone on the team is engaged in making this the best app it can be for our client.

It's not always the technical details that matter either. We've been giving thought to how we can improve communication with clients, how we can begin a project with the best chance of success, and how we can make sure we're on track as the project continues. I also feel like I've made huge personal leaps by learning how TDD/BDD development really works and how pairing can help make you better as well.

So who are you? Are you someone who is content with what's been done before? Or are you someone who constantly strives to push things and make them better one step at a time? It doesn't have to encompass your whole life, but it's a fascinating world we work in, and I urge you to take a look.

Monday, July 26, 2010

Do your tools make sense?

A couple of days ago I was talking to a friend and he asked me what some of the fundamental things were that caused me to change jobs. After thinking about it for a while, I think I can start to answer it in reverse - what new things I have seen that I hope every organization I work for from here on out will have.

The first thing that came to mind is just doing things that make sense. This seems like it might be obvious and something that people do anyway, but I assure you, it is not. It is so far from obvious that I named this very blog after it - does having a cement cutting board that will dull your knives really make sense? Yet I used one every day for months.

One of the biggest tangible things I can think of is using tools that make our jobs as developers easier. Making our jobs easier = happy developers. We use IntelliJ instead of Eclipse since you get a complete polished package with awesome refactoring tools. We (mostly) use RubyMine instead of TextMate for the same reasons. We use git instead of subversion because you get a much more powerful feature set. We use github instead of our own server so we have less operations overhead. We use rails instead of Spring/Struts/Django/.NET/etc. because we feel it gets the job done easier and faster.

The list goes on and on, and it's always a work in progress, but that's what's so great about it. If a new tool comes along that we feel will make a significant difference, I guarantee you it will show up on people's machines right away.

The point of this isn't to say "you need to use tool xyz", but simply to say that it feels great to work for a place that is willing to invest in a toolset that lets you simply get work done.

It always baffled me before when I was told that I couldn't use a certain tool that had a cost involved (often under $100) because it was too expensive. If that tool had saved me even half an hour of time total the company would have broken even at the rate they billed me for.

One of the biggest examples of this I can think of involved code reviews. At my last company we had always tried to code review everything but had trouble making sure we were diligent about doing them. One day I decided that we needed something better. I started researching code review tools and found one in particular that looked really promising. There was a 30 day trial so I got it up and running in about an hour and we started using it on my team with great success.

The issues started when we tried to convince Those With Money to spend $500/seat on this tool. I completely understand wanting to make sure that you invest your money wisely, but the battle that ensued over this tool, when it had rave reviews from any engineer that used it, was lengthy and ended up with everyone upset and exhausted by the end.

The funny thing is, if this tool could prevent everyone using it from writing one bug every 3 months the company would have broken even on the cost in the first year of using it. (These numbers are based on their calculations.) We had already seen results much better than that in the first 1 month trial, and had already expressed other benefits such as knowledge transfer, reduced time bringing someone new onto a team, and cross-team reviews to name a few. Yet still the battle was fought every day for weeks until a decision was finally made.

(By the way, if you are doing code reviews, I highly recommend using Code Collaborator. It is well worth the price tag and was called out by others as the single biggest improvement in our engineering process that year.)

Probably the craziest situation that I have ever seen happened at another company I was working with. They had a development team of about 10 people, and all the engineers were given mediocre laptops as their sole development machines. To top it off, they were running a heavy webserver (Weblogic), had only 2GB of memory, and had 5400 RPM hard drives with software encryption used on every read/write.

To give you some data points - publishing code changes to the Weblogic server on my dev machine, a fairly pimped out Mac Pro, took about 12 seconds. To publish changes on their laptops took around 7 minutes. Really. I timed it. Twice.

This means that _every_single_code_change_ they did took them 7 minutes before they could see and test it. It was amazing to see their management continually shoot down requests for new machines and then complain about how unproductive this team was!

I guess what I'm really getting at is that working for a company that honestly supports you in getting your work done is an amazing thing. Does it cost more money up front? Yes. Will it pay off in the end? Absolutely. Is it awesome to get to use tools, languages, and frameworks that the developers like? Oh yeah.

So, until next time, keep your toolkit close and your knives sharp.

Wednesday, July 21, 2010

Why I like pair programming (and why I'm not leaving)

A few friends have pointed out a blog post from someone who recently left Pivotal Labs because he didn't enjoy pair programming full time. (It's well worth the few minutes of time to go read it.)

I read through a bunch of the comments on the blog and the linking reddit page and most of them started as "Well I've never done paired programming but..." or "That's stupid! I did pair programming once and...", so I figured I would write up my own reply as someone who has done pair programming full time for around 4 months now.

Mark's post has gotten me thinking a lot about the whole pairing thing and whether it is all that it's cracked up to be. I'm always trying to reevaluate the practices and techniques I'm using, even if I do work for a place that has mandatory pairing on my agenda. (For the record, a few months ago there was a company wide forum where everyone reevaluated as well. The result was nearly unanimous for pairing full time.)

One of the biggest arguments that I heard from the post was that pairing doesn't allow for that sort of reflective, meditative approach to programming. You know, those times when you have a big problem in front of you and you need to spend at least an hour or more thinking about it and coming up with a wonderfully elegant solution. Some people are better at this than others, and Mark seems to be one of them.

In fact, my biggest challenge with pairing is probably the same thing. I like to think I'm pretty good at that technique myself, and I've had my share of cool ideas that have paid off from doing things that way.

However, something just doesn't sit right with me about that. Sure, that lone ranger style of coding can be a lot of fun, and you can feel very proud of your results at the end, but I've found that when pairing you almost always arrive at solutions that are just as good, and often better, the first time around than you would get on your own.

Just last week, my pair and I had a problem where I had a vision of how I thought a part of our system should work out. Part of me was screaming, "I know how to do this... I don't want to explain it, I just want to go off and do it by myself and get him to review it when I'm done!" However, because I was pairing, I found that my pair had some great insight into the problem, and, even though the vision was still mine, the end result came out way better than it would have if I had done it on my own (and I learned a lot from my pair along the way!)

I think that we often overestimate how well we do things the first time. Looking back at some of the code I've been the most proud of, I could convince myself I do it great the first time around. But the reality is that nearly every single one of those things has been through a number of revisions that have gotten the code to the state it is in today. Had I paired on those features, I'm sure they would have ended up in a better place more quickly.

Ultimately, I think the pill I've had to swallow about this has been to give up the self for the greater good. As I mentioned, it can be a lot of fun and feel very rewarding to go off and do some meditative programming, but is that what is really best for the project? For the team? For your client? Yes, great things can come from that, but I've seen time and time again that you get to a better place faster by pairing. You also spread knowledge between team members faster, write fewer bugs, and generally stay more on course by writing less unnecessary code.

Additionally, I find the extra knowledge and learning I get from pairing helps make up for the satisfaction I used to get from being the lone coder. There are many ways to collect paychecks as a programmer, and I can't fault someone like Mark for deciding pairing isn't how they want to do it. But for now, I've decided to try to give up a bit of my pride, drink the kool-aid, and embrace pairing and the team centric atmosphere it creates. I can't ignore the results I see every single day of being more productive and having a better product to show for my efforts. Obviously you will have to make your own decisions for yourself.

P.S. Here's another post from a Pivotal person on the same topic. Also worth a read.

Saturday, June 12, 2010

Trifecta

Software engineers are a strange bunch. We're forced to thrive on bits of random trivia, and are often literal to the point of being intolerable. It's sort of a cart/horse dilemma though; I'm not sure how to write the causal equation, although I suspect that software gives us an outlet for our oddities more than it creates them.

One of the seemingly more peculiar attributes (good) software engineers have is the fact that they want to be productive and do good work. If given the choice to face a challenge or avoid doing any real work most of us would pick the challenge every time. What this gives a manager of such engineers is huge potential that just needs to be pointed in the right direction.

However, there are a few key things that can squash the enthusiasm of any engineer. Or rather, there are a few things that need to exist for them to be content. After many conversations about this with my colleagues, I think I've been able to boil it down to three fundamental concepts.

Here, I present to you, the Software Engineer's Trifecta of Happiness:

- Cool stuff to work on
- A good work environment
- Fair compensation

If we can have these three things, happy and productive engineers we will be.

Cool stuff to work on

First and foremost, cool stuff to work on is going to be key to most software engineers. If we are bored at work we won't be sticking around that long. This doesn't mean that we have to be constantly inventing the latest and greatest startup idea with bleeding edge technology, but we need engaging challenges to work on.

Bob's Brochure Site #7 might not sound all that exciting, but there can almost always be challenges found in the details of even a mundane task. However, if our work environment doesn't encourage innovation and forces us to do Bob's Brochure Site #7 exactly the same way as we did #1-#6, any chance of it being a cool project is nil.

Software engineers love our craft because it lets us invent things. One of our core principles even states that repeating ourselves is one of the greatest sins you can commit. If we're not given the freedom to experiment and try to improve the project and ourselves we'll grow tired of any project that is thrown our way.

A good work environment (a.k.a. respect)

Respect is our currency. It's how we interact with each other and how we organize ourselves. Pretty much everything that goes along with a software engineer's good work environment has to do with respect. Sure, fun perks are great, but even that gets translated to showing respect for the employees' well being.

A good work environment also has to be supportive and empowering. Like I mentioned about cool projects, if the environment isn't conducive to growing and being productive it's really hard to be satisfied. If we can't feel any ownership over the projects we work on our motivation plummets.

We're stereotypically quick to complain, but that stems from having a very low tolerance for things that don't make sense. This low tolerance is part of why we are good at our jobs - being able to quickly decide something is suboptimal is a key skill to have for software development. There are few things that kill our enthusiasm faster than feeling like management or policies that don't make sense are getting in the way of our work. To us, it seems like these kinds of managers or policy makers don't respect us enough to bother making good, thoughtful decisions. We're coming in every day, trying to work, and people that are supposed to be on our team are putting roadblocks in our way.

I could go on and on about what makes a good work environment, but it's way too much to go into right now. To summarize - treat us with respect. We don't have to get our way all the time, but you can't pull the wool over our eyes about things that directly affect us and expect us to not notice. Let us know why things are happening that seem counterintuitive and they will be much easier to swallow. If you can do that and give us an environment that encourages us to grow and constantly improve ourselves, we'll happily crank out feature after feature. And always remember the cement cutting board principle - let people affected by the policies help make the decisions.

Fair compensation

Ah, everyone's favorite topic. At the end of the day, the reason everyone goes to work is to get paid. The trick is to make us forget that's why.

Money really isn't a good motivator for software engineers. We're more than happy to get paid a fair market price to do our jobs. However, money is a great de-motivator. Not getting paid competitively ties back to the respect issue. Unless you can provide amazing projects at an amazing place, we're going to notice that our paychecks are lighter than they could be. We're pretty good at doing greater-than and less-than comparisons ;) Soon we'll begin to feel that people don't appreciate the work we do.

I think places like my current job get this one right. Everyone's compensation is presented neatly and fairly and is competitive in the local market (really). You know exactly what you're getting, and then you can forget about it. The last thing you want is a bunch of employees focused on their compensation. You need those brain cycles spent on development, not grumbling about money.



So there you have it, the Software Engineer Trifecta of Happiness. Three fundamental principles that, when combined, are surprisingly potent. The best part is all the legs of the trifecta focus on being more productive and being able to take pride in a job well done, something that everyone from clients to managers to the employees themselves want anyway.

Thursday, May 13, 2010

Extreme musings pt. 2

This is part 2 of my reflections after 2 months of XP development. (Part 1 can be found over yonder.)

A culture of productivity

One thing that XP really fosters is a culture of getting things done. The entire process is focused around delivering tangible business value to your customer. TDD, and even more so BDD, are set up so you only write the code you need to get an actual task done. Gone are the days of engineering an entire framework only to later realize it won't work with future requirements or that it wasn't needed at all.

One thing I do miss with this style of development is that it can be fun to architect the frameworks. There is definitely a perceived elegance to throwing some headphones on, getting in the zone, and cranking out huge swaths of code.

However, even though I think I've had success with writing those kinds of frameworks, I can't argue that XP delivers more business value faster. There is still room for those frameworks in XP, but you don't end up writing them until there is a real need for them.

I've heard a saying before that goes something like "The first time you write something, do a one-off, the second time do a one-off, and the third time figure out if there's an abstraction." This is what XP guides you to do.

Another thing that's really refreshing is that everyone I now work with is simply focused on being productive. It's not a burden mandated by some whip-cracking exec so much as it is just who they are. An XP shop, at least the one I'm at, seems to automatically weed out anyone who would rather spend half a day surfing the web than be working. The fact that your pair can probably tell the difference between reddit and an IDE probably helps ;)

Less stress

As a whole, the developers I work with who are doing XP are less stressed than any other group I've ever seen. A lot less actually.

A common myth seems to be that if you're not stressed, you're not working hard enough. It's as if there is this secret potential that can only be unleashed if you're stressed. I think it's just the opposite, at least with people who naturally want to be productive. (And do you really want to work with anyone who's not?)

XP (and Agile) is all about getting the most value out of the time you have. You have 4 developers and 2 weeks - let's get the most value we can out of it. Everyone is considered on the same team and everyone has the same goal.

As long as everyone on the project agrees to stick to that approach it results in really happy developers who are excited to crank out quality features faster than I've ever seen. I've been lucky and haven't yet had to deal with external pressures, such as a conference or presentation, that might squeeze the timeline, but I would say there is less total stress among the ~50 developers I currently work with than there was among the 12 developers I was with previously.

Also, in true XP fashion, overtime is pretty much disallowed. There are rare cases where this rule is broken, but it's far from the norm. Having a team of fully rested, alert developers often seems to be under-appreciated elsewhere, and I'm glad that, as an anti-cement-cutting-board company should, a good life/work balance is highly valued.

Happy customer

All these factors seem to result in happy customers, which is a huge win. Having an honest and engaging relationship with your customer might seem scary at first, but when it works it's a beautiful thing.

Most of our projects have the product owner on-site where they can see exactly what is going on every single day. There's really no way to hide anything, and in the end you realize you don't need to.

For example, my current customer has been quite satisfied with the results we've been able to deliver, and I think it's largely the XP process we can thank for that. We've been honest with him, and he's appreciated that. When things are good we let him know, and when things aren't going as planned we tell him honestly. The transparency has built a level of trust within the team that is really refreshing. Ultimately, it's better to have your customer decide to abandon a non-essential time sink of a task than to pretend everything is ok and end up delivering sub-par quality.

If XP is wrong, I don't wanna be right

At the end of the day, XP isn't a perfect glowing mecca of butterflies and sunshine, but it's as close as I've seen so far. I often feel like I'm at some sort of oasis where developers have gathered after being worn out at other jobs.

It's really great to be a part of a dedicated team of people solving real problems and providing real value every single day. When you couple that with an environment of honesty and an intentional lack of overtime, you don't have to twist my arm to convince me it's where I want to be right now.

People ask me how we can survive in the economy when we bill in pairs and pay employees well. I tell them I don't know all the details, but we constantly have so much work we're turning down jobs - so something must be going right. At this point, as long as I can have a job where I get to work like this I'll let someone else worry about the rest :)

Monday, May 10, 2010

Extreme musings pt. 1

The main reason I started this blog was to chronicle my thoughts as I transitioned not only to a new job but to an entirely new development style. Since I'm approaching the 2 month mark, I figured now was a good time to check in.

To recap, I'm now doing fairly strict XP - pair programming full time, test driving everything, and an engaged customer whose job it is to manage our backlog. Previously I was doing some sort of half-assed scrum/agile where our biggest victories were having CI, sprint planning meetings, and short (2 week) releases.

Pairing

One of the biggest changes and something I've written about before is pair programming. I had done some pair programming before both in school and professionally, but only when tackling an especially hard problem. Doing it for a full 8 hours every single day is a different beast entirely.

It probably goes without saying that the most critical piece of the pair programming equation is who you are pairing with. I've had the opportunity to pair with 6 people in the past 2 months and each person has been radically different than the last. I could probably write an entire post about different pair personalities, but to keep this at a high level, I think there are 2 main points I've discovered: There are people I really enjoy pairing with and it's been equally helpful to pair with everyone on the team. (To be fair, I haven't paired with anyone I've disliked, but inevitably you'll work better with some than others.)

Obviously it sounds like it would be great to pair with the people you really click with, and it is. But I think that pairing with the other people is equally important. There are always things to learn from pairing with someone new, and I've found it's very important when working on a team to understand the position of all team members on the project. Most of the time when developing, you're spending your brain cycles figuring out how pieces of the application should go together, and the more people you've paired with, the better your understanding of the pieces will be.

Tests f-ing rock

Having a full test suite on your application is awesome. I really can't say that enough. Having a full test suite on your application is awesome. There, I said it again.

I've been writing unit tests for a few years and before doing full time BDD I always felt like I was missing an important piece of why tests are useful. I get it now. When you truly have complete test coverage of your application you are in a position of great power and flexibility. Refactoring, even scary refactoring, is suddenly conquerable. Bugs are more often "oops, we forgot about..." instead of "dude, did you even run this before you committed?!"

The thing about tests is that if you don't have full coverage, you're only getting a fraction of the benefit. It's not until you can reliably count on your tests catching any bugs that you can reach the full potential of a test suite. I do think that some tests are better than no tests, but once you hit the magical full coverage mark, it's an exponential growth in what you get out of them.

(By "full coverage" I don't necessarily mean every single line is directly tested, but that the intent of each method and class is covered. Think BDD instead of TDD, although I'm sure more "pure" TDD can have the same benefits as well.)

...

This post started to get a bit longer than I originally expected so I'm going to break it up into more parts... installment 2 coming soon :)

Friday, May 7, 2010

Good enough

When doing XP development, I've found that the notion of "good enough" comes up a lot. It's a pretty common thing to have to think about when programming, but when you're pairing full time it's something that actually gets discussed directly instead of just going by your gut or by how ambitious/lazy you're feeling on any given day.

When writing code, as with most things in life, there isn't really a black and white concept of "done". Think about mowing your lawn - the point is to get all the blades of grass to a short, uniform length. How uniform your grass must be is really a matter of opinion. Do you let the lawn mower get it relatively close? Probably. Do you go around cutting every single blade by hand with a ruler? I hope not. Do you spend double the time to trim all the edges? Probably. Do you spend another 50% more time to trim that hard to reach part behind the planter that you can only see from that one specific angle? Maybe.

This same pattern applies almost directly to software. Do you make the code meet the requirements? Yes. Do you fine tune every single line of code for the utmost clarity? Probably not. Do you make the new APIs you write as easy to use as you can? Maybe.

So how good is good enough? It's always a tradeoff between time and quality. When doing TDD/BDD the test suite can help you with this balance by giving you a safety net of tests to rely on. You can code with confidence that what you're doing will work and that it should play nicely with the rest of the code base. I think this lets people make the "good enough" call earlier than they might otherwise. If your tests run green and your requirements are fulfilled you're done, right?

Usually this is great - the sooner you can confidently say "good enough", the sooner you can start working on the next feature. The time/quality scale has been shifted in your favor.

However, I think this safety net can lead people into other traps as well. If you always throw the "good enough" flag as soon as you can, you've become a high risk for technical debt. If you ignore the grass that grows behind the planter for too long, it can start to take over other parts of your yard, and now you have a big problem on your hands.

This problem definitely isn't specific to test driven projects - I've seen many a code base with the programmers gasping for breath in an ocean of debt. The difference I've seen in the TDD projects is that you can actually survive in these oceans. When you have your life jacket of tests strapped to you, the waves don't seem so scary. But again, I'm not sure that is really the best thing either.

One of the major reasons for doing TDD/BDD is that you increase your velocity. You can plow forward without having to worry about the past. However, like on any project, if you don't take time to keep things clean, the technical debt you'll accrue will quickly start to work against whatever velocity gains you made in the first place. Having a suite of tests certainly helps, but it doesn't make you immune to these kinds of things.

So I issue this plea to all developers, those doing TDD and those not: Remember to take time to keep things clean and sensical in your code and I will do the same, as someday I may be in your code base and you in mine.

Tuesday, May 4, 2010

Leave my constructors alone!

That's right, leave my constructors alone! Yes, they are mine. Why, you might ask? Because someday I might want to use them, and if they are doing anything fancy I might not be too happy about it!

Consider the following:

function Foo(bar) {
  this.bar = bar || {};

  this.refresh();
}

This is a pretty common pattern - you set fields and do whatever initialization is needed. In this case, the constructor is calling a refresh method, presumably so new objects and refreshed objects all get set up with the same code.

This might look fairly innocent, but what does that call to the refresh method really do? I don't know, and that's just the point. Maybe it sets some default values. Maybe it makes a database call. Maybe it makes an AJAX request for more data. This is an important thing to know when using an object written this way, as well as when you need them for testing.

What if, for example, refresh makes an AJAX request for more data and updates this.bar when it gets a response? (This could be a likely pattern when your views are data-bound: you would initially show default or cached values and then the view would update itself when new data came in.)
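
For concreteness, here's a hedged sketch of what that kind of refresh might look like (the jQuery call and endpoint are made up for illustration):

// Hypothetical: fetch fresh data and replace the cached value when the response arrives.
Foo.prototype.refresh = function() {
  var self = this;
  $.get('/foo/latest', function(data) {
    self.bar = data; // a data-bound view would re-render here
  });
};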

Now, think of testing with Foo objects. Every time you need one you have to worry about an AJAX call being fired by the constructor. Even if you have fake responses in place this can add a lot of complexity. And what about your code? If your tests are hard to maintain, will the actual code be that much better?

Another example - what if, for the purposes of a test, we needed to mock the refresh method before it is called? In the JavaScript example above that would be possible if you are using prototypal class patterns (Foo.prototype.refresh = ...).

But, what if we needed to mock the refresh method for just one instance of Foo that you're particularly interested in? Now you're in trouble. True, there are some techniques (hacks, really) you might be able to use, such as changing Foo.prototype's refresh method before constructing the instance you're interested in and then changing it back, but that might not always work.
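
A rough sketch of that prototype-swapping trick, assuming the prototypal Foo above:

// Temporarily replace refresh so the constructor's call does nothing,
// then put the real one back for everyone else. Workable, but fragile.
var realRefresh = Foo.prototype.refresh;
Foo.prototype.refresh = function() {};

var quietFoo = new Foo({});

Foo.prototype.refresh = realRefresh;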

What if your class is written like this:

function Foo(bar) {
  var self = new Base();

  refresh();

  return self;

  function refresh() {
    // Do AJAX-y things
  }
}

Now you're really screwed. How do you control the refresh method in this case? There's no way to prevent it from being called with the actual implementation. Even making refresh public by putting it directly on self wouldn't help, you still couldn't change its behavior until after the object is constructed.

I've found that when object construction is as simple and fast as possible the world is a much happier place. Use the constructors to save off some fields (this.bar = bar) and set some default values if needed.

That's it.

Nice and simple.

If you're concerned that some necessary behavior (such as calling refresh) might be forgotten, create a factory method that will wrap that up for you:

function createAndRefreshFoo(bar) {
  var foo = new Foo(bar);
  foo.refresh();
  return foo;
}

It's not foolproof since your objects could still be constructed without this function, but I think it's a worthy compromise to make.

The overarching theme here is to avoid design patterns that limit you down the road. Having any sort of complex behavior in your constructors is definitely one of those patterns and I would argue it should be avoided unless absolutely necessary.

Keep things simple. Help your fellow developers. Help yourself. And leave my constructors alone!

Thursday, April 15, 2010

JS Class Patterns

I've been in a lot of discussions lately about JavaScript design patterns, specifically around how to write a class. The two patterns in the lead are as follows:

Decorator pattern, new Foo() returns a decorated BaseClass object:

function Foo(bar) {
  var self = new BaseClass();

  var baz = 42;

  self.publicFn = function() {
    return bar + baz;
  };

  return self;

  function privateFn() {
    bar = 0;
  }
}

Prototype pattern, more traditional JS:

function Foo(bar) {
  BaseClass.call(this);

  this.bar_ = bar;
  this.baz_ = 42;
}
Foo.inheritsFrom(BaseClass);

Foo.prototype.publicFn = function() {
  return this.bar_;
};

Foo.prototype.privateFn_ = function() {
  this.bar_ = 0;
};

At first, I was 110% in the Prototype camp. It feels like how JS is "supposed" to be used and has some undeniable advantages. Object construction using prototype instead of decoration is much faster, uses less memory, and is usually considered a Good Thing™.

It also lets you know what type objects are in a more intuitive way. With the Prototype pattern, new Foo() instanceof Foo is true, whereas with the Decorator pattern the instance only passes instanceof BaseClass, since what you really have is a decorated BaseClass object.
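
A hedged sketch of that difference, assuming the two Foo definitions above (and that inheritsFrom wires Foo's prototype chain up to BaseClass):

// With the Prototype pattern:
var protoFoo = new Foo();
protoFoo instanceof Foo;       // true
protoFoo instanceof BaseClass; // true

// With the Decorator pattern, the constructor returns the decorated BaseClass
// object, so Foo never appears in the instance's prototype chain:
var decoratedFoo = new Foo();
decoratedFoo instanceof Foo;       // false
decoratedFoo instanceof BaseClass; // true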

Using the prototype object to add functions to your class also gives you a lot of flexibility regarding function overriding. It makes mocking objects much easier, and lets you wrap functions with things like logging on a global level instead of on an instance by instance basis.
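
For example, here's a rough sketch (using the Prototype-pattern Foo above) of wrapping a method with logging for every instance at once:

// Save the original and wrap it; every existing and future instance picks this up.
var originalPublicFn = Foo.prototype.publicFn;
Foo.prototype.publicFn = function() {
  console.log('publicFn called');
  return originalPublicFn.apply(this, arguments);
};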

The Decorator pattern does have some advantages though. You can have privately scoped variables and functions. You don't have to worry about execution scope since you always have a variable "self" to reference. You definitely hear a lot less "is this this, or is this that... which this is this?!" when using the Decorator pattern.

Overall, the Decorator pattern behaves much more like other OO languages at the cost of performance and flexibility you get with the prototype object.

When listed out like that, I'm still at least 99% in the Prototype pattern camp. However there are some other key factors that have started to change my mind:

Talking to the team, we discovered that development speed has increased significantly by moving to the Decorator pattern. This could be attributed to a lack of JS knowledge by the developers since the Prototype pattern is fairly JS specific, but they are all very smart, talented engineers so the learning curve effects should be minimized. I think even the reduced debugging time from having the "self" variable has helped a lot. I've done a lot of JS development, and I know I forget to bind functions to the right "this" from time to time.

Test effectiveness has also been much better with the Decorator pattern. With actual private variables and functions there are fewer shortcuts taken in tests. We end up doing less pure "unit" testing and more domain testing, and since we don't have automated integration Selenium-ish tests for our current project's platform, that has been quite helpful.

Also, on our project, the performance loss from not using the prototype object is basically a non-issue, so the Decorator pattern doesn't hurt us much in that regard. We are only constructing a handful of objects at a time (less than 100) so we're only sacrificing a couple milliseconds of CPU cycles if that.

I don't claim to have the all encompassing answer for which pattern is better, but for our project, I'm starting to lean more towards the Decorator pattern, or at least acknowledge how it's helped our development. Anything that can improve both development speed and quality wins major points in my book.

One of the reasons I bring this up on the blog is that the whole discussion is the perfect anti-cement cutting board story. In another world, someone could come in and impose cement cutting boards like "You need to use the Prototype pattern since that's the most JavaScripty way." Or even "Both those are wrong, use Crockford's latest and greatest pattern."

However, on our project, we, the developers, not only get to try multiple patterns but also discuss and come up with a solution that works best for us in the real world. These kinds of decisions couldn't come from anyone other than those in the trenches with real hands-on experience.

Ultimately, we are the ones it matters most to since we are the ones that have to deliver a great product to our client. Even if we use a design pattern that loses us some academic brownie points, if we deliver a better product faster, we still win, our client wins, and that's what really matters.

Wednesday, April 14, 2010

What's in a comment?

One of the biggest surprises I had at my new job was finding out that writing code comments pretty much doesn't happen. It's not explicitly forbidden, but it is unofficially looked down upon by the developers.

Previously, I have always used code comments - Javadoc style method comments were more or less required (for the useless metric of code/comment ratio), and I would sprinkle others throughout the code as I thought was needed. I definitely don't want to fall into some sort of macho "you need comments?!" type of mentality, but if there is a better way of doing things I'm all ears.

Around my new office, the general thought on comments seems to be that if you need them you're doing something wrong. Also, we will often look at tests to see what any piece of code should do. Since we're using Jasmine (basically modeled after RSpec) each of our tests starts out more or less like a code comment anyway.

For example, say you saw this code:

MyApp.prototype.launch = function(action) {
  if (action == 'viewActivePages') {
    this.view.showActivePages();
  }
}

Reading this, you start wondering when launch would get called this way - there isn't any time you can think of where you would want to launch with this action. Then, looking at the tests you see this:

desc("#launch") {
  desc("when user taps dialog") {
    it("should show the active pages") {
      myApp.launch('viewActivePages');
      ...
    }
  }
}

Suddenly it all makes sense, you remember that the app can be launched by tapping on a dialog, and when that happens you should expect to see the active pages. The tests have successfully served as the comments.

This clarity could have also been accomplished with a comment:

MyApp.prototype.launch = function(action) {
  if (action == 'viewActivePages') { // tapping on dialog
    this.view.showActivePages();
  }
}

However, what if after looking at that comment you saw that the tests instead said this:

desc("#launch") {
  desc("when user taps dialog") {
    it("should show the user's work items") {
      myApp.launch('viewWorkItems');
      ...
    }
  }
}

Now you have conflicting information. The comment says that 'viewActivePages' should happen when a user taps on a dialog, but the tests say that 'viewWorkItems' should happen instead.

I dragged you through the example above so you will have an idea of where I'm coming from on this. I don't know what the right answer is for whether or not to write code comments, but here is what I have figured out so far:

- Code comments, when they exist, should explain the why, code should explain the how
- Comments should be used when the code can't be made as clear as you would like (happens in CSS a lot)
- If you feel you have to write a ton of comments, you probably are doing it wrong
- Reading a comment is a lot faster than looking at a test
- An accurate comment can save you a lot of time

However, I also have found this to be true:

- Comments are often inaccurate, they get outdated very quickly and aren't well maintained
- An inaccurate comment is much worse than no comment at all
- Tests aren't perfect but are maintained much more than comments are

One place that I definitely think comments should be included is when the code can't convey all the information. For example, if you chose one algorithm over another because it performed better, it's probably worth putting in a comment to say so. The old, slower algorithm won't be left around for people reading your code to see what was replaced or why.
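
For example, a hedged sketch of the kind of "why" comment I mean (the scenario here is made up):

// Binary search instead of a simple scan: the id list is sorted, can get large,
// and the linear version showed up in profiling on the device.
function findIndex(sortedIds, id) {
  var low = 0, high = sortedIds.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedIds[mid] === id) { return mid; }
    if (sortedIds[mid] < id) { low = mid + 1; } else { high = mid - 1; }
  }
  return -1;
}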

Another place that comments can be very useful is in things like CSS where you are very limited in how you code things. Conveying intent can be very useful when you come in later and try to figure out why something was done a certain way.

So what do you think? What is the right time to write comments? What should your threshold be for when warning bells go off when you find yourself wanting to write one?

Tuesday, April 6, 2010

You should "should"!

I never thought I would grow attached to the word "should". Weighing in at around six letters, it seemed fairly unremarkable; however, now I use it many times every day. "It should show a login page when the user starts the app." "It should let the user set notifications to either on or off." "It should overload the constructor until you can't possibly understand what is going on." (Ok, maybe not that last one... more on that later.)

These "should" statements above are called user stories, and in agile development, these are what the developers are given to work on. You get a big ol' queue of user stories and start cranking from the top, marking stories complete as you go.

So, you might be wondering, what is so great about the whole "should" thing? Couldn't you phrase the stories like this:

"Show login page when app starts"
"Set notifications to on/off"

They are shorter and probably display better in your bug tracker, why bother with all this extra typing? I mean, what are you... a Java developer?!

In my experience, stories like the ones above are only one step away from the dreaded one-word-requirement:

"Login page"
"Notification settings"

Ok, two words, but still. It sounds like a caveman wrote the stories above... What should we work on next? "Login page!" Well what should that do? "Login page!" When should we show it? "LOGIN PAGE!" At that point, your customer clubs you over the head and finds another vendor wondering why you were too stupid to know exactly what was meant by "Login page".

How often have you seen requirements like this and scratched your head wondering what to do? I've had features that lasted for multiple months based on three words written in a meeting I didn't even attend.

Now, try to phrase the requirements above using the word "should". Can you make a caveman story with "should"? I dare you to try it. Even better, try to write the stories with both the words "should" and "user". "Login page" turns into "It should show a login page when the user starts the app" and suddenly you have an actionable story.

I'm not claiming that you can't write good stories without the word "should", I think that "Show login page when app starts" is completely fine. However, you can prevent the slippery slope toward the one-word-requirements by making "should" a requirement for your stories.

Another effect of writing your stories using "should" is that they become much more manageable - exactly what agile says you should do. It's hard to write a giant requirement like "CMS" as a user story with "should". It becomes obvious pretty quickly that a story like "It should let the user manage content for each page and let them save content before publishing and should let them publish only if they have permission and should check whether content is already there and if so prompt and if not add it to the publish queue" is really multiple stories.

It also helps make your story scope easier to understand. "It should show the login page when a user starts the app" doesn't say anything about not showing the login page if the user has cached credentials. If that is also a requirement, it should be another story. By writing your stories this way, both the developers and clients can be much more clear about what to expect when a story is delivered.

It's no coincidence that test frameworks like RSpec and Jasmine start each test with it("should ..."). It works well for the exact same reasons. Which of the following tests is easier to understand:

it("should update the user's credentials after login")
testLoginUpdateUserCredentials()

So, don't write caveman requirements, you should "should"! (That Geico guy is going to get me for this...)

Friday, April 2, 2010

Thoughts on pairing

Pair programming can be hard. It can be daunting at first to work side by side with someone for 8 hours every day, talking about almost every decision you make as you work. However, I really think the benefits far outweigh any downsides. You'll learn more, work faster, and likely produce the best code you ever have. You'll be better for it, the team will be better for it, and the client will get a better product because of it.

Before I get into it, first a little disclaimer. The company I'm working for does pair programming full time. Every project for every client is done in pairs. I've also only been at the company for a couple weeks, so I'm still learning all this myself. However, I think that puts me in a great position to compare pairing to non-pairing since I have both fresh in my mind (and is also why I started this blog in the first place).

Really, pairing is all about finding your rhythm. Once you hit your groove, pairing doesn't seem to be as much work as it might at first. After about two weeks it started to feel really natural for me, and so far I definitely prefer it to working alone.

Here are a few techniques I've picked up so far to make it a little easier to do:

First is the setup. Each member of the pair should have their own keyboard and mouse plugged into the computer. This allows either person to immediately take control from their seat and help out. It also lets people use the hardware they like, cuts down on germs, and basically lets pairing work really smoothly. I really can't emphasize this part enough, I think that each person having their own keyboard/mouse combo is truly essential to being able to pair repeatedly day after day.

Next is a technique called ping-pong. This technique works especially well with TDD. Say, for example, I'm pairing with Joe. I'll write the first unit test and make sure it fails for the right reasons. Then, Joe will implement whatever is needed to make the test pass. He will then write the next test and I will make it pass. Doing this, you ensure that both people in the pair get some keyboard time for both the tests and code, and, more importantly, it makes sure that both people in the pair understand each feature that is being added.

Much less formal than ping-pong, but even more important: don't be the victim of a keyboard controller! In each pair, inevitably there will be one personality that is more controlling than the other. Maybe one person is more familiar with the language or the framework being used, or maybe they just really like typing. The important part is to make sure that you are trading off who is driving regularly.

I would say 5 minutes is about the longest you want one person to be at the wheel, and ideally even shorter than that. One of the big benefits of pair programming is the knowledge transfer that goes on, and even if someone is slower at first, it's worth making the investment in them to help them become more productive.

Ping-pong also helps combat the keyboard-controller, so if you're having problems either being one or getting pushed around by one, try implementing a formal ping-pong coding style.

Another thing about pair programming that might not seem intuitive at first, is do everything as a pair! If you have a question about something and need to go talk to someone, go together. If someone else needs to ask you a question about a feature, answer it together. Remember, programmers aren't limited by their typing speed. Any seemingly lost productivity by having 2 people involved in discussions will be quickly made up by having 2 knowledgeable people working on the project.

TLDR; Pairing can be hard, especially at first. But if you stick with it and find your rhythm, it is definitely worth the investment.

Tuesday, March 30, 2010

Re: How deep do you test?

Just a quick note to follow up my previous post on how deep do you test. I had mentioned some frustration in our test suite regarding integration style tests with asynchronously loaded data. The old adage rings true:

If your tests are hard to write, you're doing it wrong.

I spent the better part of yesterday refactoring the code dealing with the async data after we found that it could have caused some really sneaky bugs. The code is much cleaner now, and we all agreed it was much easier to understand. Everyone was happy and there was much rejoicing.

One last comment before I sign off - the bugs we prevented would have been incredibly hard to track down, and would have likely only shown up intermittently due to cell network latency. TDD really saved us on this one, since without it, we would have never seen the warning signs and realized how fragile some of the logic really was.

Why you should pair program

My new job uses pretty strict XP for software development. I say pretty strict, because as I understand it, they started with strict XP and have adapted it to their needs. To me, coming from a half-hearted Agile/Scrum shop, the two most noticeable features about XP are pair programming and Test Driven Design|Development (TDD).

It seems like, to most people, pair programming is the biggest mystery of the whole process. Before trying it full time, even I was a little bit skeptical as to whether or not it was really worth it. I mean, really, two people doing one job?! ;) (A little background for you non-technical types - pair programming, in a nutshell, is two people sitting at a single computer working on something collaboratively.)

The biggest and most obvious question about pair programming is how can you possibly get as much work done when working in pairs. It's usually phrased something like this:

- Jack can get feature x done in 9 hours.
- Sally can get feature y done in 5 hours.
- Can Jack and Sally get both features done in less than 14 hours by pair programming?

Only having pair programmed for about a week (8 days to be precise), I don't know that I have a good answer for that question quite yet. I also think that the question is going about it the wrong way.

For starters, here's what I have noticed about pair programming so far:

- Having begun a new project on a new framework on a new platform, I came up to speed at least 3-4 times faster than I would have on my own
- Knowledge transfer happens in real time and is unavoidable
- Code reviews are mostly unnecessary
- There are far fewer bugs
- When one of us gets stumped, about 70% of the time the other person has an immediate answer to keep the pair going
- Time wasted trying to run and debug code because of typos, etc. is nearly eliminated
- Time wasted on Digg and Reddit drops to 0
- The job is much more engaging so I don't mind a Digg-less lifestyle
- I make fewer design decisions that I end up having to change later
- My brain hurts when I go home every night, but in a good way

Here are some other reasons why I think the question asking about the 14 hours is overly simplistic:

- The cost of bugs, even ones caught internally, is HUGE
- Code that can be read by 2 people is much more likely to be able to be read by more than 2 people
- Code that can be read by more than 2 people is much easier to maintain
- When a pair says something is done, it's more likely to actually be done, so tracking progress is easier

I think a better way to phrase the 14 hours question is this: Is a team's velocity higher when pair programming? Keep in mind that bugs don't count toward velocity, only new delivered features do.

Even with all these bullet points above, I would guess that pairing is still around 50% faster than doing something on your own. Think about when you're writing code, are you really limited by your typing speed? Most of your actual time is spent thinking about the problem, trying to find the right approach, googling, and testing. Having a pair to work with seriously cuts down on the time required for these parts of development.

So ask yourself, should you be pair programming?

Friday, March 26, 2010

How deep do you test?

I've been talking a lot recently about what the right level of testing is on my current project. Previously, on the Java projects I worked on, I was a full blown mocker - anything but the class I was testing would likely be mocked, and I would try to do pure "unit tests", testing only the code in one class at a time. Mockito was my friend and the world was good.

Now I'm working on a WebOS app so I'm back in the land of JavaScript and dynamic languages. I've never done any real JS testing before (or any dynamic language for that matter), so I'm trying to figure this all out for the first time.

The question of how deep to test keeps coming up because we're running into some significant test complexity. We're not mocking many of our own classes, only the WebOS classes, so our tests are really halfway between integration tests and true unit tests.

This has its advantages, especially in a dynamic language where parameters are only checked at runtime. If we were doing pure unit tests and mocking the world away, we could be creating a fantasy where our class thinks it's running just fine, but isn't passing the right parameters or could possibly even be calling mocked functions that don't exist.
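
To make that concrete, here's a contrived sketch of the fantasy scenario. The helper object and the function names are made up purely for illustration - imagine the real helper exposes fetch(), but the code under test calls a misspelled fetchData(). With the helper fully stubbed out, the test stays green while production blows up:

it("passes even though Foo calls a function the real helper doesn't have", function() {
  var foo = new Foo();
  // Hand-rolled stub: the real helper would have fetch(), not fetchData()
  foo.helper = { fetchData: jasmine.createSpy('fetchData') };

  foo.doWork(function() {}); // assume doWork calls this.helper.fetchData(...) internally

  // The stub happily records the bogus call, so the test passes
  expect(foo.helper.fetchData).wasCalled();
});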

On the other hand, without mocking, our test complexity goes way up, especially since we're dealing with a lot of async behavior. Consider the following test:
it("should call the callback delegate", function() {
  var foo = new Foo();
  var myCallback = function() {};

  foo.doWork(myCallback);

  expect(myCallback).wasCalled();
});
Here we have a simple case where we are testing a doWork function on the Foo class and expecting that it calls a delegate at some point in the execution.

However, in reality, doWork might call some other class that relies on an XHR and asynchronously loading something from a datastore. Now our test looks something like:
it("should call the callback delegate", function() {
  var foo = new Foo();
  var myCallback = function() {};

  foo.doWork(myCallback);

  Tests.AJAX.Requests.fakeResponseFor(SOME_XHR_REQUEST); // Succeed pending XHR, return stub data
  Tests.Datastore.get.succeedAll(); // Succeed pending async datastore get requests
  Tests.Datastore.add.succeedAll(); // Succeed pending async datastore add requests

  expect(myCallback).wasCalled();
});
As you can see, the test complexity goes up very fast. Almost half the lines of code don't even have anything to do with the class we're testing!

Mocking could solve this problem much more cleanly by mocking whatever helper Foo uses to do the async logic:
it("should call the callback delegate", function() {
  var foo = new Foo();
  foo.helper = mock('HelperClass');
  var myCallback = function() {};

  foo.doWork(myCallback);

  expect(myCallback).wasCalled();
});
The argument against mocking in the tests above is that it could hide some poor design decisions, especially in our app when dealing with async behavior. For example, you might need a whenReady delegate to execute some code once async data is ready, and if you're mocking everything, that need might never become apparent. However, the tradeoff of not mocking is a lot of test code that detracts from what you are really trying to test.
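
For what it's worth, here's a rough sketch of the kind of API the unmocked async reality pushes you toward (whenReady is a hypothetical name, not something from our actual app):

var foo = new Foo();

// Callers can't just read foo.data right after construction; they have to
// wait for the async load to finish before touching it.
foo.whenReady(function() {
  console.log(foo.data);
});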

The option of creating a stub class that automatically succeeds or fails all async behavior was brought up, but the legitimate concern was that it could hide bigger problems. You could mistakenly write the code to work synchronously, and it would pass in the mocked world but not in reality.
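
Here's roughly what such a stub might look like (again, hypothetical names), along with the exact trap it invites:

// A stub datastore whose async get "completes" immediately with canned data
var AutoSucceedDatastore = {
  get: function(key, onSuccess) {
    onSuccess({ value: 'stub data' }); // calls back synchronously
  }
};

// Code written against the stub can get away with assuming the result is
// available on the very next line - true in the fake world, false in reality.
var result;
AutoSucceedDatastore.get('someKey', function(data) { result = data; });
console.log(result.value); // works against the stub; against the real async datastore, result is still undefined here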

Also, being true to TDD, we could set up doWork to return a value instead of using a callback. This code would likely succeed with auto-succeed stub classes, but would fail completely when run in the real world:
it("should get a value from #doWork", function() {
  var foo = new Foo();
  var result = foo.doWork(); // doWork will fail in an asynchronous setup

  expect(result).equals('myValue');
});
I could rename this post to "How do you test async behavior?", but I think my async example is just one of many situations where mocks and stubs either help or hurt, for the reasons above.

So, how deep do you test your code? Does the language affect your decision? Do you have any general rules of thumb for when to mock or when to do integration tests?

Tuesday, March 23, 2010

Keeping the knives sharp

For most of my life, my dad ran a successful small business doing hardwood floors. He usually had no more than 4 employees since he thought quality started to suffer beyond that point. Throughout the 25+ years he ran the business, he never did any advertising yet was often booked for months in advance. Word of mouth and repeat customers kept things going strong until he decided to sell the business to a longtime employee and retire.

I was talking to my parents this weekend, and told them I was starting one of these "weblog" things on the "internet". We were discussing the cement cutting board metaphor, and I realized that my dad's business was one of the only places where I've ever noticed a complete absence of cement cutting boards. The way we did things there made sense and was set up to make us do a better job and be more productive.

Talking it over with them, I realized that the biggest factor in preventing the cement cutting boards was that the person making the decisions had hands-on time and was a real expert in the tasks we were doing. I couldn't have imagined hearing something like "Go ahead and use that dull sandpaper for a few more days, the new shipment isn't in yet," since the boss knew firsthand how unproductive that would have been. The knives (and sandpaper) were kept sharp at all times.

I really can't emphasize this enough: The person making the decisions must be an expert in the domain.

Now, let's apply this to the fun, evolving, and often counterintuitive world that is software development. Many, if not most, of the things we do don't make sense to people who haven't spent time in the trenches. Think of these things from their perspective:

Unit tests..? You don't even deliver those!
Pair programming..? Two people doing one person's job?!
Behind schedule..? Add more people and work longer hours!

Programmers can't even fully understand those things unless they have experienced them first hand.

Even if the executives or project managers don't have expertise developing software, I would hope the business as a whole has people who do. Use those people! Embrace tools that make sense to the people using them, and don't be afraid to invest in your company. In software, these things often have huge returns on your investment. If a tool that costs $100, or even $1000, can prevent one bug from reaching production, it will have paid for itself many times over. Keep your developers' knives sharp!

To put it simply, if you can't explain how a good mock framework can cut down on test complexity and increase productivity, should you really be the one to decide whether it should be used?

Monday, March 22, 2010

Hello World! (a.k.a. Enter the Cutting Board)

I know, I know, that's probably the most overused title for a first post. But I'm a programmer... what else am I supposed to say?

I'm starting this blog as I move on to a new job and hopefully plenty of new learning and opportunities. I wanted a place where I could chronicle my thoughts for my own record, and, in the spirit of Atwood, I figured why not put it online in case something interesting comes out of it.

The name of the blog was inspired by recent events in my life, namely many months of using a cement cutting board. Now, this was a beautiful cutting board. Nice and shiny, it was a deep blue-gray to match the counters it sat upon. Being cement, the knives used on it didn't leave a scratch, and I'm sure it will retain its shiny surface for years to come.

However, there was a problem with the whole cement cutting board situation - the knives didn't leave a scratch. When your cutting board is harder than your knife blades, pretty soon you have a lot of cooking knives that might as well be giant butter knives. Blades that once cut thin slices off a ripe tomato could now only mangle and eventually smash whatever they were cutting in two.

You might wonder who would buy (or manufacture!) such a thing. After a decent bit of thought I came to only one conclusion - someone who never uses cutting boards. After a bit more thought I realized I was surrounded with cement cutting boards. Software designed by people who never used it. Processes created by people who weren't affected by them. The list could go on and on.

I realized that nearly every institution I had ever known was riddled with cement cutting boards. I began to think it was just how the world had to work due to some unseen saturation of illogical thought. It was as omnipresent as the laws of physics - objects fall when you drop them and you will have to do things that are counter-productive and make no sense.

Yet deep down I knew there must be some oasis of nice, soft, wooden cutting boards, or at least a group of people that despise the cement ones as much as I do. I think I've found such a group, so here begins my journey to see if the grass really is greener on the other side.

- Ian