Monday, March 30, 2009

My Joel Test

I used to give client teams the Joel Test. It was good for the most part, but it contains some pretty basic questions that almost every team passed. I retained a few from the Joel Test and added some questions that I found essential for forming a productive team.

Retained these from the Joel Test:
1. Can you make a build in one step?
2. Do you fix bugs before writing new code?
3. Do you use the best tools money can buy?
4. Do new candidates write code during their interview?

Added these to my list:
5. Do you use Scrum/XP?
6. Does your team do its own releases? (or is there a central release team that does that?)
7. Can your team start/stop your prod servers and batch jobs (of course with proper audits)?
8. Do you insist on measuring round-trip times before and after a performance enhancement is made?
9. Do you have real-time alerts comparing inputs and outputs of your system?
10. In enterprise integration scenarios, do you automate the validation of inputs from other teams?
11. Do developers get to interact frequently with the end users of the system?
12. Does your team get some choice in the frameworks they use or does your company mandate a standard one?
13. Do your developers participate/give tech talks? Do you sponsor if needed?

A word on Joel's Tests:
Joel's test has 12 questions. I have retained four. The rest got the axe because I think they are either too common (such as using source control) or I have yet to see them in action (like hallway usability studies).

5. Do you use Scrum/XP?
Without going into the pros and cons of Scrum/XP, the only point I would like to add here is that both of them tend to keep the end user involved in product development. This cuts waste and gives direction.

6. Does your team do its own releases? (or is there a central release team that does that?)
In general, teams that can release and upgrade their own applications tend to be more productive, especially during release cycles. I've seen enough teams that were forced to go through a central release team. These poor teams often email the upgrade commands to the release team and twiddle their thumbs. When the release breaks (of course that happens), the team is forced to 'remote debug' through the release team, which usually has neither the time nor the expertise to deal with app-specific issues. This brings morale down and makes upgrades a major pain for everyone on the team. Hence, they tend to lump upgrades together and avoid frequent releases altogether.

7. Can your team start/stop your prod servers and batch jobs (of course with proper audits)?
Your team designed the app, architected it, and built it. They fix the bugs. If you trust them with the responsibility of keeping the server up, they can handle it. If they need to bounce the server or force a failed job to kick off again, they have to be able to do it. If they cannot do these tasks through proper channels, then during production outages they end up doing them manually anyway, defeating the entire purpose. I do agree with the need for proper audits, but I strongly believe that comes down to the manager doing enough checks.

8. Do you insist on measuring round-trip times before and after a performance enhancement is made?
Nothing speaks like numbers in performance situations. Performance is a non-functional feature. You can see the effectiveness of a fix only if you measure the before and after effects.
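As a minimal sketch of what "measure before and after" can look like in practice (the `process_order` workload is hypothetical, standing in for whatever request you are tuning):

```python
import statistics
import time

def measure_round_trip(fn, runs=100):
    """Time fn over several runs and report the median, so one
    noisy run doesn't skew the before/after comparison."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical workload standing in for the real round trip.
def process_order():
    sum(range(10_000))

before = measure_round_trip(process_order)
# ... apply the performance enhancement here, then re-measure ...
after = measure_round_trip(process_order)
print(f"median round trip: {before * 1000:.3f} ms -> {after * 1000:.3f} ms")
```

Using the median of many runs rather than a single stopwatch reading keeps one GC pause or cache miss from deciding the argument.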

9. Do you have real-time alerts comparing inputs and outputs of your system?
Nothing beats having checks that constantly compare the actual output with the expected output for the real-time input. When the two go out of sync, an alarm is raised. This has the benefit of catching the error as soon as it occurs (fail fast), which also means you catch it as soon as the user encounters it. This ensures a very quick turnaround time.
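One way to sketch such a check, assuming a hypothetical order system where the expected total can be recomputed independently from the input (the record layout and the log-based "alarm" are illustrative, not from the original post):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("reconciler")

def expected_total(order):
    # Independently recompute what the system should have produced.
    return sum(line["qty"] * line["price"] for line in order["lines"])

def check_output(order, system_total, tolerance=0.01):
    """Compare the system's output against the independently computed
    expectation; raise an alert (here, a log warning) on mismatch."""
    want = expected_total(order)
    if abs(system_total - want) > tolerance:
        log.warning("order %s out of sync: got %s, expected %s",
                    order["id"], system_total, want)
        return False
    return True

order = {"id": 42, "lines": [{"qty": 2, "price": 9.99}]}
check_output(order, system_total=19.98)   # in sync, no alarm
check_output(order, system_total=21.50)   # mismatch, alarm raised
```

In a real deployment the warning would feed a pager or dashboard instead of a log line, but the shape is the same: a cheap independent computation running alongside the system it watches.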

10. In enterprise integration scenarios, do you automate the validation of inputs from other teams?
Suppose you rely on a service from another team. Automating validation of the values returned by that service (sometimes proactively) ensures that you catch data-setup errors in the systems you depend on as well.
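A minimal sketch of what automated validation of an upstream team's data might look like, assuming a hypothetical customer service whose records carry an id, a country, and a credit limit (all field names are made up for illustration):

```python
def validate_customer(payload):
    """Validate a record returned by the (hypothetical) upstream
    customer service before our system consumes it."""
    errors = []
    if not isinstance(payload.get("id"), int) or payload["id"] <= 0:
        errors.append("id must be a positive integer")
    if not payload.get("country"):
        errors.append("country is missing")
    if payload.get("credit_limit", 0) < 0:
        errors.append("credit_limit must be non-negative")
    return errors

good = {"id": 7, "country": "US", "credit_limit": 1000}
bad = {"id": -1, "credit_limit": -5}
assert validate_customer(good) == []
print(validate_customer(bad))  # lists every problem, not just the first
```

Running checks like these on a schedule, rather than only at consumption time, is what makes the validation proactive: you find the other team's bad data before your users do.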

11. Do developers get to interact frequently with the end users of the system?
Above all, this makes your developers take responsibility for their code and bugs. No one likes appearing sloppy. If developers can see that their product is used by end users who talk to them frequently, they put their best code forward.

12. Does your team get some choice in the frameworks they use or does your company mandate a standard one?
Too many companies force their developers to use a single framework/API for a task. This decision is generally made by someone 'up there' who believes too much in consistency. While I agree that too much choice is bad, a bit of choice is generally good. Using the right tool for the job makes all the difference.

13. Do your developers participate/give tech talks? Do you sponsor if needed?
Tough times, I know. But the truly committed developer usually makes time to attend (or, better, give) talks. And most of these talks pay for the speaker's expenses, which makes it a no-brainer.

1 comment:

Vakranas said...

Insightful post. As far as source control is concerned, there are too many one-man or two-man teams where source control is not used. I think source control should be mandatory even if there is only one developer. Freedom of choice in frameworks etc. depends on the world you live in. You can develop best-of-breed software with a great framework, but certain corporations look for easily replaceable developers. That is why they go for mainstream technologies instead of niche ones, even when the niche framework is practically tailor-made for their solution.