In Defence of “its”

The case for meaningful prefixes

Before we start, let’s get any emotive straw-man arguments right out of the way – this is not about Hungarian Notation. Many of us have had bad experiences – more than one programmer I know was savaged by Hungarian Notation as a child, and still bears the scars. Hungarian Notation is obscure, unreadable, and potentially disastrous if you get it wrong. Anyone playing the Hungarian Notation card in this debate will be considered to have fallen foul of Godwin’s Law.

Instead, I’m talking about the use of “its” as a prefix for class member variables.

Let’s look at the basics. Code must be easy to read. Code should stand on its own, and for itself. And variable names should be meaningful.

But that, of course, means more than just a name. “Elizabeth” is just a name. There are lots of Elizabeths – I know half a dozen of them. They’re all similar in some ways and different in others; a name is just a label, useful in some contexts, useless in others. It’s what we do with it that counts.

The case against

First let’s look at the case against prefixes. Robert C Martin, in Clean Code, says:

“You also don’t need to prefix member variables with m_ anymore. Your classes and functions should be small enough that you don’t need them. And you should be using an editing environment that highlights or colorizes members to make them distinct.

Besides, people quickly learn to ignore the prefix (or suffix) to see the meaningful part of the name. The more we read the code, the less we see the prefixes. Eventually the prefixes become unseen clutter and a marker of older code.”

I’ll agree that “m_” is a useless prefix; it doesn’t mean anything.

I also agree that, where possible, we should be using colours and highlighting in our IDEs to make code easier to understand. But to rely on it? This undermines one of Martin’s underlying points in his book – that a snippet of code should be fully understandable _on its own_. To me this means independent of IDE settings, of formatting, and of a developer’s personal naming foibles.

Of course methods should be short – but I want to be able to read every line of code and understand its impact. I shouldn’t have to scan the parameter list and local declarations just to work out which names belong to the class before I can read a single line.

Actual Code

Consider this snippet of code currently leering at me, three inches above this blog posting:

for( currentTimepoint = timepointRange.GetBegin(); currentTimepoint != timepointRange.GetEnd(); currentTimepoint++ )
{
	if( GetObjectList().HasObjectsInTimepoint( currentTimepoint ) )
	{

You probably wouldn’t know that currentTimepoint is a member of this class. (Maybe I can see it’s a different colour, but maybe I can’t – and you certainly can’t now it’s been copied into a blog post.) But the fact that we’re iterating over a _member_ and not a _local_ variable is critical to understanding this if statement, and the line that follows it.

Even worse, the same class contains a member variable named simply “task”. It’s bad enough that my IDE formats this name as a keyword. No; what I actually want to do is find out where, in this class, the member is used. It’s impossible for me to search the file for instances of “task” without turning up three comments, four other class names, and two more variables with slightly different names. I want to assess the impact of a change to “task”, but it’s become a dangerous … well, task.

Martin also states that we fail to see meaningless prefixes; I agree, which is why I promote the use of “its” rather than “m_”. After all, “its” has a meaning. itsHat; itsTask; itsContinuingProblemWithAntsInTheOffice. These are easy to relate to the current class, and their scope is obvious at a glance.

It’s been argued that this leads to clumsy names – for example, itsIsExpandable for a boolean flag. OK, it’s not Shakespeare – but it’s clear, concise, and delivers instantly the meaning required.
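
To make this concrete, here’s a minimal sketch of the convention – the class and its members are invented for illustration, not lifted from our code base:

class TimepointScanner
{
public:
	void Advance()
	{
		// Reads as "the scanner's current timepoint" - unmistakably class
		// state, not a local loop counter.
		itsCurrentTimepoint++;

		if( itsCurrentTimepoint >= itsTimepointCount )
			itsIsExpandable = false;
	}

private:
	int  itsCurrentTimepoint;   // where the scan has got to - survives between calls
	int  itsTimepointCount;     // how many timepoints there are in total
	bool itsIsExpandable;       // clumsy, perhaps, but its scope is never in doubt
};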

Smelling Sweet

What’s in a name? I used to work in Natural Language AI, and a string of letters means nothing without context. A meaningful prefix provides that context to an experienced programmer who knows what baggage comes with Object Oriented programming.

Here’s a sentence: “I’m having tea with Elizabeth and her family”. Here’s another: “I’m having tea with Queen Elizabeth and her family”. The second is totally different to the first; the added prefix changes the meaning and content of all the words around it.

In the same way, currentTimepoint++ is very different to itsCurrentTimepoint++. By using a simple prefix, we can keep our code readable, useable, and maintainable to such a degree that the cost of three extra key-presses pales by comparison. “its” isn’t just a prefix; it’s a line of defence, and all that stands between anarchic barbarism, and gleaming, lofty civilisation.


22 Sweaty Men

Reflections on the Daily Scrum

(This blog post is sponsored by Scotland’s awful defeat at the hands of our Celtic cousins. Try not to drop it next weekend, eh?)

As part of our move towards Agile Development, our team has been holding daily “Scrum” meetings. (If you’re unfamiliar with Scrum, Mountain Goat Software has a good description of the process here.)

The purpose of a Scrum is usually stated to be “synchronisation”. This means checking we are all working towards a common goal – the work we’ve committed to completing in the current sprint – and that, when the sprint ends, all our parts will come together to form a complete product.

In practice, though, I find the Scrum to be more helpful for other reasons:

– Motivation. Each morning, I can mentally run through what I committed to the day before. I’ve got a few hours then to take stock, and if possible, rattle through what still needs to be done.

– Maintaining a sense of progress – both personal, and in the team. I have surprised myself by finding that moving a post-it note from one column to another can give me a real sense of achievement. I’ve been tracking our burn-down chart in story points; a lot of reward comes from making a little orange cross on a whiteboard. Honest.

– Help & advice – both in terms of asking for help when I’m stuck, and being able to offer advice when someone is working on something I know a bit about.

I sometimes feel our Scrums are in danger of becoming a bit “smelly”. When you do something every day, it’s easy to just turn up, go through the motions, and sit back down again, having achieved nothing but stretching your legs. Attendance is mandatory, but only because we’ve decided as a group to do this – if we lose track of why we’re there, it’ll become a stale bean-counting exercise.

Crouch

Preparation is key. I’ve been guilty lately of only visiting the Scrum board during the daily Scrum. I turn up, hunt for the right post-it note, then scribble frantically on a new one. That’s not right – the Scrum itself should be about reporting progress and committing to tasks, not an exercise in bad handwriting.

The Scrum board should reflect reality at all times, to keep the Scrum meeting short and tidy.

Touch

The Scrum meeting should be about communication. The point is for everyone in the team to hear what everyone else is doing; for them to listen; and for them to assess the impact of their own work on their colleagues’.

Therefore, we shouldn’t mumble at the board! We should be talking not to the Scrum Master but to the whole group, articulating both the problem and the solution in a language we can all understand.

Hold

Maintaining interest in a daily meeting is tricky. We try to keep our task allocations “vertical” – anyone is capable of working on anything. Sometimes, though, a developer will be working on a specific task for several days running. It’s too easy, then, to tune out that developer. We need to take an active interest in each other’s work – after all, we might be called upon to fix the bugs in it tomorrow.

Maybe, though, this is a symptom of a wider problem in the way we’re allocating tasks to a sprint.

Engage

All of this is worthless if, as a group, we find the Scrum a pointless task. Particularly telling is how we react when the Scrum Master isn’t in the office – do we then see the Scrum as a bit pointless, if we’re not reporting to anyone? I hope not. If we’re really committed – as a team, not just as a band of lackeys – we should all be taking equal responsibility for making it happen, on time and on message. We need to be prepared to invest in the Scrum, and maintain that investment indefinitely, with the enthusiasm of 22 huge, sweaty, mud-streaked men.

But none of this has really been about “synchronising”. Are we implementing our sprints correctly? Are we spending two weeks working, as a team, towards a common goal? Or are we just packaging up the work we’d do as individuals and calling it a sprint backlog? We’ve made massive progress, but I think we need to think about that question as we keep refining our Agile workflow.


Stumbling in the Darkumentation

What should we do about in-code documentation?

Working on a project like ours, some of our code is, inevitably, pretty complicated. Some of that complexity is unavoidable, and some could be simplified – the rest, to be brutally honest, is a painful hangover from “iffy” design decisions of the past.

Where, and how, should we explain – on a technical, class-by-class level – how all our code works?

I don’t mean individual functions and methods. I’ve come to grudgingly agree with Robert C. Martin’s Clean Code – functions should speak for themselves, and the code itself is the best documentation. But when we design classes and architectures within the code, we make decisions that should impact how they’re developed in the future.

I’m currently helping rework a hefty part of the code base to do with image measurements. We’re working on a large batch of classes with a lot of inheritance – sometimes multiple inheritance. A key feature is that these classes should be independent of the state of the user interface. We’ve had many discussions at the whiteboard, with coloured pens and elaborate hand-waving, designing around this divide. Now we need to capture these design decisions. Otherwise, how could we blame a future developer for sneaking a pointer or reference to the UI into some sub-class?

Let’s write it down

The easiest place to put this information is in the class headers. It’s where we think about what methods to include and exclude. Our code base is certainly littered with this sort of documentation – “This class is intended to be …”
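
For example, a fuller version of that sort of comment might look something like this (MeasurementBase is a made-up name, not one of our real classes):

// MeasurementBase.h
//
// Base class for all image measurement types.
//
// DESIGN NOTE: measurement classes must remain independent of the state of
// the user interface. Do not add pointers or references to UI classes here
// or in any sub-class; pass the values you need in as method parameters.
class MeasurementBase
{
	// ...
};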

It’s not necessarily a bad solution. By putting warnings about the behaviour of our threading implementation in XThread.h, we can be sure it’ll be seen by anyone trying to use or alter the class. But as soon as we derive a class, the information becomes as good as invisible.

Well, we have a good place for documentation – our local wiki. We put functional specifications there, and attempt to keep them as living documents. Code documentation could be edited by everyone, kept up to date, and be an excellent technical resource, not only for checking whilst using a class, but as a learning resource covering our whole code base. Watch as it grows with the experience of the team, capturing and refining the best ideas throughout the many programmer-decades that are lovingly poured into it.

I’m detecting a drizzle of scepticism.

Yes, I suspect you are. Why? Because we can’t even keep in-code comments up to date. A comment is right there, under our cursor; yet we blithely copy and paste, refactor around it, and delete old code, until the comment means nothing at all. (I don’t intend to denigrate the team; I doubt there’s a programmer alive who doesn’t do this. It’s inevitable, and it’s why Clean Code argues for keeping comments to a bare minimum in the first place.)

Documentation on the wiki would be a whole two-clicks-and-a-search-box away from the window we’re coding in. Most of the documentation I’ve seen has quickly become old and irrelevant when not constantly maintained (and our current project is very much in flux; since I started writing this blog post, we’ve handwaved at least three new design decisions into existence).

Is it possible to maintain code documentation without heavy policing? Anything written under compulsion will become as dry and dangerous as the Raraku Desert. Is it possible to police the maintenance of documentation, without generating resentment amongst the team for its very presence?

Maybe I’m barking up an imaginary tree. Striving towards simple design, pruning unnecessary complexity, and carefully drumming common coding values into the team, might just prove far simpler than the process of actually writing something down.


A Likely Story

Thoughts on estimating an agile backlog using story points

When we embark on a Sprint we need to know just what it is we’re committing to. If we can’t estimate how much work is in the sprint, we can’t pretend that we have a realistic chance of meeting, let alone stretching, our goals.

At first we estimated our workload in “hours” – a sort of mythical, idealised unit of work. But this concept carries all sorts of baggage, and so we’ve tried playing with the idea of “story points”.

These are intended to be an abstraction of effort – something that can be used as a guide to plan work for a sprint. A team should slowly negotiate a feeling of how to grade tasks in story points, and find a productive “velocity” of story points covered by sprints.
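
For example (with entirely made-up numbers): if the team gets through 24, 30 and 27 points in three consecutive sprints, it can reasonably plan the next one around a velocity of roughly 27 points – without anyone ever having to mention hours.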

To be honest, I’ve had trouble coming to grips with story points, and so, I think, have others in the team. If story points aren’t hours, how can we possibly estimate them for a task? And if they are, why not use hours?

Opinion, of course, is divided. Peter Stevens argues that they relate to the complexity of user stories, not the effort a task requires; thus their workings can remain a black box to the Chickens in the room. Mike Cohn, of Mountain Goat Software, however, argues exactly the opposite, even going so far as to say that they can measurably relate to hours spent on a task.

I don’t like either of these approaches. The latter brings all the baggage of hours, along with all the ambiguity; the former relates so little to what we must actually commit to that it renders the process meaningless.

When we first started using story points, we decided that, as an initial guide, we should think of one story point equalling roughly two hours’ work. I think this, in retrospect, was a mistake. Estimating in story points was reduced to estimating in hours, then halving the number. When we explicitly tried to break the relationship, people just did the sums quietly in their heads (I know I did, and I could see my team-mates’ lips moving as they did too. Don’t deny it.)

But without any guide, we’re trying to relate story points (an unknown quantity) to velocity (another unknown). That’s two unknowns and only one data point (our actual productivity) – mathematically, the system is underdetermined. All we can hope for is to stumble blindly around the solution space, hoping to fall into a local-minimum pit.

So should we even bother?

Yes, we should. The iterations might be slow, but we need an estimate of effort if we’re going to take sprints seriously. We might not understand story points, but we all have experience of previous tasks. We instinctively know what’s “easy” – a single piece of UI, or a new but trivial measurement. These should be the one-pointers – and since those are the estimates we’re most certain of, we should strive to break every task down into the smallest chunks possible. Bringing a forty-pointer into a sprint represents a huge level of uncertainty, and that’s the most dangerous thing in a sprint – we should consider it a failure of planning.

What else should we do?

– Plough on, and hope to find a mutual team understanding of story points

– Avoid, at all costs, secretly translating hours into story points in our heads. That way madness lies.

– Relate each task to previous tasks we’ve known and loved, and draw story point estimates from them

– Strive to break everything down into small pointers. Ones, twos, threes, fives at a maximum.

The iterations are slow – two or four weeks between each step – but until we find a better idea there’s no shortcuts to this. The alternative is either the uncertainty of a committee’s bad planning, or the deluded certainty brought by planning the future in man-hours.


How Long Is A Piece Of Sprint?

Reflections on the Agile process

Our Agile development process here in the office has evolved slowly over the last few years. When we first decided to try “Sprints”, we couldn’t imagine implementing, testing, bugfixing and retesting in anything shorter than a month. Since then we’ve regularly run four-week sprints, punctuated with pizza-soaked sprint reviews.

To manage all that in a month, we’ve developed our own coping strategies. Break the tasks down into manageable chunks; make no task last longer than a day; present work for testing early, and often; risky tasks first; and constantly monitor (and cull) the sprint backlog. I won’t say we’re good at it, but we’re getting better.

Christmas presence

Our product was scheduled for release in mid-January. Early in December, we considered our options – we were mostly feature complete, but had a reasonable list of bugs to fix. Also, most of the team would be off for one or two weeks over Christmas. A four-week sprint would be interrupted and fragmentary, so we planned for two-week sprints instead.

Did it work? I think so. Because of the nature of the workload, there was a steady dribble of jobs being completed, and tested.

One thing we’ve struggled with in the past has been breaking tasks down into small chunks before the sprint begins. We’ve considered a new feature as some monolithic task, and lumped it into the sprint backlog, promising “the first task will be to break it down into smaller tasks”.

New year, new danger

We’re now into a new four-week sprint, and we have three or four such large tasks. Breaking them down has proved less painful than in previous sprints, but still, our burn-down charts look rubbish.

What made the recent shorter sprints a success? Smaller tasks. So, rather than running a four-week sprint, we could have spent two weeks concentrating on bugs (there’s always bugs), and specifications. With these complete, meetings done and Product Managers made happy, we could have committed to the broken-down workload from a position of knowledge – or at least, less ignorance.

And that needn’t apply to the whole team. Running shorter sprints would mean we could stagger the effort. One or two people could be tackling “bugs and specs”, while others implemented the features they’d specced out last sprint. With a shorter cycle, we can commit to broken-down tasks in quantities we can handle.

Not that I’m arguing for change. At every step in the move towards Agile development I’ve been the biggest sceptic on the team, and such I will remain. But I decided early on in this process to test my scepticism empirically, and the only way to find out is to try.
(Plus, shorter sprints means more pizza, and there’s no such thing as too much pepperoni.)


Book: Clean Code, by Robert C Martin (1)

This book has recently been read and highly recommended by two co-workers, to the extent that there’s now a copy sitting on every desk in the team. Having had many discussions about the principles of the book before actually laying eyes on it, I’m keen to find out what it actually has to say …

(The preamble includes a page of discussion on the cover image, being a composite visible & infra-red image of the Sombrero Galaxy. The striking appearance of the galaxy derives from the immense destructive power of the supermassive black hole at its centre. I shall refrain from conjuring any metaphorical similarity between the devastating power wrought by the over-compression of information & mass, and the concept of reducing the complexity of a tangled code-base. At least until I’ve read more of the book.)

In Chapter 1, there is a discussion as to what Clean Code actually is. The author reproduces the opinions of a few bigwigs in the software engineering community, such as Ron Jeffries & Ward Cunningham, and teases out some kind of consensus – mostly revolving around readability, attention to detail, and extensibility, through which he justifies the effort required to write & maintain clean code.

He doesn’t shy away from identifying this as a big task, but threatens those who disagree with a lifetime of painful maintenance and even stalled careers.

“Good code matters because we’ve had to deal for so long with its lack”, which seems a bit harsh, but it reflects the familiar feeling of reading code I’ve written myself and sighing. Here in Coventry we can’t escape our ever-expanding code base (some good, some bad), so I think we have a keener understanding of the problem than we do of the solution.

Chapter 2 covers “Meaningful names”. “If a name requires a comment, then the name does not reveal its intent”. The main principle is that code should be readable without having to refer to any other declarations or methods.
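
A trivial illustration, in the spirit of the chapter rather than copied from it:

int d;                   // elapsed time, in days - the comment is doing the name's job
int elapsedTimeInDays;   // no comment needed; the declaration explains itself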

Other rules include avoiding names which vary only in small ways; this can be difficult, however, when the following classes all require slightly different implementations:

XXYClass

XXZClass

XYZClass

XXYZClass

… etc!

He also mentions the length of variable names, and suggests that the length should generally be proportional to its scope, with short names restricted to short loops and methods. I’ll take this to heart, and further – the short loops are the worst ones, IMHO! A good IDE will autocomplete most variable names, and with decades of practice, my typing speed is, if I may say, pretty fast. Compared to the cost of dirty code, keypresses are cheap.
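
A quick sketch of the principle, with invented names:

// Short scope, short name: "i" is fine when its entire life is three lines long.
void ClearHistogram( int* histogram, int binCount )
{
	for( int i = 0; i < binCount; i++ )
		histogram[ i ] = 0;
}

// Wide scope, longer name: a member is read a long way from where it's declared,
// so it earns a name that explains itself.
class TimepointView
{
private:
	int itsMaximumVisibleTimepointCount;
};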


Refactoring, Chapter 2

There are a number of neat little truisms scattered through the book, for example, when deciding whether to move a method between classes:

“If you must use a switch statement, it should be on your own data, not someone else’s”

I thought, “well yes, of course,” but then a dozen recent counter-examples popped into my head. It’s easier to agree with a statement like that than it is to implement it every day. On a related note, another very useful piece of advice is not to dwell on past mistakes, but to see them as just another opportunity to refactor your code into a better place!
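
A toy sketch of what I take the switch advice to mean – my own invented example, not the book’s video rental code:

enum class TaxCategory { Standard, Reduced, Exempt };

class LineItem
{
public:
	LineItem( double net, TaxCategory category ) : itsNet( net ), itsCategory( category ) {}

	// After the refactoring: the switch lives here, acting on LineItem's own data.
	double Tax() const
	{
		switch( itsCategory )
		{
		case TaxCategory::Standard: return itsNet * 0.20;
		case TaxCategory::Reduced:  return itsNet * 0.05;
		case TaxCategory::Exempt:   return 0.0;
		}
		return 0.0;
	}

private:
	double      itsNet;
	TaxCategory itsCategory;
};

// Before, the same switch sat in an Invoice class and interrogated item.Category() -
// somebody else's data, and the first place to break when a new category appears.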

By the end of Chapter 1 the author has covered a range of refactorings, touching on adding inheritance and pure virtual methods. These examples always seem somewhat crowbarred in – it’s hard to see why anyone would spend time refactoring the video rental program he uses – but all programming examples suffer from this, and the principles are laid out for the rest of the book.

Chapter 2 examines the principles of refactoring, and discusses why it should be done; it’s somewhat evangelical in tone, but that’s maybe justified by the experience of the author. He maintains a self-deprecating and ironic tone throughout, which led me to assume that he’s a Brit! (Wikipedia proved me right. 😉)

An example might be the Rule of Three:

“The first time we do something, we do it. The second time, we wince at the duplication but do it anyway. The third time, we refactor.”

The author’s comfort with loop duplication comes from the idea that we should “refactor now, optimise later”. I must profess a certain cynicism about this. Optimisation frequently makes code more convoluted, and an endless cycle of refactoring and reoptimisation looms. Skipping forward a few pages, the author doesn’t spend much time on the principles of code optimisation. Again, he assumes a comprehensive raft of tests, and the ability to weave profiling tools (or at least timing information) into these tests.

For a massive legacy system such as ours, I wonder how risky the process of refactoring would be, without first devoting significant time to creating the tests required. Our primary application has been in development for 13 years – but we must of course invest in the next 13 years of development too.

 
