These are some of my coding philosophies, in no particular order. Some of them are likely to be controversial, but I’m not suggesting that you adopt them or even that they would be useful for you.
This post is really for me.
Comment your code
Comments on public methods and properties should say what the method does. A developer who wants to use a method shouldn’t be required to read the code to figure out if it does what’s needed.
Inline comments should say why it was implemented this way. A developer reading the code is probably trying to change it. It is difficult to change something without first understanding the reasoning behind its current state.
Code should be self-documenting. Choose appropriate names for everything and divide it up so it’s readable. However, self-documenting code isn’t an excuse to omit comments. Code tells you how, not why, no matter how descriptive you make your variable names.
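To make the distinction concrete, here's a minimal sketch (the `retryDelayMs` function and its numbers are entirely hypothetical): the doc comment says *what* the method does, the inline comment says *why* it does it that way.

```typescript
/**
 * Returns the delay in milliseconds to wait before the given retry attempt.
 * ("What" comment: a caller can use this without reading the body.)
 */
function retryDelayMs(attempt: number): number {
  // Why: the upstream service rate-limits bursts, so we back off
  // exponentially, but cap at 30s so the user isn't left waiting forever.
  return Math.min(30_000, 100 * 2 ** attempt);
}
```

Note that the names alone tell you *how* to call it, but only the inline comment explains the reasoning you'd need before changing the cap.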
I’m not a fan of automated comment creation tools like GhostDoc. If a method signature is well-crafted enough that a tool like GhostDoc can generate an accurate comment, the comment must be obvious. What benefit does an obvious comment provide? Why include obvious comments in your code? Jeff Atwood calls this “undocumentation”.
Process is not a substitute for competent programmers
Joel Spolsky refers to this in his essay about the McDonald’s Methodology. You can hire someone with no experience, give him a 10-step guide to making a burger, and he’ll churn out endless amounts of near-food with very little effort. However, that same worker will never produce a 5-star meal, no matter how complex the instruction book.
In the world of software, the Process Guys believe that they can create better software not by hiring better programmers, but by carefully documenting, describing, categorising and monitoring every step of the development process.
Do they always produce 5-star software? Nope.
Do they still create software burgers? Yep.
Standup meetings, backlogs, retrospectives and all of the other pomp and ceremony in the Methodology du jour won’t enable a mediocre developer to produce a quality piece of software.
Quality isn’t an afterthought
This is one of my favourite Steve Jobs quotes:
A great carpenter isn’t going to use lousy wood for the back of a cabinet, even though nobody’s going to see it.
If the UI is polished and beautiful but the code is badly designed, unstable, spaghettified crap, it’s not a quality product. It’s a shiny turd.
Quality should pervade the product. The entire system should be well designed, even if no-one but the developers will ever see the code or appreciate the tidy architecture. Why? Because software maintenance represents a significant cost, and the only way to reduce that is by creating high-quality software in the first place.
Remain detached from the tools you use
Here’s a bug report for TeamCity:
Support for TFS branches
like for Git and Mercurial in version 7.1, but for TFS
Mercurial and Git both support lightweight branches, and TeamCity can now automatically build those branches without admins needing to create new project definitions. TFS does not support lightweight branching.
The bug report is like asking Ford to retrofit air conditioning and alloy wheels to a horse and carriage. If you need this feature, upgrade to a modern source control system. Microsoft are trying to make this easier by adding Git support to Visual Studio.
Abandon tools that no longer serve their purpose or that have been superseded by something better.
Learning a new programming language will make you a better programmer
Polyglots seem to be the most capable programmers. Do programmers who learn multiple languages become more highly skilled as a result? Or do highly skilled programmers inevitably learn multiple languages?
In either case, learning more languages can only be a good thing. I’d expect a well-rounded programmer to know at least one systems programming language, a scripting language and a language for web development (client or server side; preferably both). I should probably learn a functional language.
Would you hire a carpenter whose only tool was a hammer?
Choose the best tool for the job, even if that means learning a new tool
If you are handed a new platform to start developing for and your first instinct is to try to find a C# compiler for it, because C# is what you use now, you are doing it wrong. A new platform is an opportunity for you to learn something new: a new language, a new toolchain, a new IDE; you can learn new patterns, new approaches, and come out of the experience a better programmer.
I don’t know who originally said this, but it’s appropriate:
Some people work for 5 years and gain 5 years of experience. Some people work for 5 years and gain 1 year of experience 5 times.
Reaching for the same old toolset isn’t going to teach you anything new. Don’t complacently squander a rare opportunity for advancement.
This is particularly relevant in the era of mobile development. Do you learn to code for Android and iOS and create great apps for both, or do you go the “web app” route – as Facebook and LinkedIn tried to do – and give everyone the same second-rate experience?
Learn to use your tools as they were designed to be used
In a similar vein, don’t try to use tools for purposes for which they are not suited. You can bang a screw in with a hammer, and you can use SharePoint as a CMS. Neither was designed for the purpose and there are far more appropriate tools out there.
“That’s how we’ve always done it” is never an acceptable answer
What this answer really means is:
- We’re lazy
- We’re afraid
- We don’t understand
- We don’t care
Your API should be beautiful
Your API is the UI that other developers will see. Don’t produce garbage like libxml’s htmlEncodeEntities function, which for optimal usage requires you to precognitively know its output before you call it.
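Here's the contrast in miniature (a hypothetical sketch in TypeScript, not libxml's actual signatures): the first function mimics the caller-allocated-buffer style, where you must size the output before you know what the output is; the second simply owns its result.

```typescript
// Awkward (in the spirit of the C API; names and shapes are hypothetical):
// the caller must guess the output size before calling.
function htmlEncodeInto(input: string, out: Uint8Array): number {
  const bytes = new TextEncoder().encode(
    input.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
  );
  if (bytes.length > out.length) return -1; // guessed too small: try again
  out.set(bytes);
  return bytes.length; // number of bytes written
}

// Friendlier: the function owns its output; no precognition required.
function htmlEncode(input: string): string {
  return input.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}
```

The buffer-passing style has its place in C, where allocation is the caller's business; exposed in a high-level API, it just forces every consumer to write a retry loop.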
Create the best solution you can using the information you currently have
The Agile enthusiasts I’ve known have a tendency to believe that they can implement hacky, half-baked solutions for everything because, y’know, iterative. If it’s important it’ll get fixed later.
The process they end up following is what Spolsky calls the “infinite defects methodology”:
The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote “return 12;” and waited for the bug report to come in about how his function is not always correct. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as “infinite defects methodology”.
Iterative development is intended to solve two problems:
- Users who incessantly change their minds about what they want;
- Architecture astronauts who build vast systems just in case.
Short development cycles mean that users who change their minds have a less detrimental impact on the product. Similarly, tight deadlines mean that the guy who wants to build a towering behemoth of architectural indirection simply doesn’t have time to do so.
Unfortunately, it has a third effect:
- Developers ignore the majority of information they have about the problem at hand in order to create a solution that meets the acceptance criteria – and nothing more – as quickly as possible.
Even though the developer may know that the solution he is building will need to be re-used throughout the entire system, he will implement a one-off fix that just addresses the acceptance criteria for the current development iteration.
In the best-case scenario, this minimal solution gets copied-and-pasted throughout the app – perhaps with minor usage-specific tweaks – until it becomes a major problem. At that point all of the work done so far gets thrown away, at considerable cost and effort, and more time is wasted creating a more appropriate solution. In the more likely scenario, the copy-and-paste solution is copied-and-pasted more and more, and the system ends up as a Big Ball of Mud. It’s very difficult to be taken seriously when you say, “Remember all that work we did in the last 4 iterations? We need to throw all of that away and start again even though the requirements haven’t changed.”
Consider this situation. A developer is asked to add a drop-down list of books to his web application. He knows right now that the drop-down list UI widget will be re-used dozens of times in the application. How does he approach the problem? He looks at the acceptance criteria for this iteration, which says “create a drop-down list of books”. That is precisely what he creates: a one-off, non-reusable drop-down list widget that contains a list of books. In the next iteration he has to create a drop-down list of authors, so he copies-and-pastes his book code, replaces the hard-coded book list with an author list, and he’s done. Acceptance criteria met, and the Big Ball of Mud is well on its way.
If the developer had instead looked beyond the acceptance criteria and used all of the knowledge available to him at the time – that the drop-down list UI widget would be re-used throughout the app – he would have made it a re-usable component.
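A minimal sketch of what that reusable component might look like (the names and the string-based rendering are my own, purely illustrative):

```typescript
// A generic drop-down model instead of a one-off "book list".
interface Option {
  value: string;
  label: string;
}

// Renders any list of options as a <select>; knows nothing about books.
function renderDropDown(name: string, options: Option[]): string {
  const items = options
    .map(o => `<option value="${o.value}">${o.label}</option>`)
    .join("");
  return `<select name="${name}">${items}</select>`;
}

// The same component serves books in this iteration...
const books = renderDropDown("book", [{ value: "1", label: "Dune" }]);
// ...and authors in the next, with no copy-and-paste.
const authors = renderDropDown("author", [{ value: "7", label: "Herbert" }]);
```

The generic version costs barely more than the hard-coded one in the first iteration, and every subsequent drop-down comes free.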