How to write rock-solid code

There are 4 simple things I’ve learned over the years (and try to do…) to write solid code:

  • Write modular code
  • Document as you write
  • Have each code chunk complete its task or throw an exception
  • Write unit tests as you debug

These are compiled here largely for my own future reference, but also because I was just discussing them with a friend (also a software developer).  Now to the details, which include a really handy technique I’ve picked up for manual testing.

  • Write modular code.
    • Each piece of code should be short and have a well-defined purpose.  It takes specific inputs (and doesn’t rely on global variables unless appropriate), performs a specific function, and returns specific values.  If you have more than a pageful of code in one method, for example, you need to split it into more methods.
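
For instance, a page-long “generate report” routine can be split into small functions, each with specific inputs and a specific return value.  Here’s a minimal Python sketch (all the names and the CSV columns are hypothetical):

    import csv

    def load_orders(path):
        """Read orders from a CSV file and return them as a list of dicts."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def total_by_customer(orders):
        """Sum order amounts per customer and return a dict of totals."""
        totals = {}
        for order in orders:
            totals[order["customer"]] = totals.get(order["customer"], 0.0) + float(order["amount"])
        return totals

    def format_report(totals):
        """Return the report as a printable string, one customer per line."""
        return "\n".join(f"{name}: {total:.2f}" for name, total in sorted(totals.items()))

    def generate_report(path):
        """Top-level routine: short, and it just wires the pieces together."""
        return format_report(total_by_customer(load_orders(path)))

Each piece fits on a screen, takes specific inputs, and returns specific values, so the top-level routine stays readable.
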
  • Document as you write.
    • Every class, method, handler, subroutine, etc. should have documentation appropriate for the language you’re using (e.g. POD for Perl).  If you don’t know what that is, go Google “standard documentation format for <insert_language_here>” right now and learn it.
    • Document:
      • What your code chunk does, in a one-sentence summary.  Your user is looking for modules to use and should be able to read that line in a list and pick the module they need.
      • What parameters (input) it takes.
      • What it does, in more detail.
      • What it returns.
    • In a Wiki or similar (I use Google Docs for personal stuff), document the system itself.  If you get to more than 5 classes, you’re going to start forgetting what calls what and where your data flows.  At a minimum, document data flows and sequence diagrams.
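
In Python, for example, that per-method documentation lives in a docstring.  A quick sketch of the shape (the function itself is hypothetical):

    def move_widget(widget_id, x, y):
        """Move a widget to a new position on the canvas.

        Looks up the widget by ID, updates its coordinates, and redraws it.

        Parameters:
            widget_id (int): ID of the widget to move.
            x (int): New horizontal position, in pixels.
            y (int): New vertical position, in pixels.

        Returns:
            bool: True if the widget was moved.
        """
        ...

The one-sentence summary goes first, then the details a caller needs: inputs, behavior, and return value.
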
  • Have each code chunk complete its task or throw an exception.
    • Each method/handler/subroutine should either complete what it’s asked to do or throw an exception.  As you’re coding, you can start with a general exception; then, as specific errors start showing up, update the code to throw specific exceptions and possibly catch and deal with them.  This makes several things possible:
      • Calling code can just call the method instead of having to check obscure return values.
        • Makes calling code cleaner:
          • getWidget();
          • moveWidget();
          • deleteWidget();
          • Instead of:
          • if (getWidget() == -1) { dealWithBadWidget(); }, etc.
    • Validate inputs to the method, and throw an exception if an input is invalid.  Assuming your caller may give you bad values helps prevent obscure bugs.  Note that distinguishing between user input and programmatic input is handy: an input of the wrong class is a program problem; an input with a semicolon in “first name” is a user error.  Throw a different type of exception for each: program problems should alert the developer, user errors should alert the user.
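
Here’s a minimal Python sketch of both ideas, the clean calling code and the two kinds of exceptions (the exception classes and save_contact are hypothetical names):

    class ProgramError(Exception):
        """A bug: the calling code passed something it never should have."""

    class UserInputError(Exception):
        """Bad data from the user: report it to the user, not the developer."""

    def save_contact(first_name):
        # Programmatic input problem: wrong type means a bug in the caller.
        if not isinstance(first_name, str):
            raise ProgramError(f"first_name must be a str, got {type(first_name).__name__}")
        # User input problem: tell the user, don't page the developer.
        if ";" in first_name:
            raise UserInputError("First name may not contain a semicolon.")
        return True  # pretend the contact was saved

    # Calling code stays clean: no obscure return values to check.
    try:
        save_contact("Robert; DROP TABLE")
    except UserInputError as err:
        print(f"Please fix your input: {err}")
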
  • Write unit tests as you debug.
    • You’re going to write code to debug anyway; just turn it into unit tests.  Follow the format appropriate for the environment/language you’re writing in.  Don’t worry about making super-comprehensive tests; just make sure you have (safe) tests you can run that exercise the things you’re working on.  (There’s a minimal unittest sketch at the end of this section.)
    • Unit Testing Frameworks:
      • Perl: Test::More, but really see “man perlnewmod”
      • PHP: PHPUnit
      • JavaScript: QUnit
      • Java: JUnit
      • Python: unittest
      • iOS/Mac OS X: built into Xcode!
    • If it’s not practical to write automated tests (e.g. for a user flow on a web site, although see Selenium), make a QA script (AKA QA test procedure):
      • Make a 2-column spreadsheet (Google Docs is great for this).  Column one is “Step”, column two is “Result”.  I like to keep this really simple and have the result be “PASS” or “FAIL”.  If there are any notes about the failure, I add them as a note attached to the cell.
      • In the “Step” column, list the steps to test the feature, one step and result per line.  Make sure each step and its verification can clearly pass or fail.  So a test for Google search might look like: “Go to www.google.com.  Page loads and displays a search box.”, “Type ‘pizza pie’ in the search box, hit return. Search results display.”, “Top 3 search results contain the words ‘pizza’ and ‘pie’ in the title.”, “Ads display on the top and right side of the page.” etc.
      • You can also use conditional formatting to make the backgrounds of the Results cells red or green so you get a really clear-at-a-glance picture of whether your code works or not.
      • After you make significant changes, before you push to production, run through the script yourself or have an intern or assistant do it.
      • Tips:
        • Split the tests into multiple tabs (or spreadsheet docs if appropriate) based on feature (think separate unit tests), and name the tabs/docs by feature.
        • Keep the scripts short enough to quickly test a feature, and as long as necessary to include all steps needed to test that feature (e.g. if you’re testing a shopping cart’s order confirmation page, your test will need to include adding items to the cart, signing in and/or registering, entering test payment info, and confirming your order).
        • Write the steps very clearly.  Pretend you’re writing a program.  A person with no technical knowledge should be able to follow your steps without thinking and mark them “PASS” or “FAIL”.  e.g. say “go to http://www.google.com, verify a search box displays” and not “load the web site” (which web site?), or “see if it loads correctly” (what’s “correct”? If I’m testing, I may think the error message is “correct” because its grammar is accurate).
        • “PASS” or “FAIL” makes debugging easy.  If your steps and verifications can’t pass or fail, break them into smaller steps or separate test sheets.  Your automated unit tests don’t leave vague notes – your manual ones shouldn’t either.
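
And to close out the unit-test point above, here’s a minimal Python unittest sketch.  It exercises the hypothetical save_contact function from the earlier exception example (the “contacts” module is an assumed name):

    import unittest

    # Hypothetical import: save_contact and UserInputError from the earlier sketch.
    from contacts import save_contact, UserInputError

    class SaveContactTests(unittest.TestCase):
        def test_valid_name_saves(self):
            self.assertTrue(save_contact("Ada"))

        def test_semicolon_is_a_user_error(self):
            with self.assertRaises(UserInputError):
                save_contact("Robert; DROP TABLE")

    if __name__ == "__main__":
        unittest.main()

Run it with “python test_contacts.py” (or whatever you name the file) whenever you touch the code it covers.
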

There.  Solid code in 4 steps.