
nodejs unit testing

My habits

  • write code
  • write tests

(sometimes done in the reverse order)

My tools

  • Testing framework
  • Assertions package
  • Mocking package
  • Coverage tool

Frameworks

Frameworks

  • jasmine

  • mocha

  • qunit

  • vows

Mocha

  • Came out in 2012
  • JUnit style
  • Relatively lightweight
  • Does not include assertions or mocking

Setting up Mocha

package.json:

{
  "name": "test-program",
  ...
  "scripts": {
    "test": "mocha"
  },
  "devDependencies": {
    "mocha": "~2.3.3"
  }
}

Some options for directory structure:

test/test-index.js
test-index.js

Mocha structure

describe('User', function() {
  it('should save without error', function(done) {
    var user = new User('Luna');
    user.save(function(err) {
      if (err) throw err;
      user.delete();
      done();
    });
  });
  it('can log in', function(done) {
    // ...
  });
});

describe('Group', function() {
  it('should save without error', function(done) {
    // ..
  });
  it('accepts user additions', function(done) {
    // ..
  });
});

Run output

 User
    ✓ should save without error
    ✓ can log in

 Group
    ✓ should save without error
    ✓ accepts user additions

 4 passing (10ms)

Failure output

 User
    1) should save without error
    ✓ can log in

 Group
    ✓ should save without error
    ✓ accepts user additions

 3 passing (11ms)
 1 failing

  1) User should save without error:
     Error: Database full
      at Object.User.save (user.js:5:10)
      at Context.<anonymous> (test-user.js:16:10)

Shared setup

describe('User', function() {
  var user;
  beforeEach(function() {
    user = new User('Luna');
  });
  it('should save without error', function(done) {
    user.save(function(err) {
      if (err) throw err;
      user.delete();
      done();
    });
  });
  it('should change name without error', function(done) {
    user.newName("Sol", function(err) {
      if (err) throw err;
      user.delete();
      done();
    });
  });
});

Shared setup and teardown

describe('User', function() {
  var user;
  beforeEach(function() {
    user = new User('Luna');
  });
  afterEach(function() {
    user.delete();
  });
  it('should save without error', function(done) {
    user.save(function(err) {
      if (err) throw err;
      done();
    });
  });
  it('should change name without error', function(done) {
    user.newName("Sol", function(err) {
      if (err) throw err;
      done();
    });
  });
});

Shared setup and teardown

describe('User', function() {
  var user;
  beforeEach(function() {
    user = new User('Luna');
  });
  afterEach(function() {
    user.delete();
  });
  it('should save without error', function(done) {
    user.save(done);
  });
  it('should change name without error', function(done) {
    user.newName("Sol", done);
  });
});

Assertions

Assertions

  • Built-in

assert(myString == "ok")

  • should.js

myString.should.equal('ok');

  • expect.js

expect(myString).to.be('ok')

  • Chai

assert.equal(myString, 'ok');

Chai

  • Chai

myString.should.equal('ok');
expect(myString).to.equal('ok');
assert.equal(myString, 'ok');

Chai in action

describe("RhythmGuard", () => {
  it("can lookup", done => {
    rhythmguard.lookup({ip: "127.0.0.1", checkset: "testSet"}, (err, result, extras) => {
      expect(err).to.not.exist;
      expect(result).to.be.true;
      expect(extras).to.deep.equal({
        "ip": {
          result: true,
          lists: ["nonroutable"]
        }
      });
      done();
    });
  });
});

From real-world projects

expect(parseInt(matches[1])).to.be.at.least(loadCutoff);
expect(finalizer.commit).to.be.a('Function');
expect(data[0]).to.match(/^Your new bucket is [0-9a-zA-Z]{20}$/)
expect(objs).to.have.members(["wazoo", "woohoo",
  "StaggeringlyLessEfficient"])
expect(data).to.be.empty

Mocking

  • Doubles, fakes, spies
  • Chops out time-consuming or difficult-to-control dependencies
  • Databases, network services, filesystems, shell calls, hefty library calls, etc.

Sinon


(there's really nothing else worth bothering with)

Java-inspired mocking (DI)

function duckCount(db) {
  var values = db.query("SELECT COUNT(*) FROM DUCKS");
  return values[0];
}
// ...
it("gives the expected number of ducks", done => {
    var stub = { query: sinon.stub().returns([42]) };
    var result = duckCount(stub);
    expect(result).to.equal(42);
    done();
});

Duck punching

  • Or monkey patching
  • Environmental manipulation
  • Avoids the downsides of DI
  • Available with any decent language

doing it with sinon

// count.js
var db = require('db');
function duckCount() {
  var values = db.query("SELECT COUNT(*) FROM DUCKS");
  return values[0];
}

// test-count.js
var count = require('./count');
var db = require('db');

it("gives the expected number of ducks", done => {
    sinon.stub(db, 'query', () => { return [28] });
    var result = duckCount();
    expect(result).to.equal(28);
    done();
});

Real-world example

var sinon = require('sinon'),
    redis = require('redis'),
    fakeredis = require('fakeredis');

sinon.stub(redis, 'createClient', fakeredis.createClient);

Also

  • Easily simulate behavior
var mockFs = require('mock-fs');
mockFs({
  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {/** empty directory */}
  },
  'path/to/some.png': new Buffer([8, 6, 7, 5, 3, 0, 9]),
  'some/other/path': {/** another empty directory */}
});

Also

  • Easily simulate failures
var fs = require('fs');
sinon.stub(fs, 'open', (path, flags, mode, callback) => {
  callback(new Error("Open failed!"));
});

Risks of mocking

  • May not be faithful to the mocked object
  • Dependency interfaces may change over time

Coverage

Coverage

  • istanbul

  • blanket.js

  • JSCover

Coverage

  • Lets you know what you've put under test
  • Can highlight subtle cases of missed coverage

Automated Coverage

  • Set an "acceptable level" of coverage
  • Frequent runs makes slipping coverage clear
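
One way to wire this up with istanbul 0.x (a sketch; the 90% line threshold is an arbitrary example value):

```json
{
  "scripts": {
    "test": "istanbul cover _mocha",
    "check-coverage": "istanbul check-coverage --lines 90"
  }
}
```

`istanbul cover _mocha` writes a coverage report alongside the test run, and `check-coverage` exits nonzero when coverage slips below the threshold, which makes it easy to fail a CI build.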


Coding cycle

  • Write a couple lines (of code, of tests)
  • Run the tests!
  • Write some more…
  • Run the tests!
  • A little more…
  • Run the tests!

Coding cycle

  • Run tests every few seconds/minutes
  • May help to use nodemon + Growl (or equivalent)

CI

  • Ideally, with every pull request
  • Report to somewhere every dev can see (Email, chat)
  • Fix broken builds

CI

Jenkins

Unit tests are just one piece

  • manual dev testing
  • unit tests
  • regression testing

writing a proper unit test

  • Make them fast!
  • Don't be overly concerned about the "unit" in "unit test"
  • Code boundaries are less important than user needs
  • Keep the end goal in mind (why are you doing any of this?)

The "unit" way

  • Test classes closely mirror code classes
  • Test methods closely mirror code methods

The "unit" way

// blocklist.js
class Blocklist extends BaseCheck  {
  constructor(checkset_name) {
    // ...
  }
  prepareChecksetConfig(config, callback) {
    // ...
  }
  run(msg, callback) {
    // ...
  }
}

The "unit" way

// test-blocklist.js
describe("Blocklist check", () => {
  var BL;
  before(() => {
    BL = new Blocklist();
  });
  it("prepares its config", cb => {
    BL.prepareChecksetConfig(null, (error, finalizer) => {
      // ...
      cb();
    });
  });
  it("run does the right thing", cb => {
    BL.run({"page": "http://www.cnn.com"}, (error, result) => {
      // ...
      cb();
    });
  });
});

Not-as-"unit-y"

class RhythmGuard extends EventEmitter {
  constructor() {
    // ..
  }
  init(config, callback) {
    if (cluster.isMaster) {
      process.on('SIGHUP', () => {
        async.forEachOf(cluster.workers, (obj, id) => {
          cluster.workers[id].send("reload");
        });
      });
    } else {
      process.on('message', msg => {
        if (msg === "reload") {
          // ...
        }
      });
    }
  }
}

Not-as-"unit-y"

describe("Clustering", () => {
  it("reloads children on HUP", done => {
    // Fire up a cluster
    var child = child_process.fork("sample-server.js", [workerCount]);
    process.kill(child.pid, 'SIGHUP');
    // ... verify the workers reloaded, then call done()
  });
});

General tips

  • Consider unit tests as part of the normal workflow
  • Do not separate them when planning
  • Write them going forward, don't skimp

Tests are a pain in the butt

  • Why persist?

They add to development time

  • 2 to 3 times as long in some cases

They can be difficult to write

  • The SIGHUP thing, for example.

They break and have to be fixed

  • May disturb a dark and ancient creature from the depths of an existing test
  • And then you've got to beat it down

I would not want to do without them

  • Prevents a lot of bugs and general stupidity on my part
  • Gives me confidence when adding new features
  • Makes refactoring comfortable
  • Provides ego to my programming id