BDD-Style Testing with MSTest

A way to write BDD-style “specs” using the MSTest framework in Visual Studio.

Why?

BDD-style tests are a human-readable way to write tests.
The idea is to write the test code so that it can be read and understood
like a functional spec. The benefit of this approach is that you end up with
a set of tests that are self-descriptive and act as a living spec that
can both validate and describe the code under test.

There are many BDD test frameworks available, but the ones I looked at
lacked good Visual Studio integration. One option would be to write
adapters that allow another framework – like nspec –
to run under the MSTest runner. The biggest problem with that is that,
without significant work, all you end up with is a single pass/fail
result for the entire hosted run. Also, debugging through multiple test
harnesses is a pain. So I ended up writing BDD-style tests in MSTest directly.

Concepts

There are a few core concepts to grok first.

  • Tests should describe expected behavior, not implementation details.
    You don’t want a test to fail because some expected API wasn’t called.
    It’s more important that the expected outcome be tested.
  • Tests should only test one behavior at a time. When a test fails, it should
    be obvious to the developer what aspect of the behavior is broken. Mondo tests
    with tons of setup and assertions make debugging failures a problem.
  • Tests (and their context) must accurately describe a behavior. If you
    cannot describe the behavior you’re trying to test in a few words, you’re
    trying to test too many things at once. It’s better to break the test
    into multiple tests or add a new context.
  • Tests should be written first. With many BDD frameworks it’s simple to write
    a set of basic tests that assert the behavior the code should have. Once in
    place, it’s very useful to have the tests there as a feedback mechanism that
    the code is behaving as expected.

Check out betterspecs.org for more concepts and
best practices. It’s targeted toward Ruby, but the concepts are the same.

Specs, Contexts, and Tests

The basic layout of a test looks something like this:

  • describe_MyClass (a spec)
    • when_some_condition (a context)
      • it_should_behave_like_this (an assertion)

A “spec” describes some piece of code. Typically this will be a container
object in the test framework – a class in C#. The spec contains a set of
“contexts”. Contexts describe some state surrounding a test. This can be
either literal state (of fields, etc.) or a description of an
action that is being taken.

For example, “when running” is a state context, and
“when concatenating with a string” is an action context.

Contexts will then have “tests/assertions”. These are the behaviors
that the code is expected to have given the context they are in. Usually
a context will have multiple assertions which both test and describe the
behavior of the code for a given context.

For example, in the context of “when running”: “IsRunning should return true”
and “calling Start should throw InvalidOperationException” would be tests that
explain and test the expected behavior.
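
In MSTest terms (the mechanics are covered below), that layout maps onto nested classes and test methods. Here’s a skeletal sketch using a hypothetical Runner class:

public class describe_Runner                        // the spec
{
    [TestClass]
    public class when_running : describe_Runner     // a context
    {
        [TestMethod]
        public void it_should_report_is_running()   // an assertion
        {
            // e.g. Assert.IsTrue(runner.IsRunning);
        }
    }
}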

BDD in MSTest

With a basic understanding of BDD concepts, let’s look at how we can write
BDD-style tests using the MSTest framework built into Visual Studio.

To get started, you’ll need to add a new UnitTest project to your solution.

Namespaces

There are a few ways to organize your test code so that it’s easy to
determine what product code it’s meant to test.

One way is to make a parallel namespace structure:

  • ProductName.TheNamespace
  • ProductName.TheNamespace.Utils
  • ProductName.Test.TheNamespace
  • ProductName.Test.TheNamespace.Utils

The problem with this approach is that you end up with namespace name
collisions, which leads to confusing code and usually requires
name and type aliasing. It also really confuses IntelliSense, which makes
authoring the tests a chore.

Another way, and the way I’m recommending, is to have flattened test namespaces.

  • ProductName.TheNamespace
  • ProductName.TheNamespace.Utils
  • ProductName.Test.TestTheNamespace
  • ProductName.Test.TestUtils

This makes it clear that you’re testing things in TheNamespace and also
prevents the annoying namespace collisions. One thing to note: because
the test namespaces are flattened, you should avoid having namespaces
with the same name in your product code. This isn’t a requirement, but
it makes things saner and is generally a better practice.
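
For example, a test file in the flattened scheme might start like this (ProductName and TheNamespace are placeholders, of course):

namespace ProductName.Test.TestTheNamespace
{
    // The product namespace can be imported directly; since the test
    // namespace doesn't mirror it, no type or namespace aliasing is needed.
    using ProductName.TheNamespace;

    // ... spec classes for the types in ProductName.TheNamespace ...
}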

Specs

Specs in MSTest are mapped 1:1 with classes. Spec classes should be named
like describe_<ClassName>. The top-level test class should NOT be
marked with the [TestClass] attribute because it itself should not contain
any test code*.

*One exception may be if you want a [TestInitialize] method that
would apply to all of the tests in your spec.
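
A spec-wide initializer might look something like this sketch (SomeClass is a placeholder). Note that the method name must not collide with initializers defined in nested contexts; see the gotchas in the appendix.

public class describe_SomeClass
{
    protected SomeClass subject;

    [TestInitialize]
    public void before_all()
    {
        // Runs before each test in every nested [TestClass] context.
        subject = new SomeClass();
    }
}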

For example, let’s write a spec for a class called Base64Utilities. I’ll
build on this example throughout the rest of the doc.

public class describe_Base64Utilities {
  // ... test code ...
}

The only code that should be in the top-level class is member fields and
possibly some helper code that is applicable to all of the tests in the spec.

Contexts

We need a way to define test contexts. A context could be state, an action,
or, as in this case, a particular function under test. To express the context, use an inner class.

For the example, I’m going to describe the static helper method EnsureValidLength.
To add this context I would define an inner class that inherits from the top-level class.
This class will be marked with [TestClass].

Inheritance gives us access to the helpers and state defined in the top-level class.
Also, inner classes show up in Test Explorer as TopLevel.InnerClass, which makes
the test list and test results more readable.

public class describe_Base64Utilities
{
    [TestClass]
    public class EnsureValidLength : describe_Base64Utilities
    {
      // ... test code ...
    }
}

In the inner class, you can also define a method and mark it with [TestInitialize];
MSTest will run it before each test method in the class.
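
For example, the context might set up state that all of its tests share. A sketch (the input field here is purely illustrative):

public class describe_Base64Utilities
{
    [TestClass]
    public class EnsureValidLength : describe_Base64Utilities
    {
        private string input;

        [TestInitialize]
        public void before()
        {
            // Runs before each [TestMethod] in this context.
            input = "AAAA";
        }

        // ... tests that use 'input' ...
    }
}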

Tests

Once your context has been written, you can write some tests. The general naming
convention I prefer is something like “given_blah_it_should_return_foobar” or
“it_should_run_the_initializer”. This style is more readable than the same
names in camel or Pascal case. “ForExampleThisIsTheAlternativeAndItIsNotAsNice”.

The test method should do very little work itself, and it should only
assert the things that it’s explicitly trying to validate. This is true even
if there are other things in the test that you might want to assert; resist the
temptation and create two tests instead. It will make test failures more useful and
your spec more comprehensive.

Here are a few example tests.

public class describe_Base64Utilities
{
    [TestClass]
    public class EnsureValidLength : describe_Base64Utilities
    {
          [TestMethod]
          public void given_null_or_empty_string_it_should_return_null()
          {
              Assert.IsNull(Base64Utilities.EnsureValidLength(null));
              Assert.IsNull(Base64Utilities.EnsureValidLength(String.Empty));
          }

          [TestMethod]
          public void given_string_less_than_4_chars_it_should_return_null()
          {
              Assert.IsNull(Base64Utilities.EnsureValidLength("AAA"));
          }

          // ... more tests ...
    }
}

Test Explorer and Test Results

What do you get from writing your tests this way?

First, in the Visual Studio Test Explorer, if you sort by type,
the tests are nicely grouped under the inner class names.

  • describe_Base64Utilities.EnsureValidLength
    • given_null_or_empty_string_it_should_return_null
    • given_string_less_than_4_chars_it_should_return_null
    • … other tests
  • describe_Base64Utilities.DecodePartialBase64String
    • given_invalid_base64_it_should_return_null
    • … other tests

When you run the tests, the results are more meaningful because it’s
easy to understand what behavior is working and what is not.

Appendix

Gotchas

  • If you inherit from a class “context” that has test methods in it,
    the derived class will also run the test methods defined in its parents.

    • To deal with that, either don’t define test methods in parent scopes, or hide them in the derived class by redefining them with ‘new’ (demonstrated in the “Testing MSTest” code below).
  • Nested classes need different names for their “before” ([TestInitialize]) methods.
    • If a [TestInitialize] method is hidden with new, the base implementation will not be run.

Testing MSTest

Included in case you want to play around with the MSTest behavior yourself.

public class describe_NestedContexts
{
    public string theState = String.Empty;

    [TestInitialize]
    public void before_all()
    {
        theState += "Before Parent";
    }

    [TestClass]
    public class when_running_tests_in_inner_classes : describe_NestedContexts
    {
        [TestInitialize]
        public void before()
        {
            theState += ", Before Child";    // before names must be unique or they will collide
        }

        [TestMethod]
        public void it_should_run_all_test_initializers()
        {
            Assert.AreEqual("Before Parent, Before Child", theState);
        }

        [TestClass]
        public class when_going_deeper : when_running_tests_in_inner_classes
        {
            [TestInitialize]
            public void before_deeper()   // before names must be unique or they will collide
            {
                theState += ", Before Other Child";
            }

            // have to hide the parent's test methods
            public new void it_should_run_all_test_initializers() { }
            public new void it_should_not_be_affected_by_child_classes() { }

            [TestMethod]
            public void it_should_run_all_test_initializers_again()
            {
                Assert.AreEqual("Before Parent, Before Child, Before Other Child", theState);
            }
        }

        [TestMethod]
        public void it_should_not_be_affected_by_child_classes()
        {
            Assert.AreEqual("Before Parent, Before Child", theState);
        }
    }
}

Class Modules in Lua

I’ve been playing around with the Löve2D game engine as part of a side project. It’s a 2D engine that uses Lua for its scripting. Part of trying it out meant getting familiar with Lua.

Lua is an interesting little language: very simple in its syntax, but also quite flexible and extensible. Out of the box, the language offers very little OOP capability. There’s some special syntax to automagically declare and pass around a this (self in Lua) reference, but other than that it’s definitely “roll your own”.

If you’re not familiar with Lua, here’s a quick primer. Everything is represented by a table (think dictionary/associative array). You create a reference to a table; the table can contain named values, and those values can be functions.

x = { a = 5 } creates a table referred to by x which has a single member a that has the value 5. You can get and set the value of a like this: print(x.a) or x["a"] = 5. It’s a lot like JavaScript in that regard.

You can also store functions in tables.
x = { sayHi = function() print("hi") end }
You can call the function like this: x.sayHi().

If you want to build an “object” you can do something like this:
obj = {val = 0, incr = function(self) self.val = self.val + 1 end }
In this example, we’ve created a table that holds a value and a “method”, incr, that acts on the object. To call incr you must either pass in the object reference like this, obj.incr(obj), or use the special colon syntax, obj:incr(), which will automatically pass the containing table as the first parameter.

You’ll also need to know about metatables, specifically the __index metamethod, to really understand the rest of this post.
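
If you haven’t seen them before, here’s the one-minute version: a metatable lets a table customize certain operations, and the __index metamethod provides fallback lookup for keys the table doesn’t contain. A minimal illustration:

local methods = {
  greet = function(self) print("hi, " .. self.name) end
}

-- Any key not found in obj is looked up in 'methods' via __index.
local obj = setmetatable({ name = "world" }, { __index = methods })

obj:greet()   --> hi, world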


Disclaimer: I’ve been programming professionally for years, but I’ve only been messing around with Lua for about a week.

If you search for “Lua class modules” online, the top two hits are blog posts with very different approaches to creating modules that export OO types: Hisham’s guidelines for writing Lua modules and catwell’s response to that post.

After reading both, I prefer catwell’s approach because of the flexibility around implementation hiding and the ease of changing the interface that the module exports. The examples in his post had one thing I didn’t like: setting anonymous metatables makes implementing metamethods clunky, because they are defined separately from the rest of the methods. The anonymous metatable does have a benefit, though: because the method table itself is not directly exported, the internal implementation is safe from tampering (you can still get at it). There’s another way to protect the method table from monkey patching, metatable hiding, which is how I address the problem below.
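
For contrast, here’s a rough sketch of the anonymous-metatable style as I understand it from catwell’s post (my own reconstruction, not his code). Notice that the __tostring metamethod has to be defined on the metatable, away from where the methods live:

-- the method table holds the normal methods
local methods = {}

methods.increment = function(self)
  self.val = self.val + 1
end

-- the metatable is a separate table that is never exported;
-- metamethods end up defined here, apart from the methods
local mt = {
  __index = methods,
  __tostring = function(self) return "Counter is " .. self.val end,
}

local new = function(val)
  return setmetatable({ val = val or 0 }, mt)
end

return { new = new }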

To demonstrate my approach, I’m going to walk through a table-based implementation of a Counter class module. The explanation will be in the code comments.

-- First, I pre-declare a table that will act as both
-- the method table and the metatable. I do this because
-- I like to define my 'new' function before the other
-- functions so it's available if I want to return new
-- objects from within other functions in the module
local mt = {}

-- Next, define all of the module functions as local
-- functions. Doing it this way, instead of directly
-- hanging the methods off of the class table, allows
-- more flexibility to modify the internal
-- implementation separately from the exported interface.
-- This is aligned with catwell's approach.

-- Define the 'new' function. The member fields are
-- also defined here. Notice that I'm setting the
-- metatable to mt (that's why it needed to be pre-declared).
local new = function(val)
  local obj = {
    val = val or 0
  }

  return setmetatable(obj, mt)
end

-- Define the increment function. Notice I'm not using
-- colon syntax here for the reasons catwell outlined.
local incr = function(self)
  self.val = self.val + 1
end

-- Define a tostring function
local tostring = function(self)
  return string.format("Counter is %d", self.val)
end

-- Now that the implementations are defined, we can add
-- them to the method table. This also gives you a nice
-- place to refine the interface you want to export.
-- E.g. I will export 'incr' as 'increment'.
mt.increment = incr

-- If the class had more methods, they would be
-- exported here. Because I'm also using mt for a
-- metatable, I can export metamethods too.
mt.__tostring = tostring

-- Now that the interface has been defined, I need to
-- set up the metatable. First the metatable needs to
-- use itself for method lookup.
mt.__index = mt

-- Next, because we don't want the method table to be
-- tampered with, hide the metatable. This line will
-- make getmetatable(x) return an empty table instead
-- of the real metatable. If we didn't do this, consumers
-- could get the metatable and, because it's also the method
-- table, could monkey patch the implementation.
mt.__metatable = {}

-- That's pretty much it. I also add a 'constructor'
-- to forward the arguments to the 'new' function.
local ctor = function(cls, ...)
  return new(...)
end

-- Finally return a table that can be called to get a new
-- object. You could also simply return a function or a
-- table with a 'new' member. It's all a matter of style and
-- what syntax you want your consumers to use.
return setmetatable({}, { __call = ctor })

You use my new class module like this (in the REPL).

> Counter = require "counter"

> c = Counter()
> print(c)
Counter is 0

> c:increment()
> print(c)
Counter is 1

> c2 = Counter(100)
> print(c2)
Counter is 100

Pretty straightforward. Notice I’ve made two instances: one initialized with the default and one initialized with 100. You can see that they each have their own value that can be incremented independently using the exported name ‘increment’ (as opposed to the internal function ‘incr’). Also notice that the metamethod __tostring is defined and forwards to the internal implementation of tostring.

Now let’s test how “safe” it is. First, let’s try overriding ‘increment’ on one of the objects and verify it doesn’t affect the other. I’m overriding the behavior on instance ‘c’ to increment by 10.

> c.increment = function(self) self.val = self.val + 10 end
> c:increment()
> print(c)
Counter is 11

> c2:increment()
> print(c2)
Counter is 101

Good. It only affects instance ‘c’. How about if we try to explicitly patch the method on the method table? You can usually get a reference to that table via the __index field of the metatable.

> =getmetatable(c).__index
nil

Can’t do that either, because the metatable is hidden (by setting the __metatable field).


To recap: I prefer the module style described in catwell’s post because it allows you to more formally export an interface that isn’t directly tied to the implementation, and it promotes simpler function design and better implementation hiding. My approach differs from catwell’s only in the mechanism used to hide the method table. Combining the metatable and the method table, instead of using an anonymous metatable, makes defining metamethods clearer and cleaner. This is especially useful for metamethods like __add and __eq.

That said, I would not consider myself an adept Lua programmer. If I’ve overlooked something, please help me learn by leaving a comment below.