OK, so I said I was going to cover Machine Specifications this week, but I thought a look at some SpecFlow tests in the wild, so to speak, might be helpful, and it gives me a chance to showcase a home project.
A long time ago I wrote a small application that watched C# solutions: when a save was made, the application built the solution, ran all the tests, and displayed the results in a popup window. I have used this on many projects, but I have started to find problems with the way it operates, so I decided on a rewrite, as the original was all legacy code and I wanted to provide better support for the application.
I put together two features that covered the basic functionality I wanted:
- Feature: nTestRunner solution change
- As a developer
- In order to get rapid feedback
- When I save a file, the program should be compiled and
- all tests run and the results stored in a file in the
- same format as nunit, so I can use beacons to view the
- results of the test.
- The file watcher watches the solution file and the immediate
- project directories. When a change in these watched areas is
- detected, the watcher is stopped, the build and test cycle is
- started, and the results are written to the XML results file in
- NUnit format. If the runner display is set, it is activated with
- the results from the build and test cycle.
- Scenario: with default configuration
- Given the program is running with no argument
- When a change event is received from the watcher
- Then the watcher is switched off
- And the build is triggered
- And the tests are run
- And the results are stored
- And the watcher is restarted
- Scenario: with test runner configuration set
- Given the program is running with MSpec argument
- When a change event is received from the watcher
- Then only projects with the specified runner are tested
- Scenario: with the display runner configuration set
- Given the program is running with Growl argument
- When a change event is received from the watcher
- Then the runner display is called
- And the runner display is given the test results
I have laid out both feature files in the same manner: this is my documentation, so there is a large amount of text that is pure description before we get to the scenarios. The scenarios in the above file have been tweaked so that each one uses a similar syntax, which will assist me when I write the step definitions.
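As an aside, the first scenario's steps map onto a simple pause/build/test/restart cycle. Here is a minimal sketch of that cycle using .NET's FileSystemWatcher; the Build, RunTests and WriteResults methods are hypothetical placeholders standing in for the real nTestRunner code, not the project's actual API:
- using System.IO;
- public class SolutionWatcher
- {
-     readonly FileSystemWatcher _watcher;
-     public SolutionWatcher(string solutionDirectory)
-     {
-         _watcher = new FileSystemWatcher(solutionDirectory) { IncludeSubdirectories = true };
-         _watcher.Changed += OnChanged;
-         _watcher.EnableRaisingEvents = true;
-     }
-     void OnChanged(object sender, FileSystemEventArgs e)
-     {
-         _watcher.EnableRaisingEvents = false;    // the watcher is switched off
-         try
-         {
-             Build();                             // the build is triggered
-             var results = RunTests();            // the tests are run
-             WriteResults(results);               // the results are stored
-         }
-         finally
-         {
-             _watcher.EnableRaisingEvents = true; // the watcher is restarted
-         }
-     }
-     // Hypothetical placeholders for the real build, test, and NUnit-format reporting code.
-     void Build() { }
-     string RunTests() { return string.Empty; }
-     void WriteResults(string results) { }
- }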
- Feature: nTestRunner Program Startup
- As a developer
- In order to get rapid feedback
- When I save a file, the program should be compiled and
- all tests run and the results stored in a file in the
- same format as nunit, so I can use beacons to view the
- results of the test.
- This feature covers the startup of the program.
- calling nTestRunner
- starts the program up in default mode: it will scan
- up the directory chain looking for a solution file,
- starting at its current directory. It is this file
- that the application uses to decide what is a test
- project, and it is the file that is passed to MSBuild.
- It is assumed that all test files in the solution are
- to be run, regardless of the test runner used, and all
- results are combined into NUnit result format.
- No display is made when the tests are run; only the file
- is written, to the same directory as the found solution file.
- calling nTestRunner -Path | -P [path to solution file]
- starts the program up with the solution file path set
- calling nTestRunner -Test | -T [Test runner name]
- starts the program up with only the specified test runner
- calling nTestRunner -Display | -D [Runner Display Name]
- starts the program up with the specified display runner
- Scenario: Startup without arguments
- Given that the program is not running
- When the program is run with arguments ''
- Then the user sees text containing 'nTestRunner version 1.0'
- And the user sees text containing 'Watching Files'
- And the user sees text containing 'nTestRunner.sln'
- And the user sees text containing 'nTestRunner.features.csproj'
- And the user sees text containing 'nTestRunner.Spec.csproj'
- Scenario: Startup with path arguments
- Given that the program is not running
- When the program is run with arguments '-Path,C:\TestSolution.sln'
- Then the user sees text containing 'TestSolution.sln'
- And the user sees text containing 'TestProject1.csproj'
- And the user sees text containing 'TestProject2.csproj'
- Scenario: Startup with test runner arguments
- Given that the program is not running
- When the program is run with arguments '-Test,MSpec'
- Then the user sees text containing 'Running tests with MSpec'
- Scenario: Startup with display arguments
- Given that the program is not running
- When the program is run with arguments '-Display,Growl'
- Then the user sees text containing 'Displaying results in Growl'
This second file has not undergone any changes as yet, and is still in the raw form from the initial project outline, although it should not take much tweaking to get it into shape.
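For illustration, the startup switches the feature describes could be handled with a parser along these lines. This is only a sketch of the idea; the Options class and its field names are my invention, not the project's actual code:
- public class Options
- {
-     public string SolutionPath;
-     public string TestRunner;
-     public string DisplayRunner;
-     public static Options Parse(string[] args)
-     {
-         var options = new Options();
-         // Switches arrive in pairs: a flag (-Path/-P, -Test/-T, -Display/-D)
-         // followed by its value.
-         for (var i = 0; i < args.Length - 1; i += 2)
-         {
-             switch (args[i].ToUpperInvariant())
-             {
-                 case "-PATH":    case "-P": options.SolutionPath  = args[i + 1]; break;
-                 case "-TEST":    case "-T": options.TestRunner    = args[i + 1]; break;
-                 case "-DISPLAY": case "-D": options.DisplayRunner = args[i + 1]; break;
-             }
-         }
-         return options;
-     }
- }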
With the feature files in place, I get an overview of the project and what I need to do to make everything work. At this point I tend not to worry about exceptions but to focus on the "happy path", because that is what the user will care about. Getting the happy path into the features will ensure the application works, give the user the quickest feedback, and allow them to answer the question:
Does what I have asked for make sense?
The next step is to choose a feature to start on. In this case it was easy: I need the start-up features before I can even think about anything else. The step definitions are actually very simple, consisting of just three steps:
- [Binding]
- public class StartupStepDefinitions
- {
-     TestConsole _console;
-     Runner _runner;
-     [Given(@"that the program is not running")]
-     public void GivenThatTheProgramIsNotRunning()
-     {
-         _runner = null;
-     }
-     [When(@"the program is run with arguments '(.*)'")]
-     public void WhenTheProgramIsRunWith(string arguments)
-     {
-         // Split the single scenario string into the argument array a
-         // console application would normally receive.
-         var args = arguments.Split(',');
-         _console = new TestConsole();
-         _runner = new Runner(args, _console);
-     }
-     [Then(@"the user sees text containing '(.*)'")]
-     public void ThenTheUserSeesTextContaining(string expectedText)
-     {
-         Assert.Contains(expectedText, _console.Output);
-     }
- }
And that's it. The only interesting item in the list is the "WhenTheProgramIsRunWith" step, as this splits the argument string at the comma to produce the argument array that is usually passed into a console application.
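For example, the argument string from the path scenario becomes the same two-element array a console application's Main would receive:
- var args = @"-Path,C:\TestSolution.sln".Split(',');
- // args[0] == "-Path"
- // args[1] == @"C:\TestSolution.sln"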
The test console is simply a class that takes the place of the normal standard out for this application. The actual implementation will simply call Console.Write with any string given to it; this version, however, aggregates the strings sent to it and returns them when the Output property is read.
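A minimal sketch of such a test double might look like this (the IConsole interface name is my assumption; the real abstraction may differ):
- using System.Text;
- public interface IConsole
- {
-     void Write(string text);
- }
- // Test double that aggregates everything written to it so the step
- // definitions can assert against the combined output.
- public class TestConsole : IConsole
- {
-     readonly StringBuilder _output = new StringBuilder();
-     public void Write(string text)
-     {
-         _output.Append(text);
-     }
-     public string Output
-     {
-         get { return _output.ToString(); }
-     }
- }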
Next time I will cover Machine Specifications, and show how I judge when to move from these high-level, whole-system tests to the lower-level specification tests.
You might want to avoid using private members in your SpecFlow definitions. I recorded a short screencast on the subject a while back:
Darren, thanks for the feedback and the screencast.
I only tend to use the context classes when my step definitions start to flow across classes; I think the local storage of data helps when reading the step definitions, especially when first starting out. But I totally agree that once you start to reuse step definitions and have them across classes, using the context is a must. I also like the way you refactor your classes to improve navigation at the end of your demo.
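For anyone who has not seen it, SpecFlow's context injection amounts to something like this: a plain context class is constructor-injected into each binding class, so the same instance is shared across step definition classes for a scenario. A sketch only; RunnerContext is a hypothetical name:
- // Hypothetical shared state; SpecFlow injects the same instance into
- // every binding class within a single scenario.
- public class RunnerContext
- {
-     public TestConsole Console;
-     public Runner Runner;
- }
- [Binding]
- public class StartupStepDefinitions
- {
-     readonly RunnerContext _context;
-     public StartupStepDefinitions(RunnerContext context)
-     {
-         _context = context;
-     }
- }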
Ok, cool, just passing the idea along. 🙂 This is one of the ideas I’m always passing to developers when I see people tweeting or blogging about SpecFlow. I think the private member issue is a big “gotcha” that keeps .Net developers from moving past the demo SpecFlow use and into full-app SpecFlow testing.
Thanks for your posts on MSpec, too. I’m getting into that myself now, and it was a good read.