TFS2010 WIT: A better User Story work item template

Here is a revised version of Microsoft’s Agile User Story Work Item Template (WIT), which I have adapted to permit greater flexibility in rejecting the story.  You can download it here: http://2020fs.com/assets/TFSWITs/user%20story.xml

There is no warranty, expressed or implied; it’s free for use and modification.  All I ask is that you credit it back to here with the appropriate links if you use it.

Enjoy.

User stories represent narrow verticals, typically one screen of user interface or a short user journey.  Where a screen is reused but with different details (fields, workflow, purpose), for example according to alternate actors, more than one User Story will exist, linked by a “related” type relationship.

Assigned To

This field holds the identity of the BA who is responsible for delivering the analysis (when the workflow status is any of New, In Analysis, Ready for Analysis Review by Client, Ready for Development) and of the developer who takes ownership of the implementation (when the workflow status is Ready for Development or any subsequent state).  During testing, the owning developer is still named in this field to make it easier for the USER STORY to be returned should any part of the testing fail.

State

[Diagram: User Story WIT workflow]

This describes the current position of the USER STORY in the workflow (see diagram above).

State is changed to | State set by | Trigger | Actor | Action
--- | --- | --- | --- | ---
New | Analyst | New requirements are discovered | Analyst | User Story is created
In Analysis | Analyst | Requirements are understood | Analyst | Business requirements documentation commences
Ready for Analysis Review by Client | Analyst | Requirements are documented | Client analyst | Client reviews business requirements documentation
Ready for Development | Analyst | Client validates business requirements | PM, TDA, Tester | TDA estimates workload; PM allocates USER STORY to a phase; Tester commences describing behaviours; unit test skeletons written
In Development | PM | USER STORY is assigned to next/current phase | Developer | Implementation commences
Ready for Testing | Development manager | Implementation is complete | Tester | USER STORY enters testers’ queue
In Testing | Tester | Tester is available to test USER STORY | Tester | Testing commences
Testing Passed by Supplier | Tester | USER STORY passes all behavioural tests and meets exit criteria | Client tester | Client reviews implementation
Testing Passed by Client | Tester | Client validates implementation | – | –
Deprecated | Analyst | USER STORY is out of scope | – | –
Deleted | Analyst | USER STORY is mistakenly created | – | –
  • At any point, the analyst may remove the USER STORY either by marking it as Deprecated, when the USER STORY is out of scope, or as Deleted, when the USER STORY has been created in error.  Although this action can be executed by any TFS user, it should only take place with the express agreement of the analyst.
  • A User Story is created by the analyst as soon as she becomes aware of its existence.  All User Stories and requirements are to be captured, regardless of their scope.
  • Once the analyst begins work on understanding and documenting the USER STORY, she marks it as In Analysis.
  • When the analyst has completed the USER STORY, she marks it as Ready for Analysis Review by Client and then presents the USER STORY to the Client for review and validation.
  • If the Client is not completely satisfied with the USER STORY, the analyst returns the state to In Analysis and makes the changes recommended by the Client.
  • Once the Client has agreed that the USER STORY is correctly written, covering the expected business requirements comprehensively and without ambiguity, the analyst marks the USER STORY as Ready for Development.
  • User Stories with a state of Ready for Development are reviewed by the technical design authority, who may add implementation notes to the Developer and Tester descriptions accordingly.  If the TDA is not satisfied with the content of the business description, he may return the state to New for further work by the BA, or he may amend the USER STORY and return it to Ready for Analysis Review by Client.  In either case, the USER STORY is assigned back to the BA.
  • If the USER STORY meets with the satisfaction of the TDA, the project manager will decide in which phase the USER STORY belongs.  A developer may or may not be assigned to the USER STORY at this point.  If a developer is assigned, he becomes the owner of the implementation of the USER STORY, although that does not preclude other developers from performing some or all of the implementation.
  • User Stories that are marked as Ready for Development can be reviewed by the tester who may now start describing the behaviours and writing test scripts for the USER STORY.
  • Once a developer is ready to implement a User Story, he marks it as In Development.  He may work alone on this USER STORY or he may wish to involve other developers.  The owner is the developer assigned to the USER STORY and it is his responsibility to ensure it has been implemented satisfactorily with sufficient code coverage and meeting all of the requirements described in the USER STORY.  Should the tester have already described behaviours, the developer owner should be satisfied that the implementation passes all tests described thus far.
  • In the event that a developer is unable to implement the User Story due to incomplete analysis, he may return it to In Analysis, flagging the reasons in the Notes or other sections as appropriate.
  • When a User Story has been implemented to the satisfaction of the developer owner, he will notify the development manager who will mark it as Ready for Testing, at which point it joins the test queue.  He should not change the Assigned To field.
  • The tester shall frequently run a query to find User Stories marked as Ready for Testing.  Once the tester is available to test the USER STORY, he moves it to In Testing but does not change the Assigned To field.
  • A tester may reject the analysis of the USER STORY and return it to In Analysis, setting the Assigned To field to the business analyst.  This is expected to be a rare edge case.
  • Only when the Test Manager is satisfied that a User Story fully implements all of the behaviours described in the USER STORY and passes all tests should he mark it as Testing Passed by Supplier.  Bugs may be raised against “passed” User Stories, typically regarding non-functional implementation detail or test automation.
  • The tester is responsible for reporting to the Client that a User Story is believed by Supplier to be complete and ready for their review.  All known associated Bugs should also be reported to the Client at this point.
  • If the Client is unsatisfied with the User Story, she will ask the tester to return the USER STORY to the In Analysis state and assign it back to the business analyst.
  • Where the Client is satisfied with the User Story but does not believe the implementation covers the behaviours described in the USER STORY, she will request that the Supplier tester returns the USER STORY to In Development.
  • User Stories that are implemented to the satisfaction of the Client, only to the extent that they meet all of the described or implied behaviours, can be marked by the Supplier tester as Testing Passed by Client.  Bugs which are not covered by the USER STORY, including failure to meet non-functional requirements, should not inhibit the USER STORY being passed by the Client.

Reason

Whenever the state of the User Story changes, the Reason field describes the trigger for the change of state.  For User Stories, there is only one reason available per state transition, so it may be treated as a descriptor only.

Flags

This field is for free-form use by the project manager to mark User Stories to assist with creating queries and therefore should be ignored by all other TFS users.

Area

The Area field describes the high-level vertical association with the application.

Iteration

Iteration is used to identify in which release a User Story is expected to be implemented.  This field should only be changed by the project manager or with the consent of the project manager.

Approved by, approved on

The Approved fields are used to record the acceptance of the business understanding and of the implementation of the User Story by the Client.

  • When the state of the User Story changes from Ready for Analysis Review by Client to Ready for Development, the name of the business owner or her proxy should be set in the Approved By field by the Supplier business analyst.  The time of the approval should be recorded in the Approved On field at this point.  The BA may also choose to add notes to the History field.
  • Whenever the Business description field changes, the user must ensure that the Approved By and Approved On fields are set to blank [no value].  Like all changes to the work item, the previous values can be found in the History audit log.
  • When the state of the User Story changes from Testing Passed by Supplier to Testing Passed by Client, the Supplier tester should set the name of the Client tester who approved the implementation and the date and time at which approval was granted.  The Supplier tester may also choose to add notes to the History field.

Approve before

This field is used exclusively by the project manager as a record of the deadline by which the Client must fulfil their responsibility to either review a User Story or the implementation of a User Story.

Stack rank

This field describes the priority of a User Story and should be populated by the project manager, optionally with the assistance of the technical design authority, business analyst, tester and developers.  Stack Rank should be a number, typically in the thousands, where the lowest numbers represent the highest priorities.  Generous gaps are normally left between values to make it easier to insert priorities later on (e.g. Stack Rank 2500, 2750, 2800, etc.).  Typically, the last three digits represent priority and the first one or two the phase: phase 10 User Stories default to 10500, phase 9 to 9500.  The last three digits are then adjusted according to the place of the USER STORY in priority; the lower the number, the higher the priority.
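
A minimal sketch of that convention (the method name is illustrative, not part of the template):

    public static int StackRank(int phase, int priority)
    {
        // Phase 10, priority 500 gives 10500; lower numbers mean higher priority.
        return (phase * 1000) + priority;
    }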

Story points

The project manager shall work with the technical design authority to populate the high-level estimate of implementation effort, spread across the development team but typically excluding analysis and testing effort.  This value is used for project planning purposes only; it is not related to task estimates and should be ignored by analysts, developers and testers.

Security risk

The technical design authority may populate this field with a perceived risk of security implications for consideration by developers.  Developers may, with consent of the TDA, upgrade the security risk.  Where security risk is raised, accompanying information may be present in the Developer Description, Tester Description and/or Notes fields.

Business description with acceptance criteria

All User Stories are described in terms of business requirements; all new User Stories are captured in the classic postcard format (e.g. “As an <actor>, I want <goal> so that <benefit>”).

It is the responsibility of the business analyst to populate this field and to present it to the Client analyst/business owner until it has been approved by the Client.  Whenever this field is changed, the Approved By and Approved On fields should be set to blank [no value] and the project manager should be made aware that the Approve Before date may need updating.  All parties (developers, tester, PM, TDA) must immediately receive communication from the BA that the USER STORY has been changed.

The content of this field fully and accurately describes the business requirements with relation to the USER STORY.  Should any wording in the USER STORY conflict with understanding expressed elsewhere, such as an attached item, the Business Description should be considered correct.

Developer description

This memo field contains notes to aid development.  These notes can be updated by the technical design authority, developers, business analyst or tester as they feel appropriate.  The contents of this field may be changed during the phase without affecting the analysis approval state.

Tester description

This memo field contains notes to aid testing.  These notes can be updated by the technical design authority, developers, business analyst or tester as they feel appropriate.  The contents of this field may be changed during the phase without affecting the analysis approval state.  Testers must be familiar with all elements of the User Stories, with special emphasis on the business description and tester description, and consider these when testing and raising Bugs.

Check-In Metrics and Code Reviews

Check-in metrics may include

  • test coverage,
  • duplicated code,
  • cyclomatic complexity,
  • afferent coupling (the number of other packages that depend upon classes within the package as an indicator of the package’s responsibility)
  • efferent coupling (the number of other packages that the classes in the package depend upon as an indicator of the package’s independence)
  • Instability (I): the ratio of efferent coupling (Ce) to total coupling (Ce + Ca), such that I = Ce / (Ce + Ca). This metric is an indicator of the package’s resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package (a sketch of this calculation follows the list).
  • warnings,
  • code style
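
A minimal sketch of the instability calculation, assuming the coupling counts have already been gathered per package (the Package type here is illustrative):

    public sealed class Package
    {
        public string Name { get; set; }
        public int AfferentCoupling { get; set; }   // Ca: packages that depend on this one
        public int EfferentCoupling { get; set; }   // Ce: packages this one depends upon

        // I = Ce / (Ce + Ca); 0 = completely stable, 1 = completely unstable.
        public double Instability
        {
            get
            {
                int total = EfferentCoupling + AfferentCoupling;
                return total == 0 ? 0.0 : (double)EfferentCoupling / total;
            }
        }
    }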

When running tests, run each set in its entirety so the developer sees everything they will need to fix: first, all build errors; if possible, next all unit tests; then component testing, and so on.

Production lines work on the basis of an assumption that the previous step was correctly executed – this is what allows parallel work.  If the assumption is proven wrong, the work based on that assumption is discarded.

Whatever metrics are chosen to measure development or tester progress, beware that people will inevitably work to the measurement: if developers are rewarded based on the rate at which they produce lines of code, they will create unnaturally large solutions.

Code Reviews

There’s no substitute for having someone who really knows how to program reviewing the code.  If no such person exists in the team, find someone. CUTTING CORNERS HERE WILL COST YOU CONSIDERABLY MORE WHEN THE PROBLEMS ARE DISCOVERED AGAIN LATER ON.  Code reviews are the single cheapest thing that can be done to bring down the total project cost and timescale.  Treat the following as an aide-memoire rather than a substitute for knowing what to do.
Ensure
1) business logic is only in the middle tier
2) business logic is completely covered by unit tests for ALL edge cases
3) business logic is completely covered by unit tests for happy path
4) business logic is completely covered by unit tests for ALL sad paths
5) business logic is completely covered by unit tests for all permutations of flow
6) all XML doc comments are present, useful and correct
 - if it tells you no more than the method or parameter name, the comment is worse than useless
 - all exceptions thrown by the method must be documented
7) no magic numbers
8) No stringly typing
9) All dependencies MUST be externalised
 - Pay special attention to things like DateTime.Now and DateTime.Today – these are untestable and MUST be avoided (a sketch follows this list)
10) Think about the conversation between tiers. Is this code needlessly chatty?
11) Think about possible malicious exploitation of code.  NEVER ASSUME IT IS SAFE BECAUSE OF CIRCUMSTANCE because you will always be wrong.
12) Check that any suppressions added to production code are reasonable and justified
13) Check for warnings
14) Check to see if any previous unit tests have been deleted, ignored or otherwise moved out of scope
15) Generic lists are not exposed.
16) Check IDisposable is used correctly
17) Watch for thread safety
18) Look for code duplication – is inheritance being used correctly?
19) Do any methods have lots of parameters? It’s probably a God method
20) Do any methods do more than one thing? Refactor
21) Are any methods more than a screen of code? Refactor
22) Is there more than one class per file?
23) Are all fields private?
24) Have for-loops been used where LINQ could be used instead?
25) Has LINQ been resolved earlier than necessary?
26) Is arithmetic and conditional operator precedence obvious?
27) Is there any zombie code?
28) Has anyone used anonymous types?
29) Has anyone used anonymous methods?
30) Is interfacing atomic?
31) Is inheritance and interface use in line with Liskov?
32) Are any methods more public than necessary?
33) Do all events correctly follow subscriber/publisher pattern and/or use event aggregator?
34) Have methods been used where really properties should?
35) Do XML comments only describe behaviour? They should NEVER describe implementation
36) Has Exception been thrown directly? I hope not.
37) TODOs and NotImplementedExceptions are mutually inclusive
38) No extension methods on System.Object.
39) Are generics used where appropriate?
40) Generics must not be nested
41) Tuples are PROHIBITED except where there is absolutely no alternative
42) No default parameters
43) no named parameters
44) Exceptions must be explicitly caught, never just a basic Catch
45) Properties must be read only or read-write, never write only.
46) Watch for hiding, reintroducing methods
47) Run cyclomatic complexity indexing
48) Consider afferent coupling
49) Consider efferent coupling
50) Cast cautiously and only when necessary
51) Consider what should be on the stack vs heap
52) No unused private fields or methods. EVER.
53) Rethrow correctly to preserve the stack trace (see the sketch after this list)
54) Consider checked/unchecked
55) Consider NaN
56) Use T? in preference to Nullable<T>
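
Two of the points above lend themselves to short sketches. First, externalising the clock (point 9): a minimal illustration using constructor injection; IClock, SystemClock and InvoiceService are illustrative names, not framework types.

    using System;

    public interface IClock
    {
        DateTime Now { get; }
    }

    public sealed class SystemClock : IClock
    {
        public DateTime Now
        {
            get { return DateTime.Now; }
        }
    }

    public class InvoiceService
    {
        private readonly IClock clock;

        public InvoiceService(IClock clock)
        {
            this.clock = clock;
        }

        public bool IsOverdue(DateTime dueDate)
        {
            // A fake IClock can supply any fixed time under test.
            return this.clock.Now > dueDate;
        }
    }

Second, rethrowing correctly so the stack trace is preserved (point 53); ProcessOrder and Log are hypothetical helpers:

    try
    {
        ProcessOrder();
    }
    catch (InvalidOperationException ex)
    {
        Log(ex);
        throw;        // preserves the original stack trace
        // throw ex;  // would reset the stack trace to this line
    }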

Programming in Configurability for your Application

Over-configurability of the solution can be a great cause of pain and is rarely necessary up-front.  Aim to defer configurability and add it in only when needed.

Configuration data must be persisted in version control as a first-class citizen of the solution.  It may or may not be appropriate to keep configuration data in the same repository as the solution source code.

Note that configuration data must be required only at run-time and never at build time.  Passwords must never be persisted in version control and, where necessary, should be entered during deployment by the deployment engineer.
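
One way to honour this is sketched below, assuming the deployment process sets an environment variable; the APP_DB_PASSWORD name, the DbConnectionTemplate setting and the {password} token are all illustrative:

    using System;
    using System.Configuration;

    public static class ConnectionStrings
    {
        // Non-secret settings live in version-controlled configuration;
        // the password is supplied at deployment time and is never checked in.
        public static string BuildDatabaseConnectionString()
        {
            string template = ConfigurationManager.AppSettings["DbConnectionTemplate"];
            string password = Environment.GetEnvironmentVariable("APP_DB_PASSWORD");

            if (string.IsNullOrEmpty(password))
            {
                throw new InvalidOperationException(
                    "APP_DB_PASSWORD was not provided by the deployment process.");
            }

            return template.Replace("{password}", password);
        }
    }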

Where configuration is best described as one or more virtual machines, these too must be version controlled.

Team Foundation Server Working Practices

Over the years, I have continued to refine the working practices my teams use when using Microsoft Team Foundation Server (TFS) as the application life cycle management (ALM) tool for work item tracking and version control.  Here are my current recommended practices.

Check-Out Policies

Source code, XML and all ASCII-based files shall not be locked when checked out.  Multiple developers are permitted to work concurrently on files, subject to the directions in the Check-In Policies.

Check-In Policies

The Source code policy below has been kept deliberately simple to avoid unnecessary restrictions being placed on the developers.

Peer Reviews

All production code should be peer-reviewed.  Peer reviewers should ensure that

• all of the quality markers covered in the Standards and Practices are followed

• unit testing is appropriate

• business logic is present in the appropriate layers

• logging has been applied in line with expectations

• NFRs have been appropriately considered

• no new compiler warnings have been introduced

Coding Standards

Visual Studio’s Code Analysis (CA) feature reports information such as violations of the programming and design rules set forth in the Microsoft .NET Framework Design Guidelines.  The standards are mandatory unless explicitly stated otherwise.

The following rules shall be set to “Warnings as Errors” – meaning that the solution will not compile if any of these errors occur. This list may be reviewed as appropriate.

• CA1001: TypesThatOwnDisposableFieldsShouldBeDisposable
• CA1002: DoNotExposeGenericLists
• CA1005: AvoidExcessiveParametersOnGenericTypes
• CA1006: DoNotNestGenericTypesInMemberSignatures
• CA1007: UseGenericsWhereAppropriate
• CA1010: CollectionsShouldImplementGenericInterface
• CA1011: ConsiderPassingBaseTypesAsParameters
• CA1021: AvoidOutParameters
• CA1024: UsePropertiesWhereAppropriate
• CA1026: DefaultParametersShouldNotBeUsed
• CA1030: UseEventsWhereAppropriate
• CA1031: DoNotCatchGeneralExceptionTypes
• CA1034: NestedTypesShouldNotBeVisible
• CA1044: PropertiesShouldNotBeWriteOnly
• CA1049: TypesThatOwnNativeResourcesShouldBeDisposable
• CA1050: DeclareTypesInNamespaces
• CA1051: DoNotDeclareVisibleInstanceFields
• CA1058: TypesShouldNotExtendCertainBaseTypes
• CA1061: DoNotHideBaseClassMethods
• CA1062: ValidateArgumentsOfPublicMethods
• CA1063: ImplementIDisposableCorrectly
• CA1064: ExceptionsShouldBePublic
• CA1065: DoNotRaiseExceptionsInUnexpectedLocations
• CA1500: VariableNamesShouldNotMatchFieldNames
• CA1502: AvoidExcessiveComplexity
• CA1504: ReviewMisleadingFieldNames
• CA1505: AvoidUnmaintainableCode
• CA1506: AvoidExcessiveClassCoupling
• CA1800: DoNotCastUnnecessarily
• CA1801: ReviewUnusedParameters
• CA1804: RemoveUnusedLocals
• CA1805: DoNotInitializeUnnecessarily
• CA1806: DoNotIgnoreMethodResults
• CA1807: AvoidUnnecessaryStringCreation
• CA1811: AvoidUncalledPrivateCode
• CA1818: DoNotConcatenateStringsInsideLoops
• CA1819: PropertiesShouldNotReturnArrays
• CA1823: AvoidUnusedPrivateFields
• CA2104: DoNotDeclareReadOnlyMutableReferenceTypes
• CA2124: WrapVulnerableFinallyClausesInOuterTry
• CA2200: RethrowToPreserveStackDetails
• CA2201: DoNotRaiseReservedExceptionTypes
• CA2205: UseManagedEquivalentsOfWin32Api
• CA2214: DoNotCallOverridableMethodsInConstructors
• CA2242: TestForNaNCorrectly

In cases where it may be appropriate to suppress a particular condition, this can be done explicitly in the code through the use of Source Suppression attributes.  This should be done judiciously, with the justification documented in the “Justification” property of the attribute.  Unit Testing code will not have CA applied as it is not a deliverable.
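
As an illustration, an in-source suppression with its justification recorded might look like the following; the rule chosen and the justification text are examples only:

    using System.Diagnostics.CodeAnalysis;

    public class LegacyInterop
    {
        [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
            Justification = "Third-party component throws undocumented exception types; "
                          + "failures are logged and rethrown as a single wrapper type.")]
        public void Invoke()
        {
            // ...
        }
    }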

StyleCop analyzes C# source code to enforce a set of style and consistency rules. It can be run from inside Visual Studio or integrated into an MSBuild project. Each deliverable project will be configured for StyleCop “warnings as errors”. Together with the check-in rules, this means that the code cannot be checked in without conforming to the defined standards.

All check-ins must be accompanied by linked TFS work items to describe the reason for change.  All check-ins must contain at least a short but useful overview description or headline of the reason for change and the change itself.

The following guidelines shall also be followed:

  • Anonymous types should be avoided where practical
  • Anonymous methods should be avoided where practical (this does not affect use of lambdas)
  • Prefer implicit interfaces
  • Interface atomically
  • Use the event aggregator for event subscription/publishing
  • All method names are verbs or contain verbs
  • Prefer properties to methods
  • Always use XML comments for properties, always specify if it Gets and/or Sets
  • XML comments must describe behaviour and never implementation
  • No magic numbers other than 0, 1, -1
  • String literals are held in resource files
  • Never throw Exception, only ever derived exceptions. Treat Exception as though it were abstract
  • Offensive programming: All TODOs must be accompanied by NotImplementedException and vice versa
  • Use LINQ queries in preference to LINQ extension methods
  • Use LINQ queries in preference to loops
  • Only evaluate LINQ expression trees at the last possible opportunity (a sketch follows this list)
  • Never apply extension methods to System.Object
  • All view objects (controls, label, elements in the rendered HTML) must include a unique identifier accessible by the test automation framework.  This identifier must persist each time the view is rendered but be unique within the application
  • Zombie code is prohibited.  Under no circumstances may code that has been commented out be checked in to the source control repository; such lines must be removed prior to check-in
  • Yoda conditions are to be preferred over traditional comparators.  Constants, literals, nulls and read-only members must be presented to the left of equality operators, e.g.

if (null == foo)                Favoured

if (foo == null)                Disfavoured
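
A minimal sketch of the LINQ guidance above, assuming a simple illustrative Order type: prefer a query over a loop, and resolve it only when the results are actually needed.

    using System.Collections.Generic;
    using System.Linq;

    public class Order
    {
        public decimal Total { get; set; }
        public bool IsPaid { get; set; }
    }

    public static class OrderQueries
    {
        public static IList<decimal> OutstandingTotals(IEnumerable<Order> orders)
        {
            // Preferred: a LINQ query rather than a loop with an accumulator.
            var outstanding = from order in orders
                              where !order.IsPaid
                              select order.Total;

            // Execution is deferred: nothing runs until the sequence is
            // enumerated, so resolve it at the last possible opportunity.
            return outstanding.ToList();
        }
    }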

Whilst it is not mandatory to apply the coding standards to Unit Testing code or to Stubs/Fakes, it is encouraged.  StyleCop warnings shall be promoted to errors only on production code.

Improving your Continuous Integration Process

In a previous post, I talked about the McLoughlin Gambit and how continuous integration (CI) can help to detect problems with environmental differences early in the software development life cycle, when they are cheapest to fix.  This post will talk about making the most of your CI set-up.

Now you have the basics of CI up and running, you can take advantage of that infrastructure to test a few more assumptions about your software.  Does the compiler spit out any warnings which you could treat as errors?  If you are writing unit tests (and really, you probably should be), then why not run all of these and make sure they pass? How about using tools like StyleCop or other code analysis offerings to ensure house coding style is being followed and there are no obvious anti-patterns creeping in? Does the installer or deployment process still work?  Do component parts talk to each other correctly and fail gracefully when external dependencies are unavailable?  Will it pass its functional testing? Is it complying with non-functional requirements such as performance or security criteria?

All of these can become part of your CI and, as a rule of thumb, it’s worth doing as many as you can, starting with the first ones listed. More tests being performed means more work for your build server, so beefing up the hardware would not be a bad place to start. Make sure it has great network connectivity to your VCS repository, a whole chunk of RAM and as fast a disk as you can afford – it doesn’t have to be large or, dare I say it, server grade, as it’s just a holding space for the build and not critical data persistence.

Aim to have the gated check-in return in as short a period of time as possible: developers can be impatient and may find means to bypass it if they feel the gateway is holding them up. Ten minutes is probably too long; half that is a better ceiling.

Compiler warnings

Most compilers recognise more than one level of issue, but only stop for the most critical errors.  Less severe are warnings, which indicate potential rather than actual problems.  Warnings can be promoted to be treated as errors, so that potential problems must be addressed in order to ensure your code base is sound.

Unit tests

Good developers will often write tiny test harnesses around their code to act as jigs to prove their implementation in isolation from the rest of the system.  These unit tests, by their very nature, should be capable of running in any sequence, with no external dependencies, and are therefore fantastic candidates to include as part of your continuous integration process.  The idea is that they offer a degree of regression testing to make sure that not only does the new code work as intended but also that existing code has not been broken by mistake.
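
A minimal sketch of such a test using MSTest, the framework that ships with Visual Studio and integrates with TFS builds; the Calculator class is a hypothetical system under test:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CalculatorTests
    {
        // No external dependencies and no required ordering:
        // safe to run on every gated check-in.
        [TestMethod]
        public void Add_TwoPositiveNumbers_ReturnsSum()
        {
            var calculator = new Calculator();

            int result = calculator.Add(2, 3);

            Assert.AreEqual(5, result);
        }
    }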

Coding style compliance

Assuming your source code is expected to last for more than a few days, or may be seen by more than one pair of eyes at some point in its lifetime, coding standards can go a long way to ensuring long-term maintainability of your product.  If you do not have a house style, then try to adopt one from your technology stack vendor (or consider adapting yours to fall in line with it).  There are various tools to interrogate and aid compliance, such as StyleCop, and this too can be a failure point for your gated check-in.  Although coding style does not necessarily predict the likelihood of defects, it can contribute positively to the ability to extend or fix your code and therefore bring down the total cost of your software development life cycle.  Of course, these tools are not perfect, so developers should be running them locally before checking in and suppressing only the erroneous problems (and fixing those which it identifies correctly).

Anti-patterns

Tools similar in nature to coding style checkers can be applied to identify poor or inappropriate code and these too can be treated as critical errors.  Code Analysis is a favourite amongst .Net practitioners and works out of the box with Microsoft Team Foundation Server.  Just like the warnings and coding style checkers, pattern analytics engines can report false positives (wrongly suggesting a problem) and these may be suppressed.  Make sure that you have a high level of peer review that pays special attention to suppressions so that nobody accidentally (or deliberately) suppresses valid concerns.

Beyond gated check-in

Everything up to this point can run quite quickly and there’s little excuse for not including it as part of your gated check-in.  Now we get to some meatier machine-driven analysis that may be slow-running and could hold up the development process.  The checks that follow may be best run on a schedule rather than triggered by committing code to version control so that your magic five-minute version control processing time is not breached.

Deployment

This is stepping into the realms of DevOps and I shall talk much more about this in future posts.  However, deployment is often a painful process and that makes us shy away from it, deferring it until the last moment.  The paradox with doing this is that something we know can be difficult and problematic, both in terms of achieving it and the consequences of getting it wrong, gets little attention and precious little testing.  From day one, deployment really needs to be an automatable, highly reproducible process, so that not only can it get the attention it desperately requires, but it also gives us a platform to save time and improve quality in other areas.

Once you have refined your deployment or installation mechanism, it will need an environment in which to live.  Just like the build server acting as a rudimentary clean-room for the compilation, you need a staging environment free of artefacts and completely understood so that your CI can practise deployment.  This will need to be reset every time, which you can do either by also creating an uninstaller or by virtualisation.  The latter is the better approach; otherwise you are going to end up trying to test the cumulative sum of your uninstaller and your installer, and you won’t really have many clues as to which is causing the problems or, worse, real defects may be masked by, say, the uninstaller.

Integration tests

Integration tests are very similar to unit tests in that they are normally created by developers to prove a piece of code.  Unlike unit tests, these depend directly on external dependencies such as a database or a web feed.  Their very nature means that they may not be 100% reliable, and therefore such tests should rarely be part of your check-in process, but they work well on the scheduled run.

Functional tests

The bread and butter of traditional software testing is performing test scripts to see that the application behaves in the way intended.  The hard part is invariably writing good test scripts that follow the same patterns each time they are executed on working code; the easy part is running them.  With this in mind, whenever you have testers sitting around not writing test scripts, you are paying them to do something a computer could do instead.  Automating your test scripts means that regression testing can be run almost for free and can be run on pretty much every change to the underlying product, again catching bugs as early as is possible.  As with unit tests, the ability for a solution to be testable may directly influence the way in which it is implemented, so put this in the front of your architects’ and developers’ minds before anyone starts churning out solution code.

Non-functional tests

The pass criteria for your project will often include behaviour of the solution that is not an intended, direct or desired consequence of user interaction.  Performance considerations, scalability, security, the presence of audits and internal compliance with SLAs may all dictate whether your application is good enough to be released into the wild.  If it may inhibit sign-off of your solution, you need to test for it.  If you need to test for it, you should aim to test early and often.  Non-functional requirement (NFR) testing by its very nature may take significant periods of time and resources to run so a good approach is to have at least one separate staging environment dedicated to its execution.  If the idea of multiple staging environments is off-putting, consider how much cheaper a server is to buy (or rent) and run compared to potentially restarting your design and development to make major changes to the foundations.

NFR tests will be long-running, so run them on a loop: as soon as one iteration of NFR testing has completed, get the latest build and run it again.  If you have the hardware infrastructure, you may benefit from running multiple concurrent environments performing different NFR checks at the same time.

Side benefits

There are numerous other advantages that can be gained from a solid, automatically-policed CI set-up, such as ensuring good time management.  Modern CI systems allow the administrators to configure rules that allow check-ins only when the developer has declared the reason for change (i.e. associated the check-in with a user story, bug report or project task), and many will automatically create defect reports for each failed test.

As a rule of thumb, any process you can robotise will be cheaper in the long run, let alone more reliable and scalable than anything done by meat bags (developers, testers, managers, etc.).  If you have to repeat the same steps three or more times, you should probably consider automating them and including them in your CI.

The McLoughlin Gambit

“It works on my machine”
The McLoughlin Gambit is the defensive war cry of developers: “But it works on my machine!” That, think the tester, the delivery manager, the customer and the user, is all very well, but if it doesn’t work on my machine then as far as I’m concerned, it simply doesn’t work. Obviously, there is something special about this developer’s PC that grants him a working product denied to others further down the chain. The earlier this problem can be identified, the cheaper it is likely to be to fix: if the developer can catch it himself, he’s wasted only his own time and all the processes that follow won’t have to be repeated when he issues a “fix”. The more work that gets done on top of a bug, the harder it becomes to unpick, and it is quite common for subsequent development to lean on or even depend directly upon the underlying problems.

We can reduce the number of “bad builds” such as this flowing far downstream by applying Continuous Integration (CI) and Continuous Deployment at the very first step of development.

Kent Beck (Extreme Programming Explained, 1999) noted that the industry widely accepted that frequent integration of code was beneficial and that the logical conclusion is that we should integrate on every change. Modern ALM and CI tools such as Microsoft Team Foundation Server and JetBrains TeamCity make Beck’s goal achievable and feasible for even the smallest of commercial projects. Indeed, the only reasons not to put CI in place in your software development projects are if people’s time costs you nothing or if your developers never make mistakes.

Before CI, the typical software development life cycle relied on assumptions of correctness until proven otherwise. A major problem with this approach is that a significant amount of work would be undertaken, perhaps as far as completion (design, development), before the assumption was tested. CI says instead we can and will test this assumption every time the developer compiles or every time source code is checked into the version control system (VCS).

How does it work?
At its simplest, CI works by having the VCS check-in process trigger a compilation on a “clean” machine – one not tarnished by artefacts of software development or software developers. This machine is called a Build Server and can be any computer capable of running your compiler from a command line – if you’re on a crazy budget, even an obsolete desktop left over from that Windows XP upgrade may be enough. Simply have a batch file that cleans the build server’s local source code folder, gets the latest version of the source code from version control, compiles it and spits out a yea or nay depending on whether or not the compilation was successful. When you get a negative result, the check-in needs to be flagged as a bad build or reversed out so that no one starts building on top of it. CI tools today do all of this and much more, most usefully by examining the output of the build server’s work and actually rejecting bad builds so they don’t become part of the VCS repository. This is called a Gated Check-In and it is probably one of the best investments you can make, even if you have a team of one.

Next time
In a future post, I’ll talk about making your CI process run more efficiently and taking advantage of the CI infrastructure you have put in place.
