Archive

Archive for the ‘Testing’ Category

F# 3.0 on AppHarbor

November 1, 2012 6 comments

The online Analytics service for Rowing in Motion has some intense data processing requirements. A typical logfile that users may want to work with for a 90-minute training session is about 5 megabytes in compressed size. The in-memory models we need for data analysis encompass millions of data points and can easily exceed 30 MB of memory when fully unfolded.

It’s pretty clear we could not offer a good user experience when processing all this data locally on a device, so we decided to build the data analysis software as an online service. There are some other benefits to this model too, especially in the space of data archival and historical comparisons. F# excels at expressing our calculation models in a short and concise manner and makes parallelizing these calculations easy, which is crucial for achieving acceptable response times in our scenario.

Deciding to use F# was easy, but it turns out I faced some problems integrating with our cloud hosting platform of choice, AppHarbor. This post will explain what needs to be done to get F# code to compile on AppHarbor and also how to run unit tests there.

Compiling F# 3.0 code on AppHarbor

Visual Studio 2012 installs the F# “SDK” (there is no official one for F# 3.0) into C:\Program Files\Microsoft SDKs\F#, and that’s where the default F# project templates point:

<Import Project="$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" Condition=" Exists('$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets')" />

We will fix this (and another issue) by copying the whole “SDK” folder into our source repository at tools/F# (yes, everything). Next up, we will create a Custom.FSharp.Targets file that we will reference instead. Replace the Import line above with:

  <Import Project="$(SolutionDir)\build\RowingInMotion.FSharp.Targets" />

We will also have to delete the FSharp.Core reference from the fsproj file. Since the AppHarbor build machines don’t have FSharp.Core 4.3.0 in the GAC (or in a ReferenceAssemblies location), we have to include it in the project too. I copied mine from C:\Program Files (x86)\Reference Assemblies\Microsoft\FSharp to lib\FSharp.

The Custom.FSharp.Targets file we created earlier will take care of including the correct reference, as well as pointing Microsoft.FSharp.Targets to the F# compiler directory in our source tree.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  
  <!--Include a default reference to the correct FSharp.Core assembly-->
  <ItemGroup>
	  <Reference Include="FSharp.Core, Version=4.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
		  <HintPath>$(SolutionDir)\lib\FSharp\3.0\Runtime\v4.0\FSharp.Core.dll</HintPath>
	  </Reference>
  </ItemGroup>
  
  <!--Override the Path to the FSharp Compiler to point to our tool dir-->
  <PropertyGroup>
	<FscToolPath>$(SolutionDir)\tools\F#\3.0\Framework\v4.0</FscToolPath>
  </PropertyGroup>
  <Import Project="$(SolutionDir)\tools\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" />
  
</Project>

One last thing that needs to be fixed is that the F# compiler (itself written in F#) also needs a copy of FSharp.Core, so I simply dropped one right next to it. That’s it; now you should be able to compile F# 3.0 projects on AppHarbor. It’s nice that F# is “standalone” enough from the rest of the .NET Framework that it can be pulled apart this easily, but it would be even better if Microsoft offered an F# SDK that the guys at AppHarbor could install on their build servers.

Running F# xUnit tests on AppHarbor

AppHarbor uses Gallio to run unit tests. Unfortunately, Gallio is not able to detect static test methods. That means you cannot write tests as F# modules (whose functions compile to static methods). Instead you have to resort to declaring normal types with members, which is a bit heavier on the syntax and feels considerably less idiomatic (and it’s more typing…). I have filed a bug with the Gallio team, which can be tracked here: http://code.google.com/p/mb-unit/issues/detail?id=902. It should be noted that the xUnit Visual Studio runner can run F# xUnit tests just fine. We’ll see whether I need to switch to a more F#-specific testing framework in the future.

Categories: .NET, F#, Testing

SubSpec available on NuGet

May 27, 2011 Leave a comment

SubSpec is finally available as a NuGet package. See http://nuget.org/ for how to get started with NuGet. Once you have NuGet installed, it’s a simple matter of running Install-Package SubSpec or Install-Package SubSpec.Silverlight from the Package Manager console to get SubSpec integrated into your project.

Integrated into your project, you said? You mean “get the dll and reference it”? No: deployment as a separate dll is a thing of the past for SubSpec. SubSpec is an extremely streamlined extension of xUnit and as such fits into less than 500 lines of C# (excluding XML docs). This approach has several advantages:

  1. Faster builds: 500 lines of C# compile faster than resolving and linking against a library
  2. It fosters the creation of extensions (which is extremely common, at least in my usage of it)
  3. No need to get the source separately, you already have it!
  4. Experimental extensions can be easily shared as single files too, such as Thesis, AutoFixture integration…

I hope you like the new packages, please feel free to upvote SubSpec and SubSpec.Silverlight on the NuGet gallery and feel encouraged to write a review.

Multiple Test Runner Scenarios in MSBuild

April 15, 2011 Leave a comment

Scenario:

SubSpec is built for .NET as well as for Silverlight. For the .NET test suite we use the xUnit MSBuild task to execute the tests; for Silverlight we use a combination of StatLight and xunitContrib. Whenever you run a suite of tests, it’s usually desirable to have a failing test break the build; however, under all circumstances the complete suite should be run to give you accurate feedback.

Our build script looks something like this:
SubSpec.msbuild:

    <Target Name="Test" DependsOnTargets="Build">
		<MSBuild
            Projects="SubSpec.tests.msbuild"
            Properties="Configuration=$(Configuration)" />
    </Target>

SubSpec.tests.msbuild:

  <Target Name="Test" DependsOnTargets="xUnitTests;SilverlightTests"/>

  <Target Name="xUnitTests">
    <xunit
      Assemblies="@(TestAssemblies)"/>
  </Target>

  <Target Name="SilverlightTests">
    <Exec
      Command=""tools\StatLight\StatLight.exe" @(SilverlightTestXaps -> '-x="%(Identity)"', ' ') --teamcity" />
  </Target>

Problem:

When using each of the test runners (the xUnit MSBuild task, StatLight) in isolation with multiple assemblies, they do the right thing: run all tests, fail if at least one test failed, succeed otherwise. Now imagine we have a test succeeding under .NET but failing under Silverlight. When we run xUnit first, we get the desired result. But if StatLight were to run before xUnit, we would never know whether the .NET suite would actually succeed, because StatLight stops the build.

(Non-)Solutions:

The first (and most intuitive) idea was to move the test targets into a separate MSBuild project and call MSBuild on that project with ContinueOnError="true":

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="true"/>
  </Target>

  <Target Name="Test" DependsOnTargets="Foo;Bar">
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
</Project>

But this yields only Foo as the error (I wanted to see error: Foo and error: Bar).

MSDN says about ContinueOnError:

Optional attribute. A Boolean attribute that defaults to false if not specified. If ContinueOnError is false and a task fails, the remaining tasks in the Target element are not executed and the entire Target element is considered to have failed.

This is probably why ContinueOnError doesn’t make sense on the MSBuild task: it would only allow another task after the MSBuild task in “Build” to execute. We can confirm this:

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="true"/>
    <Message Text="Some Message"/>
  </Target>
  

And we see Foo as well as Some Message. At this point, it was clear to me that I want a target that fails if any of its tasks failed, but executes all of them.

In MSDN, we discover StopOnFirstFailure:

true if the task should stop building the remaining projects as soon as any one of them may not work; otherwise, false.

If we specified separate projects, it would work, but we’re in the same project, so unfortunately this won’t help.

The next idea was to use CallTarget with ContinueOnError=”true”, like:

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="false"/>
        <Message Text="I should not be executed"/>
  </Target>

  <ItemGroup>
    <TestTargets
        Include="Foo;Bar" />
  </ItemGroup>

  <Target Name="Test">
    <CallTarget Targets="%(TestTargets.Identity)" ContinueOnError="true"/>
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
  

However, “I should not be executed” appears in the output log. What happened? Build called the MSBuild task with ContinueOnError="false" (the default). Because all tasks in Test were ContinueOnError="true", no error bubbled up to MSBuild and it completed without error. This is dangerous, because it makes our build appear to have succeeded when it hasn’t.

The next option I tried was using RunEachTargetSeparately:

Gets or sets a Boolean value that specifies whether the MSBuild task invokes each target in the list passed to MSBuild one at a time, instead of at the same time. Setting this property to true guarantees that subsequent targets are invoked even if previously invoked targets failed. Otherwise, a build error would stop invocation of all subsequent targets. The default value is false.

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Foo;Bar"
      RunEachTargetSeparately="true"/>
  </Target>

  <Target Name="Test" DependsOnTargets="Foo;Bar">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>
  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
  

This gives us exactly what we want, but it doesn’t allow test runs to be parallelized. To achieve that, we need to put each test target in a separate project file. It turns out that, using this strategy, we don’t need to worry about controlling our failure behaviour: both projects get built and the MSBuild task reports an error when any of the projects has failed:

  <Target Name="Build">

  </Target>

  <Target Name="Test">
    <MSBuild
      Projects="SubSpec.test.msbuild;SubSpec.Silverlight.test.msbuild"/>
  </Target>
  

What’s the alternative? The alternative is capturing the exit codes of the runners, as described in http://stackoverflow.com/questions/1059230/trapping-error-status-in-msbuild/1059672#1059672, but I don’t like that approach since it’s a bit messy. The only thing we give up by using multiple projects is that it’s harder to get an overview of what happens where, but I think in this case the split might also aid a proper separation of concerns.

Categories: .NET, MSBuild, Testing, Tools

Running MSTest 9 on a CI Server without installing Visual Studio

April 2, 2011 3 comments

Disclaimer: I would have loved to migrate to a different framework (and I would strongly advise you to do so if you’re not a full-stack Team System shop); however, I have a couple of consultants on that project who are not very experienced with testing, and having MSTest built in has compelling advantages. Having said that, I know that Gallio has nice VS integration that you can use to run any framework’s tests inside Visual Studio’s test windows, but that would require each developer to install Gallio on their machine (which is bad too).

Without reiterating the tirades of hate Microsoft has earned for making it impossible to run MSTest on a build server without installing Visual Studio, I want to present what I have compiled from several sources to get it working for me:

  1. See this post on Stackoverflow for an overview of the issue and possible solutions
  2. Mark Kharitonov has compiled a basic set of instructions that allow installing MSTest on a Build Server

My setup consists of a TeamCity build agent running on Windows Server 2008 R2 x64, so I needed to change all registry keys in the .reg file to point at HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ instead of HKEY_LOCAL_MACHINE\SOFTWARE\.

Next, I am using Gallio to run the tests instead of executing them directly using MSTest. Even though Gallio is considerably slower than native MSTest, which you could also use with a built-in TeamCity build step, there are a couple of advantages:

  1. Pretty Reports
  2. No need to deal with test run configurations and test metadata (I’ve got no idea what they are or why I would need them)
  3. TeamCity picks up the test results properly
  4. I can use an MSBuild script to pick up my test dlls via wildcards; no need for extra MSTest build tasks.

As a reference, here’s my MSBuild script for running the tests using Gallio:

<Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    
	<!-- This is needed by MSBuild to locate the Gallio task -->
    <UsingTask AssemblyFile="tools\Gallio\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
    
	<!-- Specify the tests assemblies -->
    <ItemGroup>
        <TestAssemblies Include="src\test\**\bin\$(Configuration)\*Tests.dll" />
	</ItemGroup>
    
	<Target Name="Test">
        <Gallio 
            Files="@(TestAssemblies)"
            IgnoreFailures="true" 
            ReportDirectory="build\"
            ReportTypes="html"> 
            <!-- This tells MSBuild to store the output value of the task's ExitCode property
                 into the project's ExitCode property -->
            <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
        </Gallio>
		<Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
	</Target>
	
</Project>
Categories: .NET, Testing, Tools

SubSpec: Assert and Observation

August 24, 2010 Leave a comment

When writing a test, we should make sure to have only one Assertion per test. The reasoning behind this constraint is simple: if we use multiple assertions and the first one fails, we cannot see the results of the other ones.


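Something along these lines (a sketch using a hypothetical IntStack class, standing in for the stack type with the IsEmpty() method used below):

using System.Collections.Generic;
using Xunit;

// Hypothetical stand-in for the stack class under test in this post.
public class IntStack
{
    private readonly Stack<int> items = new Stack<int>();

    public bool IsEmpty() { return items.Count == 0; }
    public void Push(int value) { items.Push(value); }
    public int Pop() { return items.Pop(); }
    public int Count { get { return items.Count; } }
}

public class StackFacts
{
    [Fact]
    public void PushingAnElementOntoANewStack()
    {
        // Arrange
        var stack = new IntStack();

        // Act
        stack.Push(42);

        // Assert: three assertions in a single test. If the first one fails,
        // xUnit never reports the outcome of the remaining two.
        Assert.False(stack.IsEmpty());
        Assert.Equal(42, stack.Pop());
        Assert.True(stack.IsEmpty());
    }
}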

In this example, if the assertion on stack.IsEmpty() fails, we are unable to retrieve the results of the next two Assertions. We can see that our test consists of three parts:

  1. Arrange the System Under Test (SUT)
  2. Act on SUT
  3. Assert the SUT’s state has changed accordingly.

If we want to have one Assertion per test, we need to write three tests, duplicating the Arrange and Act for each test. As always, repetition is suboptimal, so let’s see what we can do about it.

SubSpec’s core idea is that each test you write (we call it a Specification) consists of the above-mentioned primitives. Each primitive can be represented by an action and a corresponding description. Using fluent syntax, a SubSpec Specification for the above-mentioned scenario looks like this:
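
Roughly like this, reusing the hypothetical IntStack from above (the attribute and extension-method names are sketched from memory and may differ slightly from the current SubSpec source):

using Xunit;

public class StackSpecs
{
    // Marks a method that declares a SubSpec Specification.
    [Specification]
    public void PushingAnElementOntoANewStack()
    {
        IntStack stack = null;

        "Given a new stack"
            .Context(() => stack = new IntStack());

        "with an element pushed onto it"
            .Do(() => stack.Push(42));

        // Each Assert below becomes its own xUnit test;
        // Context and Do are repeated for every one of them.
        "the stack is no longer empty"
            .Assert(() => Assert.False(stack.IsEmpty()));

        "popping returns the pushed element"
            .Assert(() => Assert.Equal(42, stack.Pop()));

        "the stack contains exactly one element"
            .Assert(() => Assert.Equal(1, stack.Count));
    }
}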

Each of the primitive test actions is represented by a description and a lambda statement. The big difference from a traditional test is that SubSpec knows about these primitive actions and can compose them to generate three tests from the above Specification, one for each Assertion. What it does under the hood is pretty much what you’d expect: SubSpec repeats the Context and Do action for each Assertion and wraps each of them inside its own test. That’s the power of declarative tests!

This is one of the features SubSpec has supported since its beginning. But there’s one thing we can improve about the above example. We have three Assertions in our test, but only one of them is destructive. You guessed correctly: it is the second one. By popping an element from the stack, it modifies the system under test. This is a more general problem: although we should try to avoid this situation, sensing something in our SUT cannot always be made side-effect free. (Anyone feel reminded of quantum physics? 😀 )

The first and third Assertion, on the other hand, are side-effect free. If the Context and Do action were expensive (such as when an external resource is involved), repeating them for each of our isolated Assertions would be a waste of time. But tests need to be as fast as possible. What can we do about it?

Given the distinction between a destructive Assertion and a side-effect-free Observation we can check against our SUT, we should split our Assert primitive accordingly. An Assertion is a destructive operation on our SUT, so the SUT needs to be recreated for each Assertion we check. For an Observation, on the other hand, the SUT can be shared. Let’s get back to our example:

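Roughly like this, inside the same Specification method as above, with the two side-effect-free checks turned into Observations:

"Given a new stack"
    .Context(() => stack = new IntStack());

"with an element pushed onto it"
    .Do(() => stack.Push(42));

// Observations share a single SUT instance: Context and Do run only once for all of them.
"the stack is no longer empty"
    .Observation(() => Assert.False(stack.IsEmpty()));

"the stack contains exactly one element"
    .Observation(() => Assert.Equal(1, stack.Count));

// The destructive check stays an Assert and gets its own, freshly built SUT.
"popping returns the pushed element"
    .Assert(() => Assert.Equal(42, stack.Pop()));
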
The Context and Do action are executed once for each Assertion (once in this case) and once for all Observations. Given the declarative nature of SubSpec, we can easily mix and match Observations and Assertions in one Specification and still get a single test for each. Pretty cool, isn’t it?

The distinction between Assert (verb) and Observation (noun) is intentional to highlight the difference between those two concepts.

Categories: .NET, SubSpec, Testing

SubSpec: A declarative test framework for Developers

August 23, 2010 Leave a comment

In my last post I described Acceptance Testing and why it is an important addition to the developer-centric way of integration and unit testing.

I also described that Acceptance Tests should be as expressive as possible and therefore benefit from being written in a declarative style. Learning F# at the moment, I have come to the conclusion that writing declarative code is the key to avoiding accidental complexity (complexity in your solution domain that is not warranted by complexity in your problem domain). But not only acceptance tests benefit from a declarative style; I also think it goes a long way toward making unit and integration tests easier to understand.

SubSpec was originally written by Brad Wilson and Phil Haack. Their motivation was to write a framework that enables xUnit-based BDD-style testing. Given my desire to support a declarative approach to writing tests at all layers, I decided to fork the project and see what can be accomplished. I’m actively working on it, and the code can be found on my bitbucket site. I like the idea of having a vision statement, so here is mine:

SubSpec allows developers to write declarative tests operating at all layers of abstraction. SubSpec consists of a small set of primitive concepts that are highly composable. Based on the powerful xUnit testing framework, SubSpec is easy to integrate with existing testing environments.

Here’s a short teaser to show you how expressive a SubSpec test is:
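
Something along these lines, sketched inside a Specification method against a hypothetical Account class:

Account account = null;

"Given an account with a balance of 10"
    .Context(() => account = new Account(10));

"when 5 is deposited"
    .Do(() => account.Deposit(5));

"the balance is 15"
    .Observation(() => Assert.Equal(15, account.Balance));

"withdrawing more than the balance throws"
    .Assert(() => Assert.Throws<InvalidOperationException>(() => account.Withdraw(100)));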

Appreciating Acceptance Testing

August 22, 2010 Leave a comment

One of the most important things I learnt to appreciate during my internship at InishTech is the value of Acceptance Testing.

Let me give a short definition of what I understand Acceptance Testing to be:

There are different levels of testing you can do on your project, and they usually differ by the level of abstraction they work at. At the bottom you have Unit tests, tests that cover individual units in isolation. The next level is Integration testing. Integration tests exercise components of a system and usually cover scenarios where external resources are involved. Above that we have Scenario or Acceptance testing. Acceptance testing works at the level at which a user of your system operates. If you work in an agile process like we do at InishTech, you can translate each of your User Stories into an Acceptance test that verifies the story has been properly implemented. I don’t want to go deep into differentiating between these levels; the boundaries between them are blurry, but all of them have a good raison d’être.

Writing Acceptance tests is no different from writing any other kind of test. But what makes them so helpful is that they serve as a high level specification for the functionality of your system.

Acceptance tests will help you to:

  • Specify the behavior of your system from a user’s perspective
  • Make sure that functional requirements are met
  • Document expected behavior of your system
  • Discover bugs that you may else only find during manual tests

Acceptance tests will not help you to:

  • Locate Bugs

The key to success with acceptance testing is to write acceptance tests as declaratively as possible: test what is done instead of how it’s done. If this reminds you of BDD (Behavior-Driven Development), you are correct, because this is exactly where the drive towards acceptance testing comes from.

Categories: Testing

Notes on .NET Testing Frameworks

August 19, 2010 Leave a comment

When I was introduced to TDD, the first testing framework I used was MSTest. Why? Because it was the most readily available option and it had the nice (beginner) benefit of Visual Studio integration. Soon after that, when I first hit the limitations of MSTest, MbUnit became my testing framework of choice. At the time I evaluated the existing testing frameworks, I came to the conclusion that MbUnit was the project with the most sophisticated facilities for writing tests at all layers (unit, integration, scenario) and was growing at a remarkable pace.

A year later, at InishTech, I started to use xUnit. There are several things about its design that I like better than what I have seen in other testing frameworks so far:

  • Classes containing tests can be plain C# classes, no deriving from a base class, no special attributes
  • No setup/teardown methods, instead convention based “Fixture” injection (using IUseFixture<T>)
  • No useless Assert.x() overloads that take custom messages; instead, well-formatted test output
  • Assert.Throws(Func<>) instead of [ExpectedException] attributes; gives finer-grained control over where exactly an exception is expected
  • Clear, concise terminology: a test is a [Fact], a parameterized test is a [Theory] (see the sketch after this list)
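
A minimal sketch of the last two points (assuming the xUnit.net 1.x namespaces, where [Theory] and [InlineData] live in the xunit.extensions assembly; the Divide helper is just a hypothetical stand-in):

using System;
using Xunit;
using Xunit.Extensions; // home of [Theory]/[InlineData] in xUnit.net 1.x

// A plain class: no base class, no class-level attributes required.
public class CalculatorFacts
{
    [Fact]
    public void DividingByZeroThrows()
    {
        // Assert.Throws pins the expectation to this exact call,
        // unlike a method-wide [ExpectedException] attribute.
        Assert.Throws<DivideByZeroException>(() => Divide(1, 0));
    }

    [Theory]
    [InlineData(6, 3, 2)]
    [InlineData(9, 3, 3)]
    public void DividesWholeNumbers(int dividend, int divisor, int expected)
    {
        Assert.Equal(expected, Divide(dividend, divisor));
    }

    private static int Divide(int dividend, int divisor)
    {
        return dividend / divisor;
    }
}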

To come to a conclusion, I think xUnit is a strong and lightweight testing framework.
MbUnit carries the massive overhead of the Gallio Platform, making its test runner considerably slower than xUnit’s. What I especially like about xUnit is that it is an opinionated framework: it tries to force you into a certain way of thinking and thereby avoid common mistakes. From an extensibility point of view, xUnit has a lot to offer, and I find the clear API and the few concepts it is built on compelling. Unfortunately I have no experience extending MbUnit, but extending xUnit is really, really easy.

Categories: .NET, Open Source, Testing

Whitebox Testing and Code Coverage

August 8, 2010 Leave a comment

In general I prefer blackbox testing. This means no testing of private methods etc.; units are only tested by means of their publicly available interfaces. This prevents writing brittle tests that are bound to a specific implementation. On the other hand, blackbox testing is usually not sufficient when aiming at high test coverage. Even though high test coverage does not automatically equal bug-free code, there are good arguments for it; Patrick Smacchia (the guy behind NDepend) has a great post about it. Another advantage of whitebox testing is increased control over the system under test: with fine-grained assertions, we improve our ability to locate errors in our code.

The problem with whitebox testing, however, is that it leads to brittle tests. Because whitebox testing ties tests to a specific implementation of the system under test, chances are your tests will break whenever the implementation changes. So what can we do about that?

“Un-brittle” Whitebox Testing

One solution is to use an automated approach for generating whitebox tests. This may sound weird at first, but Microsoft has put great effort into its research project Pex and Moles, which combines analyzing the system under test with generating parameterized tests. Using the analysis results, Pex tries to generate a set of inputs for these tests that exercises as much code of the system under test as possible. Because these tests can be generated automatically, they can also be regenerated every time your implementation changes.

Another technique is using CodeContracts (or their old-school equivalent, Debug.Assert), which were introduced with .NET 4.0. CodeContracts mitigate the issue of brittle tests associated with whitebox testing by shifting assertions into the implementation code. Because the contract is part of the implementation, it is easier to keep it in sync when the implementation changes. However, CodeContracts are useless without tests that exercise them. If you use both tools in combination, you can get finer-grained control over your code. CodeContracts don’t particularly help you with increasing code coverage, but they can help you ensure code correctness whenever the code is exercised, and they also increase your ability to locate errors in your code. This is a great enhancement when using scenario (sometimes also called system) tests.
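
For illustration, a minimal sketch of such a contract (the checks are only enforced at run time when the CodeContracts rewriter is enabled for the build):

using System.Diagnostics.Contracts;

public class Account
{
    private int balance;

    public int Balance
    {
        get { return balance; }
    }

    public void Deposit(int amount)
    {
        // Precondition: with contract checking enabled, this is verified every
        // time the method is exercised, whether by a unit test, a scenario test
        // or production code.
        Contract.Requires(amount > 0);

        // Postcondition: the assertion lives in the implementation itself, so it
        // does not have to be rewritten when implementation details change.
        Contract.Ensures(Balance == Contract.OldValue(Balance) + amount);

        balance += amount;
    }
}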

Conclusion

To conclude these thoughts on whitebox testing, the following can be said:

There are two separate motivations to employ whitebox testing:

  1. increasing code coverage
  2. getting fine grained assertions

Whitebox testing makes our tests brittle. We can use test generation to get around this issue. Using code contracts enables whitebox testing without creating brittle tests, but won’t increase test coverage.

Categories: .NET, Testing

GHUnit: Writing Custom Assert Macros

October 24, 2009 Leave a comment

When I evaluated unit testing frameworks for iPhone development, one of the reasons I chose GHUnit was that it has more sophisticated assert macros than the other available frameworks. Still, there are some asserts I missed, so I simply took the time to write my own.

Unlike test frameworks in the .NET or Java ecosystems, the Objective-C frameworks all provide preprocessor macros to implement assertions instead of a static class with Assert methods. A typical assert macro looks like the following:


#define GHAssertEquals(a1, a2, description, ...) \
do { \
	@try {\
		if (@encode(__typeof__(a1)) != @encode(__typeof__(a2))) { \
			[self failWithException:[NSException ghu_failureInFile:@"Type mismatch"...]; \
		} else { \
			if (![a1encoded isEqualToValue:a2encoded]) { \
				[self failWithException:[NSException ghu_failureInEqualityBetweenValue...]; \
			} \
		} \
	} \
	@catch (id anException) {\
		[self failWithException:[NSException ghu_failureInRaise...]; \
	}\
} while(0)

The body of the macro consists of a do {} while(0) loop, which is used to provide local scope for variables needed to implement the assertion. The code is clearly executed only once, even though a loop construct is used. The macro first checks the necessary preconditions, in this case the argument types. This is necessary because a macro is a simple text substitution rather than a true method call whose argument types the compiler checks (that’s why I don’t like assertions being implemented as macros and would rather see assert methods). Next comes the actual assertion. The type check and the actual assertion are both wrapped in a @try {} @catch() {} block, so any error occurring in the macro code also makes the test fail (a real macro would have a lot of code for preparing exception descriptions etc.).

I consider the GHUnit macros a very useful set of primitives that can be combined to construct more complicated assertions:


#define GHFileAssertNotEmpty(file) \
do { \
	GHAssertTrue([[NSFileManager defaultManager] fileExistsAtPath:file], nil); \
	NSString* written = [NSString stringWithContentsOfFile:file]; \
	GHAssertNotNil(written, nil); \
	GHAssertGreaterThan((int)[written length], 0, nil); \
} while (0)

Note that I don’t need to take care of all the nasty details that are needed to write a proper primitive macro as outlined above. The only disadvantage of a macro like the one above is locating the failed assertion, as the exception thrown might not be directly obvious from the code using the macro. That is not really a disadvantage of this method itself, but rather inherent to all macros. Xcode’s right-click Jump to Definition comes to the rescue here.

Categories: GHUnit, iPhone, Open Source, Testing