Archive for the ‘Tools’ Category

Kiwi as a static framework or Universal Library

January 27, 2012

A problem commonly encountered when using open-source iOS frameworks is the lack of a fully functional framework facility in Xcode. Part of the issue is that Apple does not allow dynamic linking on iOS devices; the other is that libraries need to support two different device architectures, armv6 (up to the iPhone 3G) and armv7 (iPhone 3GS and later). On top of that, we also need a binary that runs on the simulator (x86).

The easiest solution to the library problem in Xcode is using project dependencies to build libraries in the configuration you need them. When taking a source dependency is not desirable, you are pretty much left on your own if the OSS project doesn’t provide binaries.

Fortunately enough, it’s not too difficult to build your own universal frameworks. Below are the steps I use for building a version of Kiwi:

  1. Grab the Universal Framework Xcode templates from
  2. Install the Fake framework flavor (the Real framework flavor should work as well)
  3. Create a new Xcode project from the Fake framework template
  4. Add all source files of Kiwi (make sure to check the “Copy to destination group folder” box)
  5. Select the Kiwi static library target, then Project Editor > Build Phases > Copy Headers; select all headers in the Project group, right-click and choose “Move to Public”
  6. Select the Kiwi static library target, then Project Editor > Build Phases > Link Binary With Libraries, and add SenTestingKit.framework
  7. Build
  8. Go to the Project Navigator (Cmd-1) and select Products > Kiwi.framework; right-click and select “Show in Finder”
  9. You should see two folders, Kiwi.framework and Kiwi.embeddedframework – Kiwi.framework is the one we need
  10. Copy the Kiwi.framework folder into your lib folder
  11. Open the project you want to use Kiwi.framework in, select your target, then Project Editor > Build Phases > Link Binary With Libraries, click + and add Kiwi.framework from your lib folder
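For reference, these GUI steps lean on what the template scripts automate under the hood: build once per SDK, then merge the slices with lipo. The sketch below is an illustration only; the target name, configuration, and output paths are assumptions, not taken from the template.

```shell
# Sketch of a manual universal build (target name and paths are assumptions).
# Builds the static library for device and simulator, then merges with lipo.
build_universal() {
  xcodebuild -target Kiwi -configuration Release -sdk iphoneos
  xcodebuild -target Kiwi -configuration Release -sdk iphonesimulator
  lipo -create \
    build/Release-iphoneos/libKiwi.a \
    build/Release-iphonesimulator/libKiwi.a \
    -output build/libKiwi-universal.a
  # Verify the merged slices:
  lipo -info build/libKiwi-universal.a
}
```

Running lipo -info on the result should list the armv6/armv7 device slices alongside i386 for the simulator.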

That’s it. Takes less than two minutes once you know the trick.

Categories: iPhone, Objective-C, Tools

Joomla backups made easy

July 2, 2011

This post sums up the backup strategy I’m using for the website of my next project, RowMotion. RowMotion is hosted on a Joomla installation. My hosting provider is nice enough to provide a decent pre-built mysqldump-based backup script, which can be found here. Did I mention it does email notifications too? All the instructions necessary to set it up are mentioned there as well.

Okay, so now we have a nice backup script for our Joomla database that needs to be triggered, and the resulting backup file downloaded. Since the web request needs to be authenticated, I figured it would be easiest to use some PowerShell magic to leverage the .NET WebClient. Here’s the full script:

$username = "un"
$password = "pw"
$backupDir = "C:\backup\"

$web_client = New-Object System.Net.WebClient;
$web_client.Credentials = new-object System.Net.NetworkCredential($username, $password)

$response = $web_client.DownloadString("");
echo "Response:"
echo $response

# Match the first URL ending in .gz
$regex = 'http[^"]*\.gz';

$response -match $regex
$dumpUrl = $matches[0]

echo "URL:"
echo $dumpUrl

$fn = [System.IO.Path]::GetFileName($dumpUrl);

echo "Filename:"
echo $fn

$target = [System.IO.Path]::Combine($backupDir, $fn);

echo "Target:"
echo $target

$web_client.DownloadFile($dumpUrl, $target);

Sure enough, this is neither pretty nor the most robust, but it’s the simplest thing that could possibly work (and it does). Next, we need to schedule this task with the Windows Task Scheduler. I’m running this on my server together with all the other backup tasks.
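For comparison, the same match-and-derive-filename step can be sketched in plain shell; the sample page content and URL below are made up for illustration, and the real page would come from curl with credentials.

```shell
# Simulated authenticated fetch (in reality: curl -u "$username:$password" "$url").
page='<a href="http://example.com/backups/site-2011-07-02.sql.gz">download</a>'

# Pick out the first URL ending in .gz, then derive the local filename.
dump_url=$(printf '%s' "$page" | grep -o 'http[^"]*\.gz' | head -n 1)
fn=$(basename "$dump_url")

echo "$fn"   # → site-2011-07-02.sql.gz
```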

Be sure to enter “powershell” as the command and -noprofile -command "C:\backupJobs\yourps.ps1" as the argument.

Categories: Powershell, Tools

Mercurial Server using hgweb.cgi on Ubuntu

June 30, 2011

In a previous post, we set up a virtual machine template for an Ubuntu server. Now that we have set up a clone of this machine, it is time to set up a Mercurial repository server.


Mercurial provides an easy-to-use repository server via a Python CGI script. Mercurial’s protocol facilitates fast transfers over HTTP, making it superior to an SSH-based solution (such as git’s) when weighing protocol overhead against ease of use. As the web server, we will use lighttpd. This guide follows the instructions published here.

Installing Python

This one is simple:

ubuntu@localhost: sudo apt-get install python

Installing Lighttpd

To install lighttpd, run:

ubuntu@localhost: sudo apt-get install lighttpd

Next, we need to create a specific configuration for our Mercurial CGI script. We need to redirect all incoming requests to the CGI script, and then we apply some URL rewrite magic to remove the ugly hgweb.cgi from our URLs. The hgweb.cgi script will be served from /var/www/hgweb.cgi. If you use a different location, make sure to chown it to www-data and chmod +x it (all described in the Mercurial wiki). I created my config like this:
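The deployment amounts to a copy plus the execute bit (with the chown to www-data done separately under sudo). A small sketch of just that step; the helper name is my own, not from the Mercurial wiki:

```shell
# Deploy hgweb.cgi into the web root and make it executable for mod_cgi.
# (In the real setup, also run: sudo chown www-data:www-data "$dest/hgweb.cgi")
install_cgi() {
  local src="$1" dest="$2"
  cp "$src" "$dest/hgweb.cgi"
  chmod +x "$dest/hgweb.cgi"   # lighttpd's mod_cgi requires the execute bit
}
```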

ubuntu@localhost:~$ sudo vi /etc/lighttpd/hg.conf
ubuntu@localhost:~$ cat /etc/lighttpd/hg.conf 
url.rewrite-once = (
  "^([/?].*)?$" => "/hgweb.cgi$1",
  "^/([/?].*)?$" => "/hgweb.cgi$1"
)

$HTTP["url"] =~ "^/hgweb.cgi([/?].*)?$" {
  server.document-root = "/var/www/"
  cgi.assign = ( ".cgi" => "/usr/bin/python" )
}

Next is the lighttpd config, which needs to include our hg.conf and enable mod_cgi:

ubuntu@localhost:~$ cat /etc/lighttpd/lighttpd.conf
include "hg.conf"

server.modules = (
  "mod_cgi",
  "mod_rewrite"
)

Further configuration Tricks

You should force hgweb.cgi to serve UTF-8 content. Fortunately enough, this is as simple as adding (or uncommenting) the following lines to hgweb.cgi:

import os
os.environ["HGENCODING"] = "UTF-8"

You will also need an hgweb.config right next to hgweb.cgi and reference it from there (again, described in the Mercurial wiki). For reference, my configuration serves all repos under /var/hg/repos (and subdirectories) and allows anonymous push (I’m authenticated via VPN policy):

ubuntu@localhost:~$ cat /var/www/hgweb.config
[paths]
/ = /var/hg/repos/**

[web]
baseurl = /
allow_push = *
push_ssl = false

Final Words

That’s all there is to it. To make the server available via DNS, you need to make sure the server’s hostname is registered with your local DNS. In my case, I simply added a static record for it.

GoodReader and Mercurial for the Ultimate Student Workflow

June 8, 2011

Being the proud owner of a shiny new iPad 2 for the last month or so, I have found it to be a valuable companion at university. No, not for browsing Stack Overflow or keeping up with RSS and e-mail, but for managing lecture slide decks, assignments etc.

In the past semester, I started managing my documents in a Mercurial repository that is synced against my private Mercurial server installation. This made it easier to keep my multiple devices synchronized (who would need that iCloud thing…). My first attempt at using the iPad for these tasks was iBooks. iBooks is not bad, but it’s so utterly limited that it gets really frustrating from time to time. Especially annoying is that it has absolutely no file management capabilities whatsoever.

I found GoodReader to be a great alternative. It has excellent file management and supports annotating pdfs. But the best thing is its support for synchronizing your files:

  1. Make sure your iPad is connected to the same network as your host computer
  2. Launch GoodReader, open the WiFi sync mode via the WiFi symbol
  3. Mount http://yourIpad’sIpAdress:8080 as a network folder
  4. Open a terminal, cd into your mount point and run hg clone /yourCentralRepository
  5. The next time you want to sync, run hg pull -u
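Steps 4 and 5 boil down to clone on first sync, pull and update afterwards. A tiny sketch of that decision; the helper name and paths are mine, not from GoodReader or Mercurial:

```shell
# Print the hg command appropriate for the mounted folder:
# the first sync clones, every later sync pulls and updates.
sync_cmd() {
  local mount="$1" repo="$2"
  if [ -d "$mount/.hg" ]; then
    echo "hg pull -u"
  else
    echo "hg clone $repo $mount"
  fi
}
```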
It takes less than 5 minutes to set up, and is a pretty damn smart workflow!

Categories: iPhone, Tools

SubSpec available on NuGet

May 27, 2011

SubSpec is finally available as a NuGet package. See the NuGet site for how to get started with NuGet. Once you have NuGet installed, it’s a simple matter of running Install-Package SubSpec or Install-Package SubSpec.Silverlight from the Package Manager Console to get SubSpec integrated into your project.

Integrated into your project, you say? You mean “get the dll and reference it”? No, in fact, deployment as a separate dll is a thing of the past for SubSpec. SubSpec is an extremely streamlined extension of xUnit, and as such it fits into less than 500 lines of C# (excluding XML docs). This approach has several advantages:

  1. Faster builds: 500 lines of C# compile faster than resolving and linking against a library
  2. It fosters the creation of extensions (which is extremely common, at least in my usage of it)
  3. No need to get the source separately, you already have it!
  4. Experimental extensions can be easily shared as single files too, such as Thesis, AutoFixture integration…

I hope you like the new packages, please feel free to upvote SubSpec and SubSpec.Silverlight on the NuGet gallery and feel encouraged to write a review.

Multiple Test Runner Scenarios in MSBuild

April 15, 2011


SubSpec is built for .NET as well as for Silverlight. For the .NET test suite, we use the xUnit MSBuild task to execute the tests; for Silverlight, we use a combination of StatLight and xunitcontrib. Whenever you run a suite of tests, it’s usually desirable to have a failing test break the build; however, under all circumstances the complete suite of tests should be run to give you accurate feedback.

Our build script looks something like this:

    <Target Name="Test" DependsOnTargets="Build">
        <MSBuild Projects="$(MSBuildProjectFile)"
                 Targets="xUnitTests;SilverlightTests"
                 Properties="Configuration=$(Configuration)" />
    </Target>

    <Target Name="xUnitTests">
        <!-- the xunit task invocation is reconstructed; adjust to your item names -->
        <xunit Assembly="%(TestAssemblies.Identity)" />
    </Target>

    <Target Name="SilverlightTests">
        <Exec Command="&quot;tools\StatLight\StatLight.exe&quot; @(SilverlightTestXaps -> '-x=&quot;%(Identity)&quot;', ' ') --teamcity" />
    </Target>

When using each of the test runners (xUnit MSBuild task, StatLight) in isolation with multiple assemblies, they do the right thing: run all tests, fail if at least one test failed, succeed otherwise. Now imagine we have a test succeeding under .NET but failing under Silverlight. When we run xUnit first, we get the desired result. But if StatLight were to run before xUnit, we would never know whether the .NET suite would actually succeed, because StatLight stops the build.


The first (and most intuitive) idea was to move the test targets into a separate MSBuild project and call MSBuild on that project with ContinueOnError=”false”:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Target Name="Build">
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Test" ContinueOnError="false" />
  </Target>

  <Target Name="Test" DependsOnTargets="Foo;Bar" />

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>

</Project>

But this yields only Foo as the error (I wanted to see error: Foo and error: Bar).

MSDN says about ContinueOnError:

Optional attribute. A Boolean attribute that defaults to false if not specified. If ContinueOnError is false and a task fails, the remaining tasks in the Target element are not executed and the entire Target element is considered to have failed.

This is probably why it doesn’t make sense on the MSBuild task; it would only allow another task after the MSBuild task in “Build” to execute. We confirm this by:

  <Target Name="Build">
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Test" ContinueOnError="true" />
    <Message Text="Some Message"/>
  </Target>

And we see Foo as well as Some Message. At this point, it was clear to me that I want a target that fails if any of its tasks failed, but executes all of them.

In MSDN, we discover StopOnFirstFailure:

true if the task should stop building the remaining projects as soon as any one of them may not work; otherwise, false.

If we specified separate projects, it would work, but we’re in the same project, so unfortunately this won’t help.

The next idea was to use CallTarget with ContinueOnError=”true”, like:

  <Target Name="Build">
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Test" />
    <Message Text="I should not be executed"/>
  </Target>

  <ItemGroup>
    <TestTargets Include="Foo;Bar" />
  </ItemGroup>

  <Target Name="Test">
    <CallTarget Targets="%(TestTargets.Identity)" ContinueOnError="true"/>
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>

However, “I should not be executed” appears in the output log. What happened? Build called MSBuild with ContinueOnError=false (the default). Because all tasks in Test were ContinueOnError=true, no error bubbled up to MSBuild and it executed without error. This is dangerous, because it makes our build appear to succeed when it’s not.

The next option I tried was using RunEachTargetSeparately:

Gets or sets a Boolean value that specifies whether the MSBuild task invokes each target in the list passed to MSBuild one at a time, instead of at the same time. Setting this property to true guarantees that subsequent targets are invoked even if previously invoked targets failed. Otherwise, a build error would stop invocation of all subsequent targets. The default value is false.

  <Target Name="Build">
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Foo;Bar" RunEachTargetSeparately="true" />
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>

This gives us exactly what we want, but it doesn’t allow test runs to be parallelized. To achieve that, we need to put each test target in a separate project file. It turns out that with this strategy we don’t need to worry about controlling our failure behavior: both projects get built, and the MSBuild task reports an error when any of the projects has failed:

  <Target Name="Build">
    <!-- one project file per test runner; the file names here are illustrative -->
    <MSBuild Projects="xUnitTests.proj;SilverlightTests.proj" Targets="Test" BuildInParallel="true" />
  </Target>

What’s the alternative? Capturing the exit codes of the runners; however, I don’t like that approach since it’s a bit messy. The only thing we give up by using multiple projects is that it’s harder to get an overview of what happens where, but I think in this case the separation might also aid a proper separation of concerns.

Categories: .NET, MSBuild, Testing, Tools

Running MSTest 9 on a CI Server without installing Visual Studio

April 2, 2011

Disclaimer: I would have loved to migrate to a different framework (and I would strongly advise you to do so if you’re not a full-stack Team System shop), however I have a couple of consultants on that project who are not very experienced with testing, and having built-in MSTest has compelling advantages. Having said that, I know that Gallio has nice VS integration that you can use to run any framework’s tests inside Visual Studio’s test windows; however, that would require each developer to install Gallio on their machine (which is bad too).

Without reiterating the tirades of hate Microsoft has earned for making it impossible to run MSTest on a build server without installing Visual Studio, I want to present what I have compiled from several sources to get it working for me:

  1. See this post on Stack Overflow for an overview of the issue and possible solutions
  2. Mark Kharitonov has compiled a basic set of instructions that allow installing MSTest on a Build Server

My setup consists of a Teamcity Build Agent running on Windows Server 2008R2 x64, so I needed to change all registry keys in the reg file to point at HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ instead of HKEY_LOCAL_MACHINE\SOFTWARE\.
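Rather than editing each registry key by hand, the prefix swap can be scripted. A sketch only; the helper name and the .reg file name are my own assumptions:

```shell
# Rewrite 32-bit registry key paths to their Wow6432Node equivalents
# (for importing a .reg file on a 64-bit OS).
to_wow6432() {
  sed 's|HKEY_LOCAL_MACHINE\\SOFTWARE\\|HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\|g' "$1"
}
```

Usage would look like `to_wow6432 mstest.reg > mstest-x64.reg`, turning every `HKEY_LOCAL_MACHINE\SOFTWARE\...` key into `HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\...`.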

Next, I am using Gallio to run the tests instead of executing them directly with MSTest. Even though Gallio is considerably slower than native MSTest (which you can also use via a built-in TeamCity build step), there are a couple of advantages:

  1. Pretty Reports
  2. No need to deal with test run configurations and test metadata (I’ve got no idea what they are and why I would need them)
  3. TeamCity picks up the test results properly
  4. I can use a MSBuild script to pick up my Test dlls via wildcards, no need to have extra MSTest build tasks.

As a reference, here’s my MSBuild script for running the tests using Gallio:

<Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <!-- This is needed by MSBuild to locate the Gallio task -->
    <UsingTask AssemblyFile="tools\Gallio\Gallio.MSBuildTasks.dll" TaskName="Gallio" />

    <!-- Specify the test assemblies -->
    <ItemGroup>
        <TestAssemblies Include="src\test\**\bin\$(Configuration)\*Tests.dll" />
    </ItemGroup>

    <Target Name="Test">
        <Gallio Files="@(TestAssemblies)" IgnoreFailures="true">
            <!-- This tells MSBuild to store the output value of the task's ExitCode property
                 into the project's ExitCode property -->
            <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
        </Gallio>
        <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
    </Target>
</Project>

Categories: .NET, Testing, Tools

Force iTunes to reread all ID3 Tags in your Library

January 16, 2011

I recently did some cleanup in my music library (not that I did anything by hand; I used MusicBrainz Picard, a fingerprint-based song database).

Now the only trouble was that iTunes just doesn’t offer an easy way to update all tags in its database with the updated ones from my library. Fortunately, I found this nice little AppleScript (note to self: in the unlikely event I’d have some spare time, learn some AppleScript; it seems very, very powerful).

Why am I sharing all this? Because it took me like 30 mins of googling to find out. Maybe I’m gonna increase the page rank a bit 😉

Categories: General, Tools

Check if a PDB matches a Dll

December 27, 2010

A common issue when modifying .NET assemblies by IL round-trip compiling or with a library like Mono.Cecil is preserving debug information across modifications. You will need to take great care not to lose your PDBs along the way.

Hence it would be handy to have a tool that checks whether your assembly and its PDB still line up. This is exactly where ChkMatch comes in, a very handy utility I found via this SO question.

Categories: .NET, Tools

SubSpec: A declarative test framework for Developers

August 23, 2010

In my last post I described Acceptance Testing and why it is an important addition to the developer-centric way of integration and unit testing.

I also described that acceptance tests should be as expressive as possible and therefore benefit from being written in a declarative style. From learning F# at the moment, I have come to the conclusion that writing declarative code is the key to avoiding accidental complexity (complexity in your solution domain that is not warranted by complexity in your problem domain). But not only acceptance tests benefit from a declarative style; I also think it goes a long way toward making unit and integration tests easier to understand.

SubSpec was originally written by Brad Wilson and Phil Haack. Their motivation was to build a framework that enables xUnit-based BDD-style testing. Given my desire to support a declarative approach to writing tests at all layers, I decided to fork the project and see what can be accomplished. I’m actively working on it, and the code can be found on my Bitbucket site. I like the idea of having a vision statement, so here is mine:

SubSpec allows developers to write declarative tests operating at all layers of abstraction. SubSpec consists of a small set of primitive concepts that are highly composable. Based on the powerful xUnit testing framework, SubSpec is easy to integrate with existing testing environments.

Here’s a short teaser to show you how expressive a SubSpec test is:
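What follows is a pseudocode-style sketch in C# syntax, written from memory of SubSpec’s string-extension API (Context/Do/Assert); treat the exact names and signatures as approximations rather than the project’s actual surface:

```csharp
public class StackSpecs
{
    public void PushSpecification()
    {
        var stack = default(Stack<int>);

        // Each phase of the spec is a plain string with a lambda attached.
        "Given an empty stack".Context(() => stack = new Stack<int>());
        "when pushing an element".Do(() => stack.Push(42));
        "it should not be empty".Assert(() => Assert.NotEmpty(stack));
        "it should return the element on Pop".Assert(() => Assert.Equal(42, stack.Pop()));
    }
}
```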
