LLVM/Clang Code Coverage on the Way

July 7, 2011

According to this LLVM bug report, Nick Lewycky recently implemented support for generating gcov-compatible coverage files from LLVM/Clang. I’m not keen to replace my local LLVM with an svn build, but I’m really looking forward to finally ditching gcc.
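
For the adventurous who do run trunk, the workflow should mirror gcc’s; here is a minimal sketch (assuming Clang accepts the same coverage flags as gcc, which is the point of the feature):

clang --coverage -o demo demo.c   # instruments the binary and emits demo.gcno
./demo                            # each run appends counters to demo.gcda
gcov demo.c                       # produces the annotated demo.c.gcov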

Categories: Open Source

Joomla backups made easy

July 2, 2011

This post sums up the backup strategy I’m using for the website of my next project, RowMotion. RowMotion is hosted on a Joomla installation. My hosting provider is nice enough to offer a decent pre-built mysqldump-based backup script, which can be found here. Did I mention it does email notifications too? All the instructions needed to set it up are there as well.

Okay, so now we have a nice backup script for our Joomla database; it needs to be triggered and the resulting backup file downloaded. Since the web request needs to be authenticated, I figured it would be easiest to use some PowerShell magic and leverage the .NET WebClient. Here’s the full script:

$username = "un"
$password = "pw"
$backupDir = "C:\backup\"

$web_client = New-Object System.Net.WebClient;
$web_client.Credentials = new-object System.Net.NetworkCredential($username, $password)

$response = $web_client.DownloadString("http://yourDomain.com/backup/databaseBackup.phpx");
echo "Response:\n"
echo $response

$regex = "http://yourDomain.com.com/backup/.*?.gz";

$response -match $regex
$dumpUrl = $matches[0]

echo "URL:\n"
echo $dumpUrl

$fn = [System.IO.Path]::GetFileName($dumpUrl);

echo "Filename:\n"
echo $fn

$target = [System.IO.Path]::Combine($backupDir, $fn);

echo "Target:\n"
echo $target

$web_client.DownloadFile($dumpUrl, $target);

Granted, this is neither pretty nor particularly robust, but it’s the simplest thing that could possibly work (and it does). Next, we need to schedule this task with the Windows Task Scheduler. I’m running it on my server together with all the other backup tasks.

Be sure to enter “powershell” as the command and -noprofile -command "C:\backupJobs\yourps.ps1" as the arguments.
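
If you prefer creating the task from the command line instead of the GUI, something along these lines should do (task name and schedule are examples; -file sidesteps the nested quoting):

schtasks /create /tn "JoomlaBackup" /sc DAILY /st 03:00 /tr "powershell -noprofile -file C:\backupJobs\yourps.ps1"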

Categories: Powershell, Tools

Mercurial Server using hgweb.cgi on Ubuntu

June 30, 2011

In a previous post, we set up a virtual machine template for an Ubuntu server. Now that we have set up a clone of this machine, it is time to set up a Mercurial repository server.

Abstract

Mercurial provides an easy-to-use repository server via a Python CGI script. Mercurial’s protocol facilitates fast transfers over HTTP, making it superior to an SSH-based solution (such as git’s) when weighing protocol overhead against ease of use. As the webserver, we will use lighttpd. This guide follows the instructions published at http://mercurial.selenic.com/wiki/PublishingRepositories#multiple

Installing Python

This one is simple:

ubuntu@localhost: sudo apt-get install python

Installing Lighttpd

To install lighttpd, run:

ubuntu@localhost: sudo apt-get install lighttpd

Next, we need to create a specific configuration for our Mercurial CGI script. We redirect all incoming requests to the CGI script, then apply some URL-rewrite magic to remove the ugly hgweb.cgi from our URLs. The hgweb.cgi script will be served from /var/www/hgweb.cgi. If you use a different location, make sure to chown it to www-data and chmod +x it (all described in the Mercurial wiki). I created my config like this:

ubuntu@localhost:~$ sudo vi /etc/lighttpd/hg.conf
ubuntu@localhost:~$ cat /etc/lighttpd/hg.conf 
url.rewrite-once = (
  "^([/?].*)?$" => "/hgweb.cgi$1",
  "^/([/?].*)?$" => "/hgweb.cgi$1"
)

$HTTP["url"] =~ "^/hgweb.cgi([/?].*)?$" {
  server.document-root = "/var/www/"
  cgi.assign = ( ".cgi" => "/usr/bin/python" )
}

Next is the lighttpd config, which needs to include our hg.conf and enable mod_cgi:

ubuntu@localhost:~$ cat /etc/lighttpd/lighttpd.conf
include "hg.conf"

server.modules = (
  "mod_access",
  "mod_alias",
  "mod_compress",
  "mod_redirect",
  "mod_rewrite",
  "mod_cgi"
)

Further Configuration Tricks

You should force hgweb.cgi to serve UTF-8 content. Fortunately, this is as simple as adding (or uncommenting) the following lines in hgweb.cgi:

import os
os.environ["HGENCODING"] = "UTF-8"
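
For context, the whole hgweb.cgi is only a handful of lines; mine ends up looking roughly like this (a sketch based on the script shipped with Mercurial, so check the copy that comes with your version):

#!/usr/bin/env python
# path to the repository publishing config (see below)
config = "/var/www/hgweb.config"

import os
os.environ["HGENCODING"] = "UTF-8"

from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb, wsgicgi
application = hgweb(config)
wsgicgi.launch(application)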

You will also need an hgweb.config right next to hgweb.cgi and reference it from the script (again, described in the Mercurial wiki). For reference, my configuration includes all repos under /var/hg/repos (and subdirectories) and allows anonymous push (I’m authenticated via VPN policy):

ubuntu@localhost:~$ cat /var/www/hgweb.config
[paths]
/ = /var/hg/repos/**

[web]
baseurl = /
allow_push = *
push_ssl = false
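
Once lighttpd has been restarted, a quick smoke test from any client verifies that both pull and push work (hostname and repository name are placeholders):

hg clone http://hg.yourdomain.com/myrepo
cd myrepo
touch test.txt && hg add test.txt && hg commit -m "smoke test"
hg push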

Final Words

That’s all there is to it. To make the server available via DNS, you need to make sure the server’s hostname is registered with your local DNS. In my case, I simply added a static record for it.

Objective-C Pitfall: Synthesized Properties without backing field

June 29, 2011

This is just a quick post about an Objective-C pitfall I encountered today. When using synthesized properties, you normally supply an explicit backing field in the @synthesize statement:

@property (nonatomic, readwrite, retain) Message* message;
// in the implementation:
@synthesize message = message_;

This synthesizes a getter and a setter that use message_ as the backing field. I found out that one can be clever and omit the explicit backing field, so a simple @synthesize like this works too:

@synthesize message;

However, now we get into a bit of trouble when accessing the property. Contrary to the behavior in Java or C#, self.message and message are now two different things: the former goes through the synthesized getter, while the latter accesses the synthesized backing field (an ivar named message) directly. This is a bit unexpected (I thought the backing field would be anonymous). So my general advice is to always use explicit backing fields for your synthesized properties, so you can’t accidentally forget a “self.”. (This is grist to the mill of people who advocate _not_ using the dot syntax for properties.)
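
To illustrate, here is a minimal sketch of the pitfall (Message stands in for any retained class):

#import <Foundation/Foundation.h>

@class Message; // any retained class will do

@interface Foo : NSObject {
    Message* message_;
}
@property (nonatomic, readwrite, retain) Message* message;
@end

@implementation Foo
@synthesize message = message_; // explicit backing field

- (void)updateWith:(Message*)m {
    self.message = m; // goes through the synthesized (retaining) setter
    // message = m;   // would not compile: there is no ivar named 'message'
}
@end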

Categories: iPhone, Objective-C

TeamCity Server on Ubuntu

June 18, 2011

Last time, we set up a virtual machine template for an Ubuntu server. Now that we have set up a clone of this machine, it is time to set up TeamCity on it.

Abstract

TeamCity on Linux is meant to be run from its integrated Tomcat server. We will use the default TeamCity installation procedure in combination with the lightweight lighttpd acting as a front-end server that listens on port 80 and forwards requests to TeamCity’s Tomcat installation. This setup is easier than configuring Tomcat on port 80 itself (remember that allocating that port requires root permissions), and it lets us add authentication or HTTPS access more easily later (though I will not do that for now).

Installing TeamCity

To install TeamCity, follow the instructions from JetBrains, which can be found here (it takes less than 10 minutes). I chose to install mine at /var/TeamCity:
http://confluence.jetbrains.net/display/TCD65/Installing+and+Configuring+the+TeamCity+Server#InstallingandConfiguringtheTeamCityServer-installingWithTomcat 

Follow the instructions to set up your external database (the recommended approach). I am using a SQL Server 2008 installation that is already present and regularly backed up in my “private cloud”. Edit your server.xml to configure a port for the Tomcat server:

ubuntu@localhost: sudo vi /var/TeamCity/conf/server.xml
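
The relevant bit is the HTTP connector; in a stock server.xml it looks roughly like this (attributes vary slightly between Tomcat versions):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />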

Permissions are a chore, but we don’t want the TeamCity server directory to be owned by our admin user, so we change the owner of our TeamCity install directory to the default www-data user.

ubuntu@localhost: sudo chown -R www-data /var/TeamCity

Next, we want TeamCity to start automatically when the server is booted, so we add a small init script. Be sure to adjust the TEAMCITY_DATA_PATH environment variable to a static directory of your choice; otherwise TeamCity’s default will end up in www-data’s home directory, which is, frankly, a very inconvenient location.

ubuntu@localhost:/var/TeamCity$ cat /etc/init.d/teamcity
#!/bin/sh
# /etc/init.d/teamcity - startup script for teamcity
export TEAMCITY_DATA_PATH="/var/TeamCity/.BuildServer"

# teamcity-server.sh handles start/stop itself; start-stop-daemon is only
# used to run it as the www-data user (-c).
case $1 in
start)
 start-stop-daemon --start -c www-data --exec /var/TeamCity/bin/teamcity-server.sh start
;;

stop)
 start-stop-daemon --start -c www-data --exec /var/TeamCity/bin/teamcity-server.sh stop
;;

esac

exit 0

Now we need to register the startup script to run automatically:

ubuntu@localhost: sudo update-rc.d teamcity defaults

Next, we start the server manually (you can reboot too):

ubuntu@localhost: sudo /etc/init.d/teamcity start

Installing Lighttpd

Now we need to install lighttpd:

ubuntu@localhost: sudo apt-get install lighttpd

And configure it to forward requests from port 80 to the port we configured for Tomcat (8080 in my case).

ubuntu@localhost: sudo vi /etc/lighttpd/lighttpd.conf
server.modules = (
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
#       "mod_rewrite",
        "mod_proxy"
)

$HTTP["host"] =~ "teamcity.yourdomain.com" {
        proxy.server = (
                "" => (
                        "tomcat" => (
                                "host" => "127.0.0.1",
                                "port" => 8080,
                                "fix-redirects" => 1
                        )
                )
        )
}

Final Words

That’s it. By now you should have a running TeamCity server. If something goes wrong, be sure to check the logs, which can be found at /var/TeamCity/logs. To make the server available via DNS, you need to make sure the server’s hostname is registered with your local DNS. In my case, I simply added a static record for it.

Ubuntu Server on Hyper-V

June 17, 2011

As my Linux distro of choice for a set of lightweight virtualized servers, Ubuntu Server provides several advantages that made me go for it:

* Driver support for Hyper-V
* Active community; the number of available HowTos seems superior to Debian’s
* Packages available are cutting edge
* Well documented
* Good experience with Ubuntu Desktop

At the time of writing, there are two choices of Ubuntu Server: Ubuntu 10.04 with LTS, or the cutting-edge Ubuntu 11.04. LTS stands for Long Term Support, and Canonical guarantees there will be updates for at least 3 years. It’s a matter of preference, but I chose Ubuntu 11.04.

To install it in Hyper-V, I recommend you follow this guide: http://social.technet.microsoft.com/wiki/contents/articles/how-to-install-ubuntu-server-10-04-in-hyper-v.aspx

Of course, you should adapt your network configuration to your requirements. Before templating this machine, I installed openssh because I consider it a core part of my server administration toolkit.

By this point, we should have a core Ubuntu Server installation that is ready to be cloned as a template. Depending on your virtualization solution of choice, different steps apply here. (In Hyper-V it is as simple as exporting and re-importing the machine.) Make sure you create a unique copy of the machine, so that its network adapter gets assigned a new MAC address (everything else is calling for trouble).

After instantiating your template, we need to customize the clone:

1. Change the hostname (to make it survive a reboot, also update /etc/hostname):
 sudo hostname "NewHostName"

2. Configure networking; see the example below these steps. (Your adapter will likely show up as eth1 instead of eth0 now; remember that instantiating a VM template involves changing the MAC address of the server.)

sudo vi /etc/network/interfaces

3. Change the password

sudo passwd

4. Reboot

sudo shutdown -r now
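
For step 2, a minimal static configuration could look like this (the addresses are examples; adjust them to your network):

# /etc/network/interfaces
auto eth1
iface eth1 inet static
    address 192.168.0.50
    netmask 255.255.255.0
    gateway 192.168.0.1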

GoodReader and Mercurial for the Ultimate Student Workflow

June 8, 2011

Having been the proud owner of a shiny new iPad 2 for the last month or so, I have found it to be a valuable companion at university. No, not for browsing Stack Overflow or keeping up with RSS and e-mail, but for managing lecture slide decks, assignments, and the like.

In the past semester, I started managing my documents in a Mercurial repository that is synced against my private Mercurial server installation. This made it easier to keep my multiple devices synchronized (who would need that iCloud thing…). My first attempt at using the iPad for these tasks was iBooks. iBooks is not bad, but it’s so utterly limited that it really sucks from time to time. Especially annoying is that it has absolutely no file management capabilities whatsoever.

I found GoodReader to be a great alternative. It has excellent file management and supports annotating PDFs. But the best thing is its support for synchronizing your files:

  1. Make sure your iPad is connected to the same network as your host computer
  2. Launch GoodReader and enable WiFi sync mode via the WiFi symbol
  3. Mount http://yourIPadsIPAddress:8080 as a network folder
  4. Open a terminal, cd into your mount point and run hg clone /yourCentralRepository
  5. The next time you want to sync, run hg pull -u

It takes less than 5 minutes to set up, and it is a pretty damn smart workflow!

Categories: iPhone, Tools

iOS Development Continuous Integration Setup

June 7, 2011

After almost a year of absence, I’m in the middle of getting back to iOS Development. Since my departure from the Apple ranch, a lot has changed and new developer tools have emerged. Professional software development has become significantly easier on this platform, but I feel the tooling still isn’t on par with what other ecosystems provide.

Nonetheless, where there’s a will, there’s a way. In this series of posts, I’m going to outline the setup I’m currently running. I’ll point to resources that helped me along the way and describe how to combine the pieces to make it all fit together. I don’t plan on publishing in any particular order, and since business comes first, I’ll write whenever I find the time.

All posts of the series will be put into the category iOS Continuous Integration Series, which is also the best place to find them.

The setup will consist of:

The Dev’s Private Cloud (aka a Hyper-V server) hosting:
– TeamCity CI Server
– Build Agents
– Mercurial via hgweb.cgi

And on the testing side:
– OCUnit
– Kiwi for Acceptance Testing
– OCMock as Isolation framework
– GCov for Code Coverage

Planned Posts:

Running Ubuntu Server in Hyper-V
Setting up a TeamCity Server on Ubuntu
Setting up a Mercurial Server
iOS Testing Frameworks revisited
Running OCUnit on a build agent
Connecting OCUnit to TeamCity
Integrating OCMock
Using Kiwi for Acceptance Testing
Retrieving Coverage information with GCov
Xcode Alternatives

SubSpec available on NuGet

May 27, 2011

SubSpec is finally available as a NuGet package. See http://nuget.org/ for how to get started with NuGet. Once you have NuGet installed, it’s a simple matter of running Install-Package SubSpec or Install-Package SubSpec.Silverlight from the Package Manager Console to get SubSpec integrated into your project.

Integrated into your project, you say? You mean “get the dll and reference it”? No. In fact, deployment as a separate dll is a thing of the past for SubSpec. SubSpec is an extremely streamlined extension of xUnit, and as such it fits into less than 500 lines of C# (excluding XML docs). Shipping it as source has several advantages:

  1. Faster builds, 500 lines of C# are faster to compile than resolving and linking against a library
  2. It fosters the creation of extensions (which is extremely common, at least in my usage of it)
  3. No need to get the source separately, you already have it!
  4. Experimental extensions can be easily shared as single files too, such as Thesis, AutoFixture integration…

I hope you like the new packages, please feel free to upvote SubSpec and SubSpec.Silverlight on the NuGet gallery and feel encouraged to write a review.

Multiple Test Runner Scenarios in MSBuild

April 15, 2011

Scenario:

SubSpec is built for .NET as well as for Silverlight. For the .NET test suite, we use the xUnit MSBuild task to execute the tests; for Silverlight, we use a combination of StatLight and xunitContrib. Whenever you run a suite of tests, it’s usually desirable to have a failing test break the build; however, under all circumstances the complete suite should run to give you accurate feedback.

Our build script looks something like this:
SubSpec.msbuild:

    <Target Name="Test" DependsOnTargets="Build">
		<MSBuild
            Projects="SubSpec.tests.msbuild"
            Properties="Configuration=$(Configuration)" />
    </Target>

SubSpec.tests.msbuild:


  <Target Name="xUnitTests">
    <xunit
      Assemblies="@(TestAssemblies)"/>
  </Target>

  <Target Name="SilverlightTests">
    <Exec
      Command=""tools\StatLight\StatLight.exe" @(SilverlightTestXaps -> '-x="%(Identity)"', ' ') --teamcity" />
  </Target>

Problem:

When using each of the test runners (the xUnit MSBuild task, StatLight) in isolation with multiple assemblies, they do the right thing: run all tests, fail if at least one test failed, succeed otherwise. Now imagine we have a test that succeeds under .NET but fails under Silverlight. When xUnit runs first, we get the desired result. But if StatLight were to run before xUnit, we would never know whether the .NET suite would actually succeed, because StatLight stops the build.

(Non-)Solutions:

The first (and most intuitive) idea was to move the test targets into a separate MSBuild project and call MSBuild on that project with ContinueOnError="true":

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="true"/>
  </Target>

  <Target Name="Test" DependsOnTargets="Foo;Bar">
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
</Project>

But this yields only Foo as an error (I wanted to see error: Foo and error: Bar).

MSDN says about ContinueOnError:

Optional attribute. A Boolean attribute that defaults to false if not specified. If ContinueOnError is false and a task fails, the remaining tasks in the Target element are not executed and the entire Target element is considered to have failed.

This is probably why it doesn’t make sense on the MSBuild task: it would only allow another task after the MSBuild task in “Build” to execute. We confirm this by:

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="true"/>
    <Message Text="Some Message"/>
  </Target>
  

And we see Foo as well as Some Message. At this point, it was clear to me that I want a target that fails if any of its tasks failed, but executes all of them.

In MSDN, we discover StopOnFirstFailure:

true if the task should stop building the remaining projects as soon as any one of them may not work; otherwise, false.

If we specified separate projects it would work, but we’re in the same project, so unfortunately this won’t help.

The next idea was to use CallTarget with ContinueOnError=”true”, like:

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Test"
      ContinueOnError="false"/>
        <Message Text="I should not be executed"/>
  </Target>

  <ItemGroup>
    <TestTargets
        Include="Foo;Bar" />
  </ItemGroup>

  <Target Name="Test">
    <CallTarget Targets="%(TestTargets.Identity)" ContinueOnError="true"/>
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
  

However, “I should not be executed” appears in the output log. What happened? Build called MSBuild with ContinueOnError="false" (the default). Because all tasks in Test were ContinueOnError="true", no error bubbled up to MSBuild, and it executed without error. This is dangerous, because it makes our build appear to succeed when it hasn’t.

The next option I tried was using RunEachTargetSeparately:

Gets or sets a Boolean value that specifies whether the MSBuild task invokes each target in the list passed to MSBuild one at a time, instead of at the same time. Setting this property to true guarantees that subsequent targets are invoked even if previously invoked targets failed. Otherwise, a build error would stop invocation of all subsequent targets. The default value is false.

  <Target Name="Build">
    <MSBuild
      Projects="test.msbuild"
      Targets="Foo;Bar"
      RunEachTargetSeparately="true"/>
  </Target>

  <Target Name="Test" DependsOnTargets="Foo;Bar">
    <Error Text="Foo"/>
  </Target>

  <Target Name="Foo">
    <Error Text="Foo"/>
  </Target>
  <Target Name="Bar">
    <Error Text="Bar"/>
  </Target>
  

This gives us exactly what we want, but it doesn’t allow test runs to be parallelized. To achieve that, we need to put each test target into a separate project file. It turns out that, using this strategy, we don’t need to worry about controlling our failure strategy: both projects get built, and the MSBuild task reports an error if any of the projects failed:

  <Target Name="Build">

  </Target>

  <Target Name="Test">
    <MSBuild
      Projects="SubSpec.test.msbuild;SubSpec.Silverlight.test.msbuild"/>
  </Target>
  

What’s the alternative? The alternative is capturing the exit codes of the runners, as described in http://stackoverflow.com/questions/1059230/trapping-error-status-in-msbuild/1059672#1059672; however, I don’t like that approach since it’s a bit messy. The only thing we give up by using multiple projects is that it’s harder to get an overview of what happens where, but I think in this case the separation might also aid a proper separation of concerns.
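
For reference, the exit-code approach would look roughly like this (a sketch: the property name is mine and the StatLight arguments are abbreviated):

  <Target Name="SilverlightTests">
    <!-- IgnoreExitCode lets the remaining tasks run even when StatLight fails... -->
    <Exec Command="tools\StatLight\StatLight.exe -x=SubSpec.Silverlight.Tests.xap --teamcity"
          IgnoreExitCode="true">
      <Output TaskParameter="ExitCode" PropertyName="StatLightExitCode" />
    </Exec>
    <!-- ...and we fail the target explicitly afterwards. -->
    <Error Text="Silverlight tests failed."
           Condition=" '$(StatLightExitCode)' != '0' " />
  </Target>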

Categories: .NET, MSBuild, Testing, Tools