Archive for the ‘Projects’ Category

Continuous Deployment for Apps via testflightapp

January 26, 2012 Leave a comment

The benefits of continuous integration are widely known. By extending the ideas of continuous integration to the full software lifecycle, continuous delivery becomes an inevitable practice. Especially in the context of managing a beta program for mobile devices, most of which I as a developer have no physical access to, the ability to run fully automated deployments is crucial.

Testflightapp.com provides a great service to iOS developers by managing app provisioning and deployments. They also provide easy-to-use instrumentation facilities.

Continuous deployment with testflightapp.com is a breeze: all you need your build server to do is interact with a straightforward web API to upload your .ipa packages.
Here’s the script I’m using for RowMotion:

#!/bin/bash
# 

# testflightapp.com tokens
API_TOKEN="YOUR_API_TOKEN"
TEAM_TOKEN="YOUR_TEAM_TOKEN"

PRODUCT_NAME="RowMotion"
ARTEFACTS="$PWD/Artefacts"

SIGNING_IDENTITY="iPhone Distribution"
PROVISIONING_PROFILE="$PWD/id/RowMotionAdHoc.mobileprovision"

# calculated vars
OUT_IPA="${ARTEFACTS}/${PRODUCT_NAME}.ipa"
OUT_DSYM="${ARTEFACTS}/${PRODUCT_NAME}.dSYM.zip"

# clean artefacts directory
rm -rf "$ARTEFACTS"
mkdir -p "$ARTEFACTS"

# compile
echo "##teamcity[compilationStarted compiler='xcodebuild']"
xcodebuild -workspace RowMotion.xcworkspace -scheme RowMotion -sdk iphoneos5.0 -configuration Release build archive
buildSuccess=$?

if [[ $buildSuccess != 0 ]] ; then
  echo "##teamcity[message text='compiler error' status='ERROR']"
  echo "##teamcity[compilationFinished compiler='xcodebuild']"
  exit $buildSuccess
fi

echo "##teamcity[compilationFinished compiler='xcodebuild']"

#ipa
echo "##teamcity[progressMessage 'Creating .ipa for ${PRODUCT_NAME}']"

DATE=$( /bin/date +"%Y-%m-%d" )
ARCHIVE=$( /bin/ls -t "${HOME}/Library/Developer/Xcode/Archives/${DATE}" | /usr/bin/grep xcarchive | /usr/bin/sed -n 1p )
DSYM="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/dSYMs/${PRODUCT_NAME}.app.dSYM"
APP="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/Products/Applications/${PRODUCT_NAME}.app"

/usr/bin/xcrun -sdk iphoneos PackageApplication -v "${APP}" -o "${OUT_IPA}" --sign "${SIGNING_IDENTITY}" --embed "${PROVISIONING_PROFILE}"

#symbols
echo "##teamcity[progressMessage 'Zipping .dSYM for ${PRODUCT_NAME}']"
/usr/bin/zip -r "${OUT_DSYM}" "${DSYM}"

# prepare build notes
NOTES=`hg tip`

#upload
echo "##teamcity[progressMessage 'Uploading ${PRODUCT_NAME} to TestFlight']"

/usr/bin/curl "http://testflightapp.com/api/builds.json" \
-F file=@"${OUT_IPA}" \
-F dsym=@"${OUT_DSYM}" \
-F api_token="${API_TOKEN}" \
-F team_token="${TEAM_TOKEN}" \
-F notes="${NOTES}" \
-F notify="True" \
-F distribution_lists="Private"

Make sure to adapt the script to your requirements. One trick I’m fond of is automatically including SCM information in the build notes (that’s what the hg tip step does).
For deployments I use two lists: a private one to which all builds are published, and a public one to which I can selectively deploy. What’s so great about testflightapp.com is that it automatically sends emails to notify my testers about the new build and then lets me monitor installs.

SubSpec available on NuGet

May 27, 2011 Leave a comment

SubSpec is finally available as a NuGet package. See http://nuget.org/ for how to get started with NuGet. Once you have NuGet installed, it’s a simple matter of running Install-Package SubSpec or Install-Package SubSpec.Silverlight from the Package Manager console to get SubSpec integrated into your project.

Integrated into your project, you said? You mean “get the dll and reference it”? No, in fact, deployment as a separate dll is a thing of the past for SubSpec: the package adds the source directly to your project. SubSpec is an extremely streamlined extension of xUnit, and as such it fits into less than 500 lines of C# (excluding XML docs). This approach has several advantages:

  1. Faster builds, 500 lines of C# are faster to compile than resolving and linking against a library
  2. It fosters the creation of extensions (which is extremely common, at least in my usage of it)
  3. No need to get the source separately, you already have it!
  4. Experimental extensions can be easily shared as single files too, such as Thesis, AutoFixture integration…

I hope you like the new packages, please feel free to upvote SubSpec and SubSpec.Silverlight on the NuGet gallery and feel encouraged to write a review.

SubSpec: Assert and Observation

August 24, 2010 Leave a comment

When writing a test, we should make sure to have only one Assertion per test. The reasoning behind this constraint is simple: if we use multiple Assertions and the first one fails, we cannot retrieve the results of the remaining ones.
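The code sample that originally appeared here did not survive in this archived copy. A hedged reconstruction based on the surrounding description, assuming a hypothetical Stack class with Push, Pop and IsEmpty and an xUnit test, might look like this:

```csharp
using Xunit;

// A minimal stand-in for the Stack class used in the text.
public class Stack
{
    private readonly System.Collections.Generic.Stack<int> items =
        new System.Collections.Generic.Stack<int>();

    public void Push(int value) { items.Push(value); }
    public int Pop() { return items.Pop(); }
    public bool IsEmpty() { return items.Count == 0; }
}

public class StackFacts
{
    [Fact]
    public void PushingAnElementOntoAStack()
    {
        // Arrange the System Under Test (SUT)
        var stack = new Stack();

        // Act on the SUT
        stack.Push(42);

        // Assert: if the first assertion fails, the results
        // of the remaining two are never reported
        Assert.False(stack.IsEmpty());
        Assert.Equal(42, stack.Pop());
        Assert.True(stack.IsEmpty());
    }
}
```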



In this example, if the assertion on stack.IsEmpty() fails, we are unable to retrieve the results of the next two Assertions. We can see that our test consists of three parts:

  1. Arrange the System Under Test (SUT)
  2. Act on SUT
  3. Assert the SUT’s state has changed accordingly.

If we want to have one Assertion per test, we need to write three tests, duplicating the Arrange and Act steps for each. As always, repetition is suboptimal, so let’s see what we can do about it.

SubSpec’s core idea is that each test (we call them Specifications) that you write consists of the above-mentioned primitives. Each primitive can be represented by an action and a corresponding description. Using fluent syntax, a SubSpec Specification for the above-mentioned scenario looks like this:
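The fluent code sample is missing from this archived copy. Based on the primitives named in the text (Context, Do, Assert), a sketch of such a Specification could look like the following; the [Specification] attribute, the exact fluent method signatures, and the Stack class are assumptions, not verified SubSpec API:

```csharp
using Xunit;

public class StackSpecs
{
    [Specification]
    public void PushingAnElementOntoAStack()
    {
        var stack = default(Stack);

        // Each primitive is a description plus a lambda statement
        "Given a new stack".Context(() => stack = new Stack());
        "with an element pushed onto it".Do(() => stack.Push(42));

        // SubSpec generates one test per Assertion,
        // repeating Context and Do for each of them
        "the stack is not empty".Assert(() => Assert.False(stack.IsEmpty()));
        "popping returns the pushed element".Assert(() => Assert.Equal(42, stack.Pop()));
        "the stack is empty again afterwards".Assert(() => Assert.True(stack.IsEmpty()));
    }
}
```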

Each of the primitive test actions is represented by a description and a lambda statement. The big difference to a traditional test is that SubSpec knows about these primitive actions and can compose them to generate three tests from the above Specification, one for each Assertion. What it does under the hood is pretty much what you’d expect: SubSpec repeats the Context and Do action for each Assertion and wraps it inside a single test. That’s the power of declarative tests!

This is a feature SubSpec has supported since its beginning. But there’s one thing we can improve about the above example. We have three Assertions in our test, but only one of them is destructive. You guessed correctly: it is the second one. By popping an element from the stack, it modifies the system under test. This is a more general problem. Although we should try to avoid this situation, sensing something in our SUT cannot always be made side-effect free. (Anyone feel reminded of quantum physics? 😀 )

The first and third Assertions, on the other hand, are side-effect free. If the Context and Do actions were expensive (such as when an external resource is involved), repeating them for each of our isolated Assertions would be a waste of time. But tests need to be as fast as possible. What can we do about it?

Given the distinction between a destructive Assertion and a side-effect-free Observation we can check against our SUT, we should split our Assert primitive accordingly. An Assertion is a destructive operation on our SUT, which therefore needs to be recreated for each Assertion we check. For an Observation, on the other hand, the SUT can be shared. Let’s get back to our example:
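The example code did not survive here. A sketch of the split just described, with the same caveats about the exact SubSpec method names being assumptions, might be:

```csharp
using Xunit;

public class StackSpecs
{
    [Specification]
    public void PushingAnElementOntoAStack()
    {
        var stack = default(Stack);

        "Given a new stack".Context(() => stack = new Stack());
        "with an element pushed onto it".Do(() => stack.Push(42));

        // Observations share one SUT instance:
        // Context and Do run only once for all of them
        "the stack is not empty".Observation(() => Assert.False(stack.IsEmpty()));
        "the stack holds one element".Observation(() => Assert.Equal(1, stack.Count));

        // The destructive check stays an Assert and gets a freshly created SUT
        "popping returns the pushed element".Assert(() => Assert.Equal(42, stack.Pop()));
    }
}
```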

The Context and Do action are executed once for each Assertion (once in this case) and once for all Observations. Given the declarative nature of SubSpec, we can easily mix and match Observations and Assertions in one Specification and still get a single test for each. Pretty cool, isn’t it?

The distinction between Assert (verb) and Observation (noun) is intentional to highlight the difference between those two concepts.

Categories: .NET, SubSpec, Testing

SubSpec: A declarative test framework for Developers

August 23, 2010 Leave a comment

In my last post I described Acceptance Testing and why it is an important addition to the developer-centric way of integration and unit testing.

I also described that Acceptance Tests should be as expressive as possible and therefore benefit from being written in a declarative style. Learning F# at the moment, I have come to the conclusion that writing declarative code is the key to avoiding accidental complexity (complexity in your solution domain that is not warranted by complexity in your problem domain). But acceptance tests are not the only beneficiaries of a declarative style; I think it also goes a long way toward making unit and integration tests easier to understand.

SubSpec was originally written by Brad Wilson and Phil Haack. Their motivation was to write a framework that enables xUnit-based BDD-style testing. Given my desire to support a declarative approach for writing tests at all layers, I decided to fork the project and see what can be accomplished. I’m actively working on it and the code can be found on my bitbucket site. I like the idea of having a vision statement, so here is mine:

SubSpec allows developers to write declarative tests operating at all layers of abstraction. SubSpec consists of a small set of primitive concepts that are highly composable. Based on the powerful xUnit testing framework, SubSpec is easy to integrate with existing testing environments.

Here’s a short teaser to show you how expressive a SubSpec test is:
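The teaser code is missing from this archived copy. A sketch in the spirit of the vision statement might read as follows; the Calculator class and the exact fluent SubSpec method names are assumptions for illustration:

```csharp
using Xunit;

public class CalculatorSpecs
{
    [Specification]
    public void AddingTwoNumbers()
    {
        var calculator = default(Calculator);

        // Each line reads like a sentence: description plus action
        "Given a calculator".Context(() => calculator = new Calculator());
        "when adding 2 and 3".Do(() => calculator.Add(2, 3));
        "the result is 5".Assert(() => Assert.Equal(5, calculator.Result));
    }
}
```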

New Project: DirectoryVersioningService

March 15, 2010 Leave a comment

My brother asked me the other day whether there’s any software that can keep track of a directory’s contents and automatically create a backup on each change. He works at an equipment supplier for events (sound, light, rigging etc.), and they use software to manage their inventory and rental business. This software generates a variety of reports using the List&Label report engine, driven by report templates stored on a network share. The templates that shipped with the software neither looked nice nor were sophisticated enough to capture all the required information, so they find themselves messing with the report templates very often. And from time to time they break something. Figuring out what exactly broke is a time-consuming process, and it is especially annoying when you are sitting right next to a customer who simply wants to check in some equipment and receive an invoice.

“This is a perfect use case for Mercurial” immediately came to my mind. The idea was to have a Windows Service monitor the directory on the network share using file system events and perform a commit on each change. I did a little googling to check whether anyone had done this before, but didn’t find anything useful. Four solid hours later, I had the first version of my DirectoryVersioningService ready, including a simple GUI to install/uninstall and rename the service, so you can install multiple copies of it to monitor different directories. A side effect of this is that I now know how Windows Services work. Especially the installing and uninstalling process takes a little time to grasp, but it’s easy once you’ve got it.

Each time a change is made to the directory, a timer starts and is set to execute a commit in 5 seconds. This is because operations like moving or renaming a file trigger several file system events, and committing intermediate states is not what we want. If an event is triggered while the timer is already running, the timer is restarted, effectively establishing a 5-second “quiet period” before a commit. The versioning service needs to track added, removed and renamed files automatically too, so the following Mercurial commands are issued for each commit:

hg addremove -s 50

hg commit -m "Automatic commit."
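The debounce-and-commit logic described above can be sketched as follows. This is an illustrative reconstruction, not the actual DirectoryVersioningService source; all names are made up:

```csharp
using System.Diagnostics;
using System.IO;
using System.Timers;

public class DirectoryVersioner
{
    private readonly Timer commitTimer = new Timer(5000) { AutoReset = false };
    private readonly string path;

    public DirectoryVersioner(string path)
    {
        this.path = path;

        var watcher = new FileSystemWatcher(path) { IncludeSubdirectories = true };
        watcher.Changed += (s, e) => RestartTimer();
        watcher.Created += (s, e) => RestartTimer();
        watcher.Deleted += (s, e) => RestartTimer();
        watcher.Renamed += (s, e) => RestartTimer();

        commitTimer.Elapsed += (s, e) => Commit();
        watcher.EnableRaisingEvents = true;
    }

    // Restarting the timer on every event establishes the
    // 5-second quiet period before a commit is made.
    private void RestartTimer()
    {
        commitTimer.Stop();
        commitTimer.Start();
    }

    private void Commit()
    {
        RunHg("addremove -s 50");
        RunHg("commit -m \"Automatic commit.\"");
    }

    private void RunHg(string arguments)
    {
        var info = new ProcessStartInfo("hg", arguments)
        {
            WorkingDirectory = path,
            UseShellExecute = false
        };
        using (var process = Process.Start(info))
        {
            process.WaitForExit();
        }
    }
}
```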

So far, it works pretty well. You can find the code and an executable download at my bitbucket repository. It’s bare bones at the moment and I haven’t had time to write usage instructions, but I will get around to it after my exams.

There are a couple of alternatives, though. One solution is to use a versioning file system. Sadly, none is supported on the Windows platform. Another possibility is commercial software like FileHamster. Neither solution feels right to me. From what I know about versioning file systems, the tool support is very immature and it would require setting up a Samba server. Commercial software costs money, may have bugs I can’t fix, and is yet another tool people have to learn. From looking at it, I get the impression it’s more like version control done badly. Nothing that any mature VCS couldn’t do better.

Categories: Open Source, Projects

TDD as a Means to Explore New Platforms

January 15, 2010 Leave a comment

One of my motivations behind the iRow project was to try a 100% TDD approach on a real-world project. Being familiar with the features of .NET testing frameworks (my favorite is MbUnit), my baseline expectations of how such a framework should work and integrate into my development environment were set. Unfortunately, I was soon disappointed by the frameworks available. I have written about my research on iPhone unit testing frameworks before, so I won’t list their shortcomings here. In retrospect, working with a testing framework gave me a unique opportunity to gain insight into the new platform.

The concrete advantages I experienced were:

  • learn about platform specific build systems and deployment details
  • forced to develop components in a loosely coupled fashion from the ground up
  • explore unique mechanisms of the language that might require new patterns or make known ones redundant
  • fast compile-test cycle, less time spent in front of the debugger
  • combined with source control: painless experiments
  • combined with isolation framework: implementation shows how runtime manipulations can be made
  • testing framework implementation shows how code meta-data can be leveraged (or not leveraged)

I can imagine taking this approach to learning new platforms in the future. Plus, I think knowing how to verify one’s own code is an essential skill on every platform.

Categories: Design, Projects, Tools

Modelshredder: Tracking down InvalidProgramException

January 1, 2010 Leave a comment

I received my first bug report for modelshredder the other day. When trying to convert a sequence of objects into a DataTable, the following exception occurred:

System.InvalidProgramException: Common Language Runtime detected an invalid program.

I did some immediate research on possible causes for such an exception. Microsoft’s Knowledge Base indicated there might be a problem with the number of local variables allocated inside the injected method; however, this was not the case, since modelshredder uses only three local variables regardless of the type of object. After some back and forth with the bug reporter, we were able to construct a sample that reproduced the bug. Some trial and error with ShredderOptions including different subsets of members revealed that the exception only occurred when the injected code tried to access an indexer property. The cause becomes pretty clear when taking a look at the MSIL generated for a property access:


ilgen.Emit(OpCodes.Ldloc_0);     // Load array on evaluation stack
ilgen.Emit(OpCodes.Ldc_I4_S, i); // Load array position on eval stack
ilgen.Emit(OpCodes.Ldarg_0);     // Load ourselves on the eval stack
ilgen.Emit(OpCodes.Call, pi.GetGetMethod());
// Check if we need to box a value type
if (pi.PropertyType.IsValueType)
{
    ilgen.Emit(OpCodes.Box, pi.PropertyType);
}

// Store value in array; this pops the values pushed on the eval stack in this loop iteration
ilgen.Emit(OpCodes.Stelem_Ref);

As you can see, the code expects the getter to be callable without any parameters, which is not the case if pi.GetGetMethod() returns an indexer’s get method. Since I can’t imagine any use in representing the contents of an indexer property in tabular form, I decided to simply ban indexer properties from the ShredderOptions. To do so, I added a validation inside the ShredderOptions constructor that checks all PropertyInfos for index parameters.


PropertyInfo pi = member as PropertyInfo;

if (pi != null)
{
    if (pi.GetIndexParameters().Length > 0)
        throw new ArgumentException("May not contain indexer properties.", "members");
}

Even though the fix was pretty easy once the cause was identified, bugs in MSIL injection are very hard to track down. The exception could point to any other part of the injected code being incorrect. I haven’t seen any effective way (or tool, for that matter) to debug or review runtime-injected code yet. It appears one is pretty much left with nothing but trial and error in such cases.

Categories: Open Source, Projects

Announcing Modelshredder/MoreLinq project merge

December 30, 2009 Leave a comment

I have been able to put some effort into the modelshredder project, and after a little consultation with Jon Skeet I am considering merging it with the morelinq project. morelinq provides some very useful IEnumerable extensions, such as ForEach to execute an Action on each element of a sequence. Since morelinq is licensed under the Apache License, it will be necessary to re-license the modelshredder code (which is currently LGPL).

I think there are a lot of reasons in favor of such a project merge:

  • morelinq has excellent code documentation and test coverage
  • both projects have equal scope (IEnumerable extensions)
  • simplified deployment for adopters of both libraries (one dll)
  • broader base of maintainers/contributors
  • Silverlight support for modelshredder

I have requested a code review on the morelinq mailing list and have incorporated the suggestions made. Before my code is ready to be merged into the morelinq code base, I still have to write some unit tests, and I want to further improve the code documentation. The most notable change is that modelshredder is dropping support for non-generic IEnumerables. Restricting the scope to the generic IEnumerable<T> interface makes the code a lot less complex and easier to read.

I still have a lot of ideas for the modelshredder project and will extend it as I see fit. Besides those “future plans”, I have been able to fix some nasty bugs involving invalid MSIL being generated. But I will leave that for another post.

Categories: Open Source, Projects

GHUnit: Writing Custom Assert Macros

October 24, 2009 Leave a comment

When I evaluated unit testing frameworks for iPhone development, one of the reasons I chose GHUnit was that it has more sophisticated assert macros than the other available frameworks. Still, there are some asserts I missed, so I simply took the time to write my own.

Unlike test frameworks in the .NET or Java ecosystem, all Objective-C frameworks provide preprocessor macros to implement assertions instead of providing a static class with Assert methods. A typical assert macro looks like the following (abridged):


#define GHAssertEquals(a1, a2, description, ...) \
do { \
	@try {\
		if (@encode(__typeof__(a1)) != @encode(__typeof__(a2))) { \
			[self failWithException:[NSException ghu_failureInFile:@"Type mismatch"...]; \
		} else { \
			if (![a1encoded isEqualToValue:a2encoded]) { \
				[self failWithException:[NSException ghu_failureInEqualityBetweenValue...]; \
			} \
		} \
	} \
	@catch (id anException) {\
		[self failWithException:[NSException ghu_failureInRaise...]; \
	}\
} while(0)

The body of the macro is a do {} while(0) loop, which provides local scope for the variables needed to implement the assertion; the code executes only once even though a loop construct is used. The macro first checks the necessary preconditions, in this case argument types. This is necessary because a macro is a simple text substitution rather than a true method call whose argument types the compiler checks (which is why I don’t like assertions implemented as macros and would rather see assert methods). Next comes the actual assertion. The type check and the actual assertion are both wrapped in a @try {} @catch() {} block, so any errors occurring in the macro code also cause the test to fail (a real macro would contain a lot of additional code for preparing exception descriptions etc.).

I consider the GHUnit macros a very useful set of primitives that can be combined to construct more complicated assertions:


#define GHFileAssertNotEmpty(file) \
do { \
	GHAssertTrue([[NSFileManager defaultManager] fileExistsAtPath:file], nil); \
	NSString* written = [NSString stringWithContentsOfFile:file]; \
	GHAssertNotNil(written, nil); \
	GHAssertGreaterThan((int)[written length], 0, nil); \
} while (0)

Note that I don’t need to take care of all the nasty details required to write a proper primitive macro as outlined above. The only disadvantage of a macro like the one above is localizing the failed assertion, as the exception thrown might not be directly traceable to the code using the macro. This is not a disadvantage of this method in particular but inherent to all macros. Xcode’s right-click Jump to Definition comes to the rescue here.

Categories: GHUnit, iPhone, Open Source, Testing

GHUnit: Parallel test execution performance implications

October 23, 2009 Leave a comment

As my unit test suite for the iRow project grows, I am running into issues with test execution speed. I have maintained a clear distinction between integration and unit tests, so no external (and possibly slow) resources such as disk I/O (including nibs) or SQLite databases are involved.

I usually run my unit test suite in the Simulator. Having set GHUnit to run my tests automatically on startup, this is as simple as hitting cmd-r (Xcode Build & Run). It takes some time to update the app in the Simulator, usually around 1-3 secs, but I haven’t found this to be an issue, as I usually take the time to do some formatting on the code I am currently working on. GHUnit also makes it very convenient to select a subset of tests to run, and it persists these settings between builds, so I don’t have to browse through a hundred tests when one is failing.

Even though I was only running a subset of tests, it clearly took too long for me (4-5 secs measured with a stopwatch from app startup). This number also did not correspond to what GHUnit reported as test execution time (around 0.2 secs). Browsing the GHUnit code to see where the time was being spent, I noticed that GHTestCase’s default implementation of the

- (BOOL)shouldRunOnMainThread

method, which GHUnit uses to determine whether the runner needs to spawn a child thread to execute a test case, always returns false. Creating a thread is a costly operation in terms of overhead, and the synchronization necessary to retrieve test results adds more. That’s why I suggest deriving all your test cases from a base class (which inherits from GHTestCase) to have a central point of control over unit test execution (via shouldRunOnMainThread). This also yields a positive effect for integration testing: my integration tests often need to run on the main thread because they require input dispatched to the main thread’s run loop.

This is how my implementation of the shouldRunOnMainThread method looks:

- (BOOL)shouldRunOnMainThread
{
#ifdef IROW_INTEGRATION_TESTING
    return TRUE; // integration tests require the main thread's run loop
#else
    return TRUE; // unit tests on the main thread for speed; flip to FALSE to hunt for threading side effects
#endif
}

The IROW_INTEGRATION_TESTING symbol is defined in the integration test project’s prefix header. I think it is a pretty simple but effective way to control test execution.

Executing all tests on the main thread brought astonishing results: test time is down to 0.1 secs (measured with a stopwatch). However, it might be interesting to run tests on different threads from time to time to detect unintended side effects involving global state. If tests seem to fail randomly when run multiple times in a row, that is a good indicator of such problems.
