Debugging on a mobile device using Fiddler and IIS Express

December 28, 2012

When writing mobile web applications that work against a RESTful API, it's useful to be able to trace all HTTP traffic generated by the app. In this post, I'm going to describe how to set up an ASP.NET Web API project hosted within IIS Express so that you can view the traffic generated by a mobile device (e.g. your iPhone). The technique I'm outlining here does _not_ rely on configuring a proxy on the mobile device, which is cumbersome if you don't have a device reserved exclusively for development. Replace the ports and hostnames in the instructions below to fit your environment. This setup requires that your dev machine's hostname can be resolved via DNS on your local network.

Expose IIS Express on the FQDN of your host

Open C:\Users\YourUser\Documents\IISExpress\config\applicationHost.config and edit the binding for your Web API project.
Replace <binding protocol="http" bindingInformation="*:1182:localhost" /> with <binding protocol="http" bindingInformation="*:1182:hostname.domain.com" />
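
For orientation, this binding sits inside the <bindings> element of your site entry in applicationHost.config. The surrounding XML looks roughly like this (site name, id, and physicalPath below are made up; yours will differ):

<site name="MyWebApi" id="2">
  <application path="/">
    <virtualDirectory path="/" physicalPath="C:\code\MyWebApi" />
  </application>
  <bindings>
    <binding protocol="http" bindingInformation="*:1182:hostname.domain.com" />
  </bindings>
</site>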

Add a URL reservation

We need to allow external connections on the port used by IIS Express. Run the following command from an elevated prompt: netsh http add urlacl url=http://hostname.domain.com:1182/ user=everyone
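
If you need to undo the reservation later (say, because the port changed), the matching delete command is:

netsh http delete urlacl url=http://hostname.domain.com:1182/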

Configure Fiddler to proxy incoming traffic

See http://www.fiddler2.com/Fiddler/Help/ReverseProxy.asp and use option #2.
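
In short, option #2 boils down to enabling "Allow remote computers to connect" under Tools > Fiddler Options > Connections and adding a small rule to OnBeforeRequest (Rules > Customize Rules…). A sketch using the hostname and ports from this post:

// In Fiddler's OnBeforeRequest: retarget requests arriving on
// Fiddler's port 8888 to IIS Express on port 1182.
if (oSession.host.toLowerCase() == "hostname.domain.com:8888") {
    oSession.host = "hostname.domain.com:1182";
}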

Connect with your mobile device

Use http://hostname.domain.com:1182/ to access IIS Express directly and http://hostname.domain.com:8888/ to route all HTTP requests through Fiddler.

Categories: Uncategorized

F# 3.0 on AppHarbor

November 1, 2012

The online analytics service for Rowing in Motion has some intense data-processing requirements. A typical logfile that users may want to work with for a 90-minute training session is about 5 megabytes in compressed size. The in-memory models we need for data analysis encompass millions of data points and can easily exceed 30 MB of memory when fully unfolded.

It's pretty clear we could not offer a good user experience processing all this data locally on a device, so we decided to build the data analysis software as an online service. There are some other benefits to this model too, especially in the space of data archival and historical comparisons. F# excels at expressing our calculation models in a short and concise manner and makes parallelizing these calculations easy, which is crucial for achieving acceptable response times in our scenario.

Deciding to use F# was easy, but it turned out I faced some problems integrating with our cloud hosting platform of choice, AppHarbor. This post explains what needs to be done to get F# code to compile on AppHarbor, and also how to run unit tests there.

Compiling F# 3.0 code on AppHarbor

Visual Studio 2012 installs the F# "SDK" (there is no official one for F# 3.0) into C:\Program Files\Microsoft SDKs\F#, and that's where the default F# project templates point:

<Import Project="$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" Condition=" Exists('$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets')" />

We will fix this (and another issue) by copying the whole "SDK" folder into our source repository at tools/F# (yes, everything). Next up, we will create a custom targets file (mine is build\RowingInMotion.FSharp.Targets) that we will reference instead. Replace the import line above with:

  <Import Project="$(SolutionDir)\build\RowingInMotion.FSharp.Targets" />

We will also have to delete the FSharp.Core reference from the fsproj file. Since the AppHarbor build machines don't have FSharp.Core 4.3.0 in the GAC (or in a reference assemblies location), we have to include it in the project too. I copied mine from C:\Program Files (x86)\Reference Assemblies\Microsoft\FSharp to lib\FSharp.
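
For reference, the FSharp.Core line to delete from the fsproj looks roughly like this (the exact shape may differ between project templates):

  <Reference Include="FSharp.Core, Version=4.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />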

The custom targets file we created takes care of including the correct FSharp.Core reference, and points Microsoft.FSharp.Targets at the F# compiler directory in our source tree:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Include a default reference to the correct FSharp.Core assembly -->
  <ItemGroup>
    <Reference Include="FSharp.Core, Version=4.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
      <HintPath>$(SolutionDir)\lib\FSharp\3.0\Runtime\v4.0\FSharp.Core.dll</HintPath>
    </Reference>
  </ItemGroup>

  <!-- Override the path to the F# compiler to point to our tool dir -->
  <PropertyGroup>
    <FscToolPath>$(SolutionDir)\tools\F#\3.0\Framework\v4.0</FscToolPath>
  </PropertyGroup>
  <Import Project="$(SolutionDir)\tools\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" />

</Project>

One last thing that needs fixing: the F# compiler (itself written in F#) also needs a copy of FSharp.Core, so I simply dropped one right next to it. That's it: now you should be able to compile F# 3.0 projects on AppHarbor. It's nice that F# is "standalone" enough from the rest of the .NET Framework that it can be pulled apart this easily, but it would be even better if Microsoft offered an F# SDK that the folks at AppHarbor could install on their build servers.

Running F# xUnit tests on AppHarbor

AppHarbor uses Gallio to run unit tests. Unfortunately, Gallio is not able to detect static test methods, which means you cannot write tests as module-level functions. Instead you have to resort to declaring normal types with members, which is a bit heavier on the syntax and feels considerably less idiomatic (and it's more typing…); the sketch below shows both styles. I have filed a bug with the Gallio team, which can be tracked here: http://code.google.com/p/mb-unit/issues/detail?id=902. It should be noted that the xUnit Visual Studio runner runs F# xUnit tests just fine. We'll see whether I need to switch to a more F#-specific testing framework in the future.
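
Here is a minimal sketch of the two styles (names made up; assumes a reference to xunit.dll):

open Xunit

// Module-level tests compile to static methods, which Gallio skips.
module CalculatorModuleTests =
    [<Fact>]
    let ``adding zero changes nothing`` () =
        Assert.Equal(42, 42 + 0)

// The same test as an instance member on a plain type: more typing, but Gallio finds it.
type CalculatorTests() =
    [<Fact>]
    member this.``adding zero changes nothing`` () =
        Assert.Equal(42, 42 + 0)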

Categories: .NET, F#, Testing

Configuring DiffMerge for Git

October 4, 2012

One thing that I particularly do _not_ like about Git is that it doesn't integrate with merge tools automatically. Mercurial is a bit smarter here: if you have a merge tool installed, it will find and configure it for you. With Git, you have to resort to the following commands to set up DiffMerge:

git config --global diff.tool diffmerge
git config --global difftool.diffmerge.cmd "C:\Program Files\SourceGear\Common\DiffMerge\sgdm.exe \$LOCAL \$REMOTE"

git config --global merge.tool diffmerge
git config --global mergetool.diffmerge.cmd "C:\Program Files\SourceGear\Common\DiffMerge\sgdm.exe --merge --result=\$MERGED \$LOCAL \$BASE \$REMOTE"
git config --global mergetool.diffmerge.trustExitCode true
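
Once configured, a quick way to try it out is to diff a file against the previous commit, or to launch the merge tool during a conflicted merge (the path below is a placeholder):

git difftool HEAD~1 -- path/to/file.cs
git mergetool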

Categories: Uncategorized

Analyzing Facebook Ad A/B-Test Performance with R

September 9, 2012

I'm experimenting with Facebook advertising to help increase awareness of my micro-startup Rowing in Motion. As I'm trying various content and target combinations, analyzing the campaign statistics for significant differences in ad performance is important to find the best way to reach your target audience.

This is just a quick post outlining the steps necessary to do an ANOVA analysis with R on Facebook ad campaign reports. I won't go into the details of ANOVA here, but in short it lets you analyze whether the means of any number of groups are equal or not, and to what level of significance.

For a proper ANOVA, your data must have three important properties:

  1. Normal distribution
  2. Homogeneous variance
  3. Independence

Independence is easily satisfied because your ads are never shown together (each impression is independent of the previous ones). There are various ways to check homogeneity of variance, but the easiest is to plot the variance and, if it looks homogeneous enough, go with that (you can also use a Levene test). Normal distribution can be assumed.
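
For the Levene test, one line of R does the job, assuming the car package is installed (this uses the fb.data frame we load below):

> library(car)
> leveneTest(Page.Likes ~ Campaign, data = fb.data)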

To get the data from Facebook, generate a report with all the campaigns you want to compare, select daily summary, and download it as CSV.

Next, fire up R and load the data:

> fb.data <- read.csv("report.csv") # filename assumed; use your downloaded report
> names(fb.data)
[1] "Date" "Campaign" "Campaign.ID" "Impressions" "Social.Impressions" "Social.."
[7] "Clicks" "Social.Clicks" "CTR" "Social.CTR" "CPC" "CPM"
[13] "Spent" "Reach" "Frequency" "Social.Reach" "Actions" "Page.Likes"
[19] "App.Installs" "Event.Responses" "Unique.Clicks" "Unique.CTR"

Next up, we want to attach the data to spare us some typing down the road, and then create a simple box-and-whisker plot (I'm plotting Campaign vs. Page.Likes, but you can substitute Clicks etc.):

> attach(fb.data)
> plot(Campaign, Page.Likes)

Now we’re going to create the ANOVA:

> fb.aov <- aov(Page.Likes ~ Campaign, data = fb.data)
> summary.lm(fb.aov)

Call:
aov(formula = Page.Likes ~ Campaign, data = fb.data)

Residuals:
    Min      1Q  Median      3Q     Max
-1.6667 -1.2222  0.0000  0.6667  3.3333

Coefficients:
                            Estimate Std. Error t value Pr(>|t|)
(Intercept)               -1.986e-16  4.120e-01   0.000  1.00000
CampaignRiM_PageAds_02_01  1.667e+00  5.827e-01   2.860  0.00669 **
CampaignRiM_PageAds_02_02  1.222e+00  5.827e-01   2.098  0.04230 *
CampaignRiM_PageAds_02_03  3.333e-01  5.827e-01   0.572  0.57047
CampaignRiM_PageAds_02_04  1.222e+00  5.827e-01   2.098  0.04230 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.236 on 40 degrees of freedom
Multiple R-squared: 0.221, Adjusted R-squared: 0.1431
F-statistic: 2.836 on 4 and 40 DF, p-value: 0.03674

The null hypothesis of an ANOVA is that the expected value is equal among all groups. If the test shows a low p-value (<0.05) we call that a significant difference. The summary.lm output above has chosen the first group as the intercept, i.e. the reference group. If we want to conclude that a certain type of ad performed better than the reference ad, there had better be a significant difference (the confidence level is indicated by the ** significance codes). This ANOVA shows that CampaignRiM_PageAds_02_01 performed significantly differently from the reference campaign, while we can't draw that conclusion for CampaignRiM_PageAds_02_03 (no significant difference).

You can choose another reference group using relevel:

> refCampaign <- relevel(Campaign, ref = "RiM_PageAds_02_01") # reference level chosen as an example
> summary.lm(aov(Page.Likes ~ refCampaign))
...

We can also do a pairwise t-test to see which groups have significantly different means (the p-values need adjusting for multiple comparisons; Holm's method, a refinement of the Bonferroni adjustment, is a safe choice). The p-value reflects the probability of observing the measured or an even more extreme outcome if the null hypothesis holds (H0: all means are equal). In this case, only the difference between the reference campaign RiM_PageAd_01 and RiM_PageAds_02_01 comes anywhere near significance (p = 0.067, just above the conventional 0.05 cutoff).

> pairwise.t.test(Page.Likes, Campaign, p.adj = "holm")

Pairwise comparisons using t tests with pooled SD

data: Page.Likes and Campaign

                  RiM_PageAd_01 RiM_PageAds_02_01 RiM_PageAds_02_02 RiM_PageAds_02_03
RiM_PageAds_02_01 0.067         -                 -                 -
RiM_PageAds_02_02 0.338         1.000             -                 -
RiM_PageAds_02_03 1.000         0.247             0.810             -
RiM_PageAds_02_04 0.338         1.000             1.000             0.810

P value adjustment method: holm

Categories: Uncategorized

OpenCL Work Item Ids: Global/Group/Local

February 3, 2012

This post is my notepad while figuring out how OpenCL handles assigning work item ids.

The basics:

  • A kernel is invoked once for each work item. Each work item has private memory.
  • Work items are grouped into work groups. Each work group shares local memory.
  • The total number of work items is specified by the global work size. Global and constant memory is shared across all work items of all work groups.

Here's the standard picture: each small rectangle represents a work item, and each group of rectangles represents a work group.

[Figure: a grid of work items partitioned into work groups]

OpenCL works with the notion of dimensions: you can declare your number of work items using dimensional indices. In the above example, the work group size is Sx = 4 and Sy = 4. How many dimensions you use is up to you; however, there is a physical limit on the maximum number of work items per group as well as globally.

Inside a kernel, you can query the position of the work item this kernel instance is executing, either relative to its group or globally.

Querying the global position is done using get_global_id(dim), where dim is the dimension index (0 for the first dimension, 1 for the second, etc.). This call is equivalent to get_local_size(dim) * get_group_id(dim) + get_local_id(dim): get_local_size(dim) is the group size in dim, get_group_id(dim) is the position of the group in dim relative to all other groups (globally), and get_local_id(dim) is the position of the work item relative to its group. You can see this in the following annotated figure:

[Figure: the same grid annotated with get_group_id, get_local_id and the resulting get_global_id]

Since the OpenCL APIs only require you to specify the global size (total number of work items in a dimension) and the local size (number of work items per group), the number of groups is inferred from that data.
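
To make this concrete, here is a toy kernel that recomputes its global position from the group and local ids (the two always agree), together with the host-side enqueue for a 16x16 global size in 4x4 groups. Context, queue, kernel and buffer setup are omitted, and the names are made up:

__kernel void ids(__global int* out)
{
    size_t gx = get_global_id(0);
    size_t gy = get_global_id(1);
    // recompute each coordinate from group and local ids
    size_t rx = get_local_size(0) * get_group_id(0) + get_local_id(0); // == gx
    size_t ry = get_local_size(1) * get_group_id(1) + get_local_id(1); // == gy
    out[gy * get_global_size(0) + gx] = (int)(ry * get_global_size(0) + rx);
}

// host side: 16x16 work items in 4x4 groups, so OpenCL infers 4x4 = 16 groups
size_t global_size[2] = { 16, 16 };
size_t local_size[2]  = { 4, 4 };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global_size, local_size, 0, NULL, NULL);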

Categories: OpenCL

Kiwi as a static framework or Universal Library

January 27, 2012

A problem commonly encountered when using open-source iOS frameworks is the lack of a fully functional framework facility in Xcode. Part of the issue is that Apple does not allow dynamic linking on iOS devices; the other is that libraries need to support two different device architectures, armv6 (up to iPhone 3G) and armv7 (iPhone 3GS and later). On top of that, we also need a binary that will run on the simulator (x86).

The easiest solution to the library problem in Xcode is using project dependencies to build libraries in the configuration you need. When taking a source dependency is not desirable, you are pretty much left on your own if the OSS project doesn't provide binaries.

Fortunately enough, it’s not too difficult to build your own universal frameworks. Below are the steps I use for building a version of Kiwi:

  1. Grab the Universal Framework Xcode templates from https://github.com/kstenerud/iOS-Universal-Framework
  2. Install the Fake framework flavor (although the Real framework flavor should work as well)
  3. Create a new Xcode project with the Fake framework template
  4. Add all source files of Kiwi (make sure to check the "Copy items into destination group's folder" box)
  5. Select the Kiwi static library target, open the project editor, go to Build Phases > Copy Headers, select all headers in the Project group, right-click and select Move to Public
  6. Select the Kiwi static library target, open the project editor, go to Build Phases > Link Binary With Libraries and add SenTestingKit.framework
  7. Build
  8. Go to the Project Navigator (Cmd-1) and select Products > Kiwi.framework. Right-click and select "Show in Finder"
  9. You should see two folders, Kiwi.framework and Kiwi.embeddedframework; Kiwi.framework is the one we need
  10. Copy the Kiwi.framework folder into your lib folder
  11. Open the project you want to use Kiwi.framework in, select your target, open the project editor, go to Build Phases > Link Binary With Libraries, click + and add Kiwi.framework from your lib folder

That's it; it takes less than two minutes once you know the trick.
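
To verify that the framework links correctly, a minimal smoke-test spec in the consuming test target will do (the spec name below is made up):

#import <Kiwi/Kiwi.h>

SPEC_BEGIN(FrameworkSmokeSpec)

describe(@"Kiwi.framework", ^{
    it(@"links and runs", ^{
        [[theValue(1 + 1) should] equal:theValue(2)];
    });
});

SPEC_END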

Categories: iPhone, Objective-C, Tools

Continuous Deployment for Apps via testflightapp

January 26, 2012

The benefits of continuous integration are widely known. By extending the ideas of continuous integration to the full software lifecycle, continuous delivery becomes an inevitable practice. Especially in the context of managing a beta program for mobile devices, most of which I as a developer have no physical access to, the ability to run fully automated deployments is crucial.

Testflightapp.com provides a great service to iOS developers by managing app provisioning and deployments. They also provide easy-to-use instrumentation facilities.

Continuous deployment with testflightapp.com is a breeze: all you need your build server to do is interact with a straightforward web API to upload your .ipa packages.
Here's the script I'm using for RowMotion:

#!/bin/bash

# testflightapp.com tokens
API_TOKEN="YOUR_API_TOKEN"
TEAM_TOKEN="YOUR_TEAM_TOKEN"

PRODUCT_NAME="RowMotion"
ARTEFACTS="$PWD/Artefacts"

SIGNING_IDENTITY="iPhone Distribution"
PROVISIONING_PROFILE="$PWD/id/RowMotionAdHoc.mobileprovision"

# calculated vars
OUT_IPA="${ARTEFACTS}/${PRODUCT_NAME}.ipa"
OUT_DSYM="${ARTEFACTS}/${PRODUCT_NAME}.dSYM.zip"

# kill artefacts directory
rm -rf "$ARTEFACTS"
mkdir "$ARTEFACTS"

# compile
echo "##teamcity[compilationStarted compiler='xcodebuild']"
xcodebuild -workspace RowMotion.xcworkspace -scheme RowMotion -sdk iphoneos5.0 -configuration Release build archive
buildSuccess=$?

if [[ $buildSuccess != 0 ]] ; then
  echo "##teamcity[message text='compiler error' status='ERROR']"
  echo "##teamcity[compilationFinished compiler='xcodebuild']"
  exit $buildSuccess
fi

echo "##teamcity[compilationFinished compiler='xcodebuild']"

#ipa
echo "##teamcity[progressMessage 'Creating .ipa for ${PRODUCT_NAME}']"

DATE=$( /bin/date +"%Y-%m-%d" )
ARCHIVE=$( /bin/ls -t "${HOME}/Library/Developer/Xcode/Archives/${DATE}" | /usr/bin/grep xcarchive | /usr/bin/sed -n 1p )
DSYM="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/dSYMs/${PRODUCT_NAME}.app.dSYM"
APP="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/Products/Applications/${PRODUCT_NAME}.app"

/usr/bin/xcrun -sdk iphoneos PackageApplication -v "${APP}" -o "${OUT_IPA}" --sign "${SIGNING_IDENTITY}" --embed "${PROVISIONING_PROFILE}"

#symbols
echo "##teamcity[progressMessage 'Zipping .dSYM for ${PRODUCT_NAME}']"
/usr/bin/zip -r "${OUT_DSYM}" "${DSYM}"

# prepare build notes
NOTES=`hg tip`

#upload
echo "##teamcity[progressMessage 'Uploading ${PRODUCT_NAME} to TestFlight']"

/usr/bin/curl "http://testflightapp.com/api/builds.json" \
-F file=@"${OUT_IPA}" \
-F dsym=@"${OUT_DSYM}" \
-F api_token="${API_TOKEN}" \
-F team_token="${TEAM_TOKEN}" \
-F notes="${NOTES}" \
-F notify="True" \
-F distribution_lists="Private"

Make sure to adapt the script to your requirements. One trick I'm fond of is automatically including SCM information in the build notes (the hg tip step does just that; see the note below).
For deployments I'm using two lists: a private one to which all builds are published, and a public one to which I can selectively deploy. What's so great about testflightapp.com is that it automatically sends emails to notify my testers about the new build and then lets me monitor installs.
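
As an aside on the hg tip step: if the default output is too verbose for build notes, a Mercurial template trims it down to one line, for example:

NOTES=`hg tip --template "{rev}:{node|short} {desc|firstline}"`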
