Upgrading to RavenDb 3.0 from 2.5

June 10, 2015

Haven't blogged in a long time, but I thought I'd quickly share my experience upgrading Rowing in Motion Analytics from RavenDb 2.5 to RavenDb 3.0.

The upgrade was not as painless as anticipated and we hit quite a few surprises along the way.

Web API Upgrade Mess

By far the biggest problem we had is that the RavenDb server migrated to ASP.NET Web API in 3.0, and this clashes massively with projects that use Web API themselves.

This only affected our unit and acceptance tests, which use an in-memory embedded instance of RavenDb, but it still forced us to upgrade to Web API 2.2 throughout our full solution. We only discovered this when upgrading the RavenDb NuGet packages and were presented with the notice linked above. At least we got that notice, but it would have been nice of the RavenDb team to document this in the 3.0 release notes.

Since we were still using Web API v1, the upgrade required considerable work (e.g. authentication has moved into the HttpRequestContext and is no longer done via Thread.CurrentPrincipal). DotNetOpenAuth doesn't work with Web API 2.2 either; only the current 5.0 alpha 3 release does…

Once upgraded to Web API 2.2, you will also need to ensure that the RavenDb controllers are not routed to, by providing a custom IAssembliesResolver that excludes the RavenDb assemblies, e.g.:


// requires: System.Collections.Generic, System.Reflection, System.Web.Http.Dispatcher
config.Services.Replace( typeof( IAssembliesResolver ), new ThisAssemblyOnlyResolver() );

class ThisAssemblyOnlyResolver : IAssembliesResolver
{
    public ICollection<Assembly> GetAssemblies()
    {
        // only controllers from our own assembly are considered for routing
        return new List<Assembly> { typeof( WebApiConfiguration ).Assembly };
    }
}

API Changes

There have been a couple of API changes in RavenDb.Client that were relatively easy to adapt to:

  • .AsProjection() was removed, replaced by .ProjectFromIndexFieldsInto()
  • .LuceneQuery() was removed, replaced by .DocumentQuery()
  • Session.Advanced.Defer() behaves differently and will now throw if multiple deferred operations on a document are pending in a session (e.g. a delete followed by a store; see the sketch after this list)
  • Formatting of document ids in exception messages has changed to all lowercase, even when the document id itself has a MixedCase prefix. We rely on this in a few places to handle concurrency exceptions (I know relying on exception strings is a bad idea, but it's currently the only way Raven will tell you about the source of a conflict, which has a meaning in our domain)
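For illustration, here is roughly the kind of sequence that now throws; a sketch assuming the Raven.Abstractions.Commands types and a made-up Order document:

// a deferred delete followed by a store of the same document in one session
session.Advanced.Defer(new DeleteCommandData { Key = "orders/1" });
session.Store(new Order(), "orders/1"); // 3.0 now rejects this combination of pending operations
session.SaveChanges();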

Subtly Breaking Changes

There has also been at least one subtle change that may break your existing code. It appears RavenDb has changed the ObjectCreationHandling policy of the Newtonsoft.Json library it uses internally for serializing/deserializing documents to "Auto". If you have objects with collection properties (an IEnumerable<T> is already enough), you may suddenly find that deserialization appends to an existing collection value instead of replacing it.

You can work around this by doing:

store.Conventions.CustomizeJsonSerializer = x =>
{
    x.ObjectCreationHandling = Raven.Imports.Newtonsoft.Json.ObjectCreationHandling.Replace;
};
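To see the append behavior in isolation, here's a minimal sketch with plain Newtonsoft.Json outside of RavenDb (the Person type is made up for illustration):

class Person
{
    public Person() { Tags = new List<string> { "default" }; }
    public List<string> Tags { get; set; }
}

var json = @"{ ""Tags"": [""a"", ""b""] }";

// ObjectCreationHandling.Auto (Json.NET's default): the pre-initialized
// list is reused and appended to, so Tags ends up with 3 items
var appended = JsonConvert.DeserializeObject<Person>(json);

// ObjectCreationHandling.Replace: the pre-initialized list is discarded,
// so Tags contains exactly the 2 deserialized items
var replaced = JsonConvert.DeserializeObject<Person>(json,
    new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace });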

Deployment Changes

The RavenDb.Embedded NuGet package is no longer necessary and can be removed safely.

Categories: Uncategorized

Debugging on a mobile device using Fiddler and IIS Express

December 28, 2012

When writing mobile web applications that work against a RESTful API, it's useful to be able to trace all HTTP traffic generated by the app. In this post, I'm going to describe how to set up an ASP.NET Web API project hosted in IIS Express so that you can view the traffic generated by a mobile device (i.e. your iPhone). The technique I'm outlining here does _not_ rely on configuring a proxy on the mobile device, which is cumbersome if you don't have a device reserved exclusively for development. Replace the ports and hostnames in the instructions below to fit your environment. This setup requires that you can resolve your dev machine's hostname using DNS on your local network.

Expose IIS Express on the FQDN of your host

Open C:\Users\YourUser\Documents\IISExpress\config\applicationHost.config and edit the binding for your Web API project.
Replace <binding protocol="http" bindingInformation="*:1182:localhost" /> with <binding protocol="http" bindingInformation="*:1182:hostname.domain.com" />

Add a URL Reservation

We need to allow external connections on the port used by IIS Express. Run the following command from an elevated prompt: netsh http add urlacl url=http://hostname.domain.com:1182/ user=everyone
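Should you need to undo this later, the matching delete command is: netsh http delete urlacl url=http://hostname.domain.com:1182/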

Configure Fiddler to proxy incoming traffic

See http://www.fiddler2.com/Fiddler/Help/ReverseProxy.asp and use option #2.

Connect with your mobile device

Use http://hostname.domain.com:1182/ to access IIS Express directly and http://hostname.domain.com:8888/ to route all HTTP requests through fiddler.

Categories: Uncategorized

F# 3.0 on AppHarbor

November 1, 2012

The online Analytics service for Rowing in Motion has some intense data-processing requirements. A typical logfile that users may want to work with for a 90-minute training session is about 5 megabytes compressed. The in-memory models we need for data analysis encompass millions of data points and can easily exceed 30 MB of memory when fully unfolded.

It's pretty clear we could not offer a good user experience when processing all this data locally on a device, so we decided to build the data analysis software as an online service. There are some other benefits to this model too, especially in the space of data archival and historical comparisons. F# excels at expressing our calculation models in a short and concise manner and makes parallelizing these calculations easy, which is crucial to achieving acceptable response times in our scenario.

Deciding to use F# was easy, but it turns out I faced some problems integrating with our cloud hosting platform of choice, AppHarbor. This post explains what needs to be done to get F# code to compile on AppHarbor, and also how to run unit tests there.

Compiling F# 3.0 code on AppHarbor

Visual Studio 2012 installs the F# "SDK" (there is no official one for F# 3.0) into C:\Program Files\Microsoft SDKs\F#, and that's where the default F# project templates point to:

<Import Project="$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" Condition=" Exists('$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets')" />

We will fix this (and another issue) by copying the whole "SDK" folder into our source repository at tools/F# (yes, everything). Next up, we create a custom targets file (RowingInMotion.FSharp.Targets in our case) that we will reference instead. Replace the import line above with:

  <Import Project="$(SolutionDir)\build\RowingInMotion.FSharp.Targets" />

We will also have to delete the FSharp.Core reference from the fsproj file. Since the AppHarbor build machines don't have FSharp.Core 4.3.0 in the GAC (or in a Reference Assemblies location), we have to include it in the project too. I copied mine from C:\Program Files (x86)\Reference Assemblies\Microsoft\FSharp to lib\FSharp.

The custom targets file we created earlier takes care of including the correct FSharp.Core reference, as well as pointing Microsoft.FSharp.Targets to the F# compiler directory in our source tree:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  
  <!--Include a default reference to the correct FSharp.Core assembly-->
  <ItemGroup>
	  <Reference Include="FSharp.Core, Version=4.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
		  <HintPath>$(SolutionDir)\lib\FSharp\3.0\Runtime\v4.0\FSharp.Core.dll</HintPath>
	  </Reference>
  </ItemGroup>
  
  <!--Override the Path to the FSharp Compiler to point to our tool dir-->
  <PropertyGroup>
	<FscToolPath>$(SolutionDir)\tools\F#\3.0\Framework\v4.0</FscToolPath>
  </PropertyGroup>
  <Import Project="$(SolutionDir)\tools\F#\3.0\Framework\v4.0\Microsoft.FSharp.Targets" />
  
</Project>

One last thing that needs to be fixed: the F# compiler (itself written in F#) also needs a copy of FSharp.Core, so I simply dropped one right next to it. That's it, now you should be able to compile F# 3.0 projects on AppHarbor. It's nice that F# is "standalone" enough from the rest of the .NET Framework that it can be pulled apart this easily, but it would be even better if Microsoft offered an F# SDK that the folks at AppHarbor could install on their build servers.

Running F# xUnit tests on AppHarbor

AppHarbor uses Gallio to run unit tests. Unfortunately, Gallio is not able to detect static test methods, which means you cannot write tests as functions in modules. Instead you have to resort to declaring normal types with members, which is a bit heavier on the syntax and feels considerably less idiomatic (and it's more typing…); see the sketch below. I have filed a bug with the Gallio team, which can be tracked here: http://code.google.com/p/mb-unit/issues/detail?id=902. It should be noted that the xUnit Visual Studio runner can run F# xUnit tests just fine. We'll see if I need to switch to a more F#-specific testing framework in the future.
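Here's a minimal sketch of both styles against xUnit (module, type and test names made up):

namespace Tests

open Xunit

// Compiles to static methods - this is what Gallio fails to discover
module CalculatorModuleTests =
    [<Fact>]
    let ``addition works`` () = Assert.Equal(4, 2 + 2)

// Instance members on a plain class - the style Gallio requires
type CalculatorClassTests() =
    [<Fact>]
    member this.``addition works`` () = Assert.Equal(4, 2 + 2)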

Categories: .NET, F#, Testing

Configuring Diffmerge for Git

October 4, 2012

One thing that I do particularly _not_ like about Git is that it doesn't integrate with merge tools automatically. Mercurial is a bit smarter here: if you have a merge tool installed, it will find and configure it for you. With Git, you have to resort to the following commands to set up DiffMerge:

 

git config --global diff.tool diffmerge
git config --global difftool.diffmerge.cmd "C:\Program Files\SourceGear\Common\DiffMerge\sgdm.exe \$LOCAL \$REMOTE"

git config --global merge.tool diffmerge
git config --global mergetool.diffmerge.cmd "C:\Program Files\SourceGear\Common\DiffMerge\sgdm.exe --merge --result=\$MERGED \$LOCAL \$BASE \$REMOTE"
git config --global mergetool.diffmerge.trustExitCode true
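With that in place, you can launch DiffMerge for a diff or for resolving a conflicted merge with:

git difftool HEAD
git mergetool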

Categories: Uncategorized

Analyzing Facebook Ad AB-Test Performance with R

September 9, 2012

I'm experimenting with Facebook advertising to help increase awareness for my micro-startup Rowing in Motion. As I'm trying various content and target combinations, analyzing the campaign statistics for significant differences in ad performance is important to find the best way to reach your target audience.

This is just a quick post to outline the steps necessary to do an ANOVA analysis with R on Facebook ad campaign reports. I won't go into the details of ANOVA here, but in short it lets you analyze whether the means of any number of groups are equal or not, and to what level of significance.

For a proper ANOVA, your data must have three important properties:

  1. Normal Distribution
  2. Homogeneous Variance
  3. Independence

Independence is easily satisfied because your ads are never shown together (each impression is independent of the previous ones). Normal distribution can be assumed. There are various ways to check variance homogeneity, but the easiest is to plot the variance and, if it looks homogeneous enough, go with that (you can also use a Levene test, as sketched below).
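If you want more than eyeballing, a Levene test is one line in R (this assumes the car package is installed; fb.data is the data frame we load in the next step):

> library(car)
> leveneTest(Page.Likes ~ Campaign, data = fb.data)

A large p-value here means there is no evidence against homogeneous variances.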

To get the data from Facebook, generate a report with all the campaigns you want to compare, select the daily summary, and download it as CSV.

Next, fire up R and load the data:

> fb.data <- read.csv("facebook-ad-report.csv") # use the filename of the report you downloaded
> names(fb.data)
[1] "Date" "Campaign" "Campaign.ID" "Impressions" "Social.Impressions" "Social.."
[7] "Clicks" "Social.Clicks" "CTR" "Social.CTR" "CPC" "CPM"
[13] "Spent" "Reach" "Frequency" "Social.Reach" "Actions" "Page.Likes"
[19] "App.Installs" "Event.Responses" "Unique.Clicks" "Unique.CTR"

Next up, we want to attach the data to spare us some typing down the road, and then create a simple box-and-whisker plot (I'm plotting Campaign vs. Page.Likes, but you can substitute Clicks etc.):

> attach(fb.data)
> plot(Campaign, Page.Likes)

Now we’re going to create the ANOVA:

> fb.aov <- aov(Page.Likes ~ Campaign, data = fb.data)
> summary.lm(fb.aov)

Call:
aov(formula = Page.Likes ~ Campaign, data = fb.data)

Residuals:
Min 1Q Median 3Q Max
-1.6667 -1.2222 0.0000 0.6667 3.3333

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.986e-16 4.120e-01 0.000 1.00000
CampaignRiM_PageAds_02_01 1.667e+00 5.827e-01 2.860 0.00669 **
CampaignRiM_PageAds_02_02 1.222e+00 5.827e-01 2.098 0.04230 *
CampaignRiM_PageAds_02_03 3.333e-01 5.827e-01 0.572 0.57047
CampaignRiM_PageAds_02_04 1.222e+00 5.827e-01 2.098 0.04230 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.236 on 40 degrees of freedom
Multiple R-squared: 0.221, Adjusted R-squared: 0.1431
F-statistic: 2.836 on 4 and 40 DF, p-value: 0.03674

The null hypothesis of an ANOVA is that the expected value is equal among all groups. If the test shows a low p-value (<0.05) we call that a significant difference. The summary.lm output above has chosen the first group as the intercept, which makes it the reference group. If we want to conclude that a certain type of ad has performed better than the reference ad, there had better be a significant difference (the significance level is indicated by the * codes). This ANOVA shows that CampaignRiM_PageAds_02_01 performed significantly differently from the reference campaign, while we can't draw that conclusion for CampaignRiM_PageAds_02_03 (no significant difference).

You can choose another reference group using relevel (below I relevel to RiM_PageAds_02_01 as an example):

> refCampaign <- relevel(Campaign, ref = "RiM_PageAds_02_01")
> summary.lm(aov(Page.Likes ~ refCampaign))
> ...

We can also do a pairwise t-test to see which groups have significantly different means (we need an adjustment for multiple comparisons though; Holm's method, a step-down variant of the Bonferroni adjustment, is a safe choice). The p-value reflects the probability of observing the measured or an even more extreme outcome if the null hypothesis holds (H0: all means are equal). In this case, the difference between the reference campaign RiM_PageAd_01 and RiM_PageAds_02_01 comes closest to significance (p = 0.067), but after the adjustment none of the pairwise differences pass the 0.05 threshold.

> pairwise.t.test(Page.Likes, Campaign, p.adj = "holm")

Pairwise comparisons using t tests with pooled SD

data: Page.Likes and Campaign

RiM_PageAd_01 RiM_PageAds_02_01 RiM_PageAds_02_02 RiM_PageAds_02_03
RiM_PageAds_02_01 0.067 - - -
RiM_PageAds_02_02 0.338 1.000 - -
RiM_PageAds_02_03 1.000 0.247 0.810 -
RiM_PageAds_02_04 0.338 1.000 1.000 0.810

P value adjustment method: holm

Categories: Uncategorized

OpenCL Work Item Ids: Global/Group/Local

February 3, 2012

This post is my notepad while figuring out how OpenCL handles assigning work item ids.


The basics:

  • A kernel is invoked once for each work item. Each work item has private memory.
  • Work items are grouped into work groups. Each work group shares local memory.
  • The total number of work items is specified by the global work size. Global and constant memory is shared across all work items of all work groups.

Here's the standard picture: each rectangle represents a work item and each group of rectangles represents a work group.

[Figure: a grid of work items partitioned into work groups; each rectangle is a work item, each 4×4 block a work group]

OpenCL works with the notion of dimensions: you declare your number of work items by giving them dimensional indices. In the above example, the size of a work group is Sx=4 and Sy=4. How many dimensions you use is up to you; however, there is a physical limit on the maximum number of work items per group as well as globally.

Inside a kernel, you can query the position of the work item the current kernel instance is executing, either relative to its group or globally.

Querying the global position is done using get_global_id(dim), where dim is the dimension index (0 for the first dimension, 1 for the second, etc.). This call is equivalent to get_local_size(dim)*get_group_id(dim) + get_local_id(dim): get_local_size(dim) is the group size in dim, get_group_id(dim) is the position of the group in dim relative to all other groups (globally), and get_local_id(dim) is the position of a work item relative to its group. You can see this in the following annotated figure:

[Figure: the same grid annotated with get_group_id, get_local_id and get_local_size]

Since the OpenCL APIs only require you to specify the global size (total number of work items in a dimension) and the local size (number of work items per group), the number of groups is inferred from that data.
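To make the id bookkeeping concrete, here is a minimal kernel sketch (the kernel name and buffer are made up) in which every work item recomputes its own global id from its group and local ids:

__kernel void check_ids(__global int *out)
{
    size_t gid = get_global_id(0);
    // group id * group size + position within the group == global id
    size_t computed = get_group_id(0) * get_local_size(0) + get_local_id(0);
    out[gid] = (gid == computed); // writes 1 for every work item
}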

Categories: OpenCL

Kiwi as a static framework or Universal Library

January 27, 2012

A problem commonly encountered when using open-source iOS frameworks is the lack of a fully functional framework facility in Xcode. Part of the issue is that Apple does not allow dynamic linking on iOS devices; the other is that there are two different architectures that need to be supported by libraries targeting both armv6 (up to iPhone 3G) and armv7 devices (iPhone 3GS and later). On top of that, we also need a binary that runs in the simulator (x86).

The easiest solution to the library problem in Xcode is using project dependencies to build libraries in the configuration you need them in. When taking a source dependency is not desirable, you are pretty much left on your own if the OSS project doesn't provide binaries.

Fortunately enough, it’s not too difficult to build your own universal frameworks. Below are the steps I use for building a version of Kiwi:

  1. Grab the Universal Framework Xcode templates from https://github.com/kstenerud/iOS-Universal-Framework
  2. Install the Fake Framework flavor (although the Real Framework flavor should work as well)
  3. Create a new Xcode project with the Fake Framework template
  4. Add all source files of Kiwi (make sure to check the "Copy to destination group folder" box)
  5. Select the Kiwi static library target, project editor, Build Phases, Copy Headers; select all headers in the Project group, right-click and select "Move to Public"
  6. Select the Kiwi static library target, project editor, Build Phases, Link Binary With Libraries, and add SenTestingKit.framework
  7. Build
  8. Go to the Project Navigator (Cmd-1) and select Products > Kiwi.framework. Right-click and select "Show in Finder"
  9. You should see two folders: Kiwi.framework and Kiwi.embeddedframework – Kiwi.framework is the one we need
  10. Copy the Kiwi.framework folder into your lib folder
  11. Open the project you want to use Kiwi.framework in and select your target, project editor, Build Phases, Link Binary With Libraries, click + and add Kiwi.framework from your lib folder

That’s it. Takes less than two minutes once you know the trick.
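To double-check that the produced binary really is universal, you can inspect it with lipo (the path below is an example):

lipo -info lib/Kiwi.framework/Kiwi

If everything worked, the output lists the armv6, armv7 and i386 slices in the fat file.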

Categories: iPhone, Objective-C, Tools

Continuous Deployment for Apps via testflightapp

January 26, 2012

The benefits of continuous integration are widely known. By extending the ideas of continuous integration to the full software lifecycle, continuous delivery becomes an inevitable practice. Especially in the context of managing a beta program for mobile devices, most of which I as a developer have no physical access to, the ability to run fully automated deployments is crucial.

Testflightapp.com provides a great service to iOS developers by managing app provisioning and deployments. They also provide easy-to-use instrumentation facilities.

Continuous deployment with testflightapp.com is a breeze; all you need your build server to do is interact with a straightforward web API to upload your .ipa packages. Here's the script I'm using for RowMotion:

#!/bin/bash

# testflightapp.com tokens
API_TOKEN="YOUR_API_TOKEN"
TEAM_TOKEN="YOUR_TEAM_TOKEN"

PRODUCT_NAME="RowMotion"
ARTEFACTS="$PWD/Artefacts"

SIGNING_IDENTITY="iPhone Distribution"
PROVISIONING_PROFILE="$PWD/id/RowMotionAdHoc.mobileprovision"

# calculated vars
OUT_IPA="${ARTEFACTS}/${PRODUCT_NAME}.ipa"
OUT_DSYM="${ARTEFACTS}/${PRODUCT_NAME}.dSYM.zip"

# recreate the artefacts directory
rm -rf "$ARTEFACTS"
mkdir "$ARTEFACTS"

# compile
echo "##teamcity[compilationStarted compiler='xcodebuild']"
xcodebuild -workspace RowMotion.xcworkspace -scheme RowMotion -sdk iphoneos5.0 -configuration Release build archive
buildSuccess=$?

if [[ $buildSuccess != 0 ]] ; then
  echo "##teamcity[message text='compiler error' status='ERROR']"
  echo "##teamcity[compilationFinished compiler='xcodebuild']"
  exit $buildSuccess
fi

echo "##teamcity[compilationFinished compiler='xcodebuild']"

#ipa
echo "##teamcity[progressMessage 'Creating .ipa for ${PRODUCT_NAME}']"

DATE=$( /bin/date +"%Y-%m-%d" )
ARCHIVE=$( /bin/ls -t "${HOME}/Library/Developer/Xcode/Archives/${DATE}" | /usr/bin/grep xcarchive | /usr/bin/sed -n 1p )
DSYM="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/dSYMs/${PRODUCT_NAME}.app.dSYM"
APP="${HOME}/Library/Developer/Xcode/Archives/${DATE}/${ARCHIVE}/Products/Applications/${PRODUCT_NAME}.app"

/usr/bin/xcrun -sdk iphoneos PackageApplication -v "${APP}" -o "${OUT_IPA}" --sign "${SIGNING_IDENTITY}" --embed "${PROVISIONING_PROFILE}"

#symbols
echo "##teamcity[progressMessage 'Zipping .dSYM for ${PRODUCT_NAME}']"
/usr/bin/zip -r "${OUT_DSYM}" "${DSYM}"

# prepare build notes
NOTES=`hg tip`

#upload
echo "##teamcity[progressMessage 'Uploading ${PRODUCT_NAME} to TestFlight']"

/usr/bin/curl "http://testflightapp.com/api/builds.json" \
-F file=@"${OUT_IPA}" \
-F dsym=@"${OUT_DSYM}" \
-F api_token="${API_TOKEN}" \
-F team_token="${TEAM_TOKEN}" \
-F notes="${NOTES}" \
-F notify="True" \
-F distribution_lists="Private"

Make sure to adapt the script to your requirements. One trick I'm fond of is automatically including SCM information in the build notes (the hg tip step does just that).
For deployments I'm using two lists: a private one to which all builds are published, and a public one to which I can selectively deploy. What's so great about testflightapp.com is that it automatically sends emails to notify my testers about a new build and then lets me monitor installs.

Retrieving Coverage Information – LLVM, CoverStory and TeamCity

August 10, 2011

Abstract

This time, we will set up automated retrieval of code coverage metrics from our unit and integration test suites. When I started writing my iOS Continuous Integration series, we needed to use GCC and gcov to generate coverage information. Fortunately, this has changed: starting with iOS 5 Beta 4, Xcode ships a version of LLVM that is capable of generating coverage information.

We will use CoverStory to create a (not-so-pretty but useful) HTML report of our coverage information, and TeamCity will pick this up and display it in the Build Results tab.

Collecting Metrics

Setting up LLVM to generate coverage information is easy. The following is an augmented version of the instructions found in the CoverStory wiki:

  1. Edit all test targets and add -fprofile-arcs and -ftest-coverage to Other C Flags
  2. Edit all test targets, select the "Build Phases" tab and add /Developer/usr/lib/libprofile_rt.dylib to the "Link Binary With Libraries" phase
  3. Build and run; you should see *.gcda and *.gcno files generated in your bin directory
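If you prefer not to edit the targets by hand, the equivalent build settings can also be switched on per invocation on the xcodebuild command line (a sketch; workspace and scheme names are examples, and the libprofile_rt link step from step 2 is still required):

xcodebuild -workspace MyApp.xcworkspace -scheme MyAppTests \
  GCC_INSTRUMENT_PROGRAM_FLOW_ARCS=YES \
  GCC_GENERATE_TEST_COVERAGE_FILES=YES \
  build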

HTML Reports with CoverStory

Once the build script has compiled the product and executed all the tests, we need to pick up the generated coverage information. From this data, we need to generate a report that shows the number of lines covered. I'm using CoverStory to do just this. Unfortunately, it doesn't come with a command-line interface, so I had to resort to some hacked AppleScript to drive the process (note that I checked CoverStory into my source tree under tool/CoverStory.app). The script takes two command-line parameters: the first is the directory to search for coverage information, the second the output directory for the generated HTML report.

on run argv
	
	tell application "tool/CoverStory.app"
	    quit
		activate
		set x to open (item 1 of argv)
		tell x to export to HTML in (item 2 of argv)
		quit
	end tell
	
	return item 1 of argv & "|" & item 2 of argv
	
end run

OK, so that's all nice and sweet. However, there is another issue I hit with CoverStory (which has already been filed as a bug with the maintainers): CoverStory would start up and then freeze upon opening your coverage information. The linked bug report also has a patch fixing it, which has not made its way into an official release yet. I do have a private build of CoverStory with the fix applied; if you feel unable to build CoverStory yourself, email me and I'll be happy to provide you with my build. I hope this becomes obsolete very soon.

Alright, where did we leave off? We have an AppleScript for collecting coverage information, and it needs to be included in our build script. So our shell-based build script now looks like this:

# build here

# test here

# collect coverage results
echo "##teamcity[progressStart 'Collecting Coverage results']"
osascript coverStory.applescript $PWD $PWD/Artefacts/Coverage
echo "##teamcity[progressFinish 'Collecting Coverage results']"

Note that $PWD denotes the current working directory, i.e. the directory the build script is executed from. CoverStory will generate an HTML-based report and put it in our Artefacts directory in a subdirectory called Coverage. Next, we're going to make TeamCity pick up this report.

Integrating Coverage Reports with Teamcity

TeamCity does an admirable job at managing artifacts. As a first step, we need to edit our build configuration and add an artifact path: "%system.teamcity.build.checkoutDir%/Artefacts/Coverage => coverage.zip". This causes TeamCity to take the coverage report, put it into a coverage.zip archive, and store it alongside the build results.

Next, we edit the TeamCity server configuration, choose "Report Tabs" and hit "Create new report tab" with the following settings:

  • Tab Title: Code Coverage
  • Base path: coverage.zip
  • Start page: index.html

TeamCity will display an iframe with the contents of index.html from the coverage.zip artifact we created in the previous step. If everything went well, your build results now include the new tab. [Screenshot: TeamCity build results showing the Code Coverage report tab]

iOS: Detect Personal Hotspot

July 22, 2011

When you want to detect the type of available connections on an iPhone, the best resource you can find on the web is the sample code from Erica Sadun's excellent iPhone Cookbook (which I can wholeheartedly recommend). The sample code can be found on GitHub (look into 02 and 03): https://github.com/erica/iphone-3.0-cookbook-/tree/master/C13-Networking

While the solution presented is great, it fails to work on an iPhone 4 that has the Personal Hotspot feature enabled. In this scenario, the iPhone creates a network interface called "ap0" that bridges through to "en0" (WiFi) and "pdp_ip0" (3G). Since "en0" will not be marked as an AF_INET interface in this scenario, the approach Erica outlined fails here. Here's a dump of the available interfaces, their loopback and AF_INET status, and their assigned addresses:

2011-07-22 12:59:07.120 RowMotion[286:707] name: lo0, inet: 0, loopback: 0, adress: 24.3.0.0
2011-07-22 12:59:07.126 RowMotion[286:707] name: lo0, inet: 0, loopback: 0, adress: 0.0.0.0
2011-07-22 12:59:07.129 RowMotion[286:707] name: lo0, inet: 1, loopback: 0, adress: 127.0.0.1
2011-07-22 12:59:07.134 RowMotion[286:707] name: lo0, inet: 0, loopback: 0, adress: 0.0.0.0
2011-07-22 12:59:07.137 RowMotion[286:707] name: en0, inet: 0, loopback: 1, adress: 6.3.6.0
2011-07-22 12:59:07.141 RowMotion[286:707] name: ap0, inet: 0, loopback: 1, adress: 6.3.6.0
2011-07-22 12:59:07.145 RowMotion[286:707] name: pdp_ip0, inet: 0, loopback: 1, adress: 255.7.0.0
2011-07-22 12:59:07.149 RowMotion[286:707] name: pdp_ip0, inet: 1, loopback: 1, adress: 10.217.22.129
2011-07-22 12:59:07.154 RowMotion[286:707] name: pdp_ip1, inet: 0, loopback: 1, adress: 255.7.0.0
2011-07-22 12:59:07.157 RowMotion[286:707] name: pdp_ip2, inet: 0, loopback: 1, adress: 255.7.0.0
2011-07-22 12:59:07.161 RowMotion[286:707] name: pdp_ip3, inet: 0, loopback: 1, adress: 255.7.0.0
2011-07-22 12:59:07.165 RowMotion[286:707] name: en1, inet: 0, loopback: 1, adress: 6.3.6.0
2011-07-22 12:59:07.168 RowMotion[286:707] name: bridge0, inet: 0, loopback: 1, adress: 6.7.6.0
2011-07-22 12:59:07.172 RowMotion[286:707] name: bridge0, inet: 1, loopback: 1, adress: 172.20.10.1

See that last line? Yep, that's the bridge interface we need to use to communicate with other devices on our Personal Hotspot. Here's how to amend Erica's code to make personal hotspots transparent:

// Matt Brown's get WiFi IP addy solution
// http://mattbsoftware.blogspot.com/2009/04/how-to-get-ip-address-of-iphone-os-v221.html
+ (NSString *) localWiFiIPAddress
{
    BOOL success;
    struct ifaddrs * addrs;
    const struct ifaddrs * cursor;
    NSString * address = nil;

    success = getifaddrs(&addrs) == 0;
    if (success) {
        cursor = addrs;
        while (cursor != NULL) {

            NSString *name = [NSString stringWithUTF8String:cursor->ifa_name];

            NSLog(@"available network interfaces: name: %@, inet: %d, loopback: %d, adress: %@", name, cursor->ifa_addr->sa_family == AF_INET, (cursor->ifa_flags & IFF_LOOPBACK) == 0, [NSString stringWithUTF8String:inet_ntoa(((struct sockaddr_in *)cursor->ifa_addr)->sin_addr)]);

            // the second test keeps from picking up the loopback address
            if (cursor->ifa_addr->sa_family == AF_INET && (cursor->ifa_flags & IFF_LOOPBACK) == 0)
            {
                // Wi-Fi adapter, or iPhone 4 Personal Hotspot bridge adapter
                if ([name isEqualToString:@"en0"] || [name isEqualToString:@"bridge0"])
                {
                    address = [NSString stringWithUTF8String:inet_ntoa(((struct sockaddr_in *)cursor->ifa_addr)->sin_addr)];
                    break; // break instead of returning early so addrs gets freed below
                }
            }
            cursor = cursor->ifa_next;
        }
        freeifaddrs(addrs);
    }
    return address;
}

+ (BOOL) activeWLAN
{
    return ([self localWiFiIPAddress] != nil);
}

+ (BOOL) activePersonalHotspot
{
    // Personal hotspot is fixed to 172.20.10
    NSString* localWifiAddress = [self localWiFiIPAddress];
    return (localWifiAddress != nil && [localWifiAddress hasPrefix:@"172.20.10"]);
}

I hope this will find its way into the sample code soon. A pull request is pending.
