I recently did some cleanup in my music library (not that I did anything by hand; I used MusicBrainz Picard, a fingerprint-based song tagger).
Now the only trouble was that iTunes just doesn’t offer an easy way to update all the tags in its database with the updated ones from my library. Fortunately, I found this nice little AppleScript (note to self: in the unlikely event I ever have some spare time, learn some AppleScript, it seems very powerful).
Why am I sharing all this? Because it took me about 30 minutes of googling to find. Maybe I’ll increase the page rank a bit 😉
I’ve had quite a fight getting my OpenVPN setup to properly connect my remote office to my home network, with a Mac Mini serving as a gateway on one side. I’m going to leave all the security/certificate issues out of this, as they are very well covered elsewhere.
The desired network topology is a fully bidirectional site to site link and looks like this:
```
Home (192.168.160.0) -> VPN (10.8.0.0) / Internet -> Remote Office (192.168.163.0)

Home <-> VPN Server (192.168.160.21) <-> Home Router (192.168.160.1) <-> Internet
     <-> Remote Router (192.168.163.1) <-> Office VPN Gateway (192.168.163.21) <-> Office Clients
```
To achieve this, the server configuration needs to contain:
```
local 192.168.160.21              # the network interface to bind to
proto udp                         # we're using UDP
port 1194                         # this UDP port must be forwarded to "local" by the home router
dev tun                           # we're using routing, so we need the tun device
server 10.8.0.0 255.255.255.0     # this is the transit network pool
ifconfig-pool-persist ipp.txt     # persist the leases
topology subnet                   # more on this later
push "route 192.168.160.0 255.255.255.0"    # make clients send packets for the home network into the VPN
route 192.168.163.0 255.255.255.0 10.8.0.2  # route packets for the remote office into the tunnel
client-config-dir ccd             # next to the config file, create a directory "ccd" which will
                                  # contain client-specific settings
push "dhcp-option DNS 192.168.160.20"       # announce the home office DNS server to connected clients;
                                            # we only want a single DNS for Active Directory to work
keepalive 10 120                  # check connectivity every ten seconds, kill the link after two minutes
comp-lzo                          # compression is a good idea to improve bandwidth
status openvpn-status.log
```
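The post only shows the server side. For completeness, the matching client config on the office gateway might look roughly like this; the hostname and certificate file names are placeholders, not taken from the original setup:

```
client
dev tun
proto udp
remote home.example.com 1194   # public address of the home router (placeholder)
ca ca.crt
cert office-gateway.crt        # the Common Name of this cert must match the ccd file name on the server
key office-gateway.key
comp-lzo                       # must match the server setting
```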
In the ccd directory, we can create a file for each connecting client to make OpenVPN push client-specific settings. To make this happen, create a file named after the Common Name of the certificate the remote office gateway uses to authenticate itself to the server (I looked it up in the ipp.txt pool file after the client had connected).
That file needs to contain a single setting:
```
iroute 192.168.163.0 255.255.255.0  # tell the server that the remote office network lies behind this client;
                                    # traffic for it must not be pushed to us, we _are_ the remote network
```
Note that because we persist the address pool in ipp.txt, the remote gateway will always be assigned 10.8.0.2 in our example (you can change this by editing ipp.txt and restarting the OpenVPN server service).
Additionally, we need to set up a couple of routes on our routers:
- Home Router:
- 10.8.0.0 to OpenVPN Server (192.168.160.21)
- 192.168.163.0 to OpenVPN Server (192.168.160.21)
- Obviously, open up UDP port 1194 on the firewall and forward it to the OpenVPN server (192.168.160.21)
- Remote Router:
- 192.168.160.0 to OpenVPN Gateway (192.168.163.21)
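To double-check that these static routes send each network’s traffic to the right box, here is a small sketch using the addresses from the post. The routing logic is a simplified first-match lookup, not a real router:

```python
import ipaddress

# Static routes added on each router (networks and next hops as in the post).
home_routes = {
    ipaddress.ip_network("10.8.0.0/24"): "192.168.160.21",       # VPN transit -> OpenVPN server
    ipaddress.ip_network("192.168.163.0/24"): "192.168.160.21",  # remote office -> OpenVPN server
}
remote_routes = {
    ipaddress.ip_network("192.168.160.0/24"): "192.168.163.21",  # home network -> office VPN gateway
}

def next_hop(routes, dst):
    """Return the next hop for dst, or None if no static route matches (default route)."""
    addr = ipaddress.ip_address(dst)
    for net, hop in routes.items():
        if addr in net:
            return hop
    return None

print(next_hop(home_routes, "192.168.163.55"))   # goes via the OpenVPN server
print(next_hop(remote_routes, "192.168.160.5"))  # goes via the office VPN gateway
print(next_hop(home_routes, "8.8.8.8"))          # no static route, falls through to the default
```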
The topology subnet setting caused some issues for me, but I finally got them resolved. Without a gateway on the route directive, OpenVPN failed with:

OpenVPN ROUTE: OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
OpenVPN ROUTE: failed to parse/resolve route for host/network

The solution was to add the remote office’s VPN address as the gateway in the route setting:

```
route 192.168.163.0 255.255.255.0 10.8.0.2  # route packets for the remote office into the tunnel;
                                            # the remote office's VPN address is the gateway for this traffic
```
I am in the process of setting up my lab environment fully based on Windows Server 2008 R2 Hyper-V. Migrating my repository server, SQL Server, web server and the domain controller has been quite easy; however, my newly set up OpenVPN appliance caused me some serious headaches.
Since OpenVPN and some other services I regularly use rely on certificates (GitHub, Apple Developer Connection), I thought it might be a wise idea to use Active Directory Certificate Services with auto-enrollment and auto-renewal for the various certificates I need. While this in itself works far more reliably than my old Windows Server 2003 setup, I couldn’t get the OpenVPN CryptoAPI integration (cryptoapicert) to work at first. Here’s a quick rundown of what I did to make it work:
Steps For AutoEnrollment:
- Create two AD Computer Groups, OpenVPN Servers and OpenVPN Clients
- Join Computers to those groups accordingly
- Configure Computer and User Group Policy to enable Auto-Enrollment (see http://technet.microsoft.com/en-us/library/cc731522.aspx), run gpupdate on the clients/servers
- Go to Certificate Authority Manager, Select “Certificate Templates” and then “Manage” from the context menu
- Create two new certificate templates by duplicating the “Computer” and “User” template. Under Subject Name, select DNS name and Fully distinguished name for Subject name format. Under Security, add the appropriate group (server/clients) and allow Read, Enroll and Autoenroll. The extensions persisted in the certificate can be ignored.
- Log on to your client/server, run mmc and add the Certificates snap-in for Computer/User. Make sure you have got your certs, or manually trigger auto-enrollment by right-clicking the Certificates snap-in node, All Tasks, Automatically enroll…
- Now that you have this setup, make OpenVPN use these certs:
- Export your CA cert and specify it to openvpn, e.g. (in server.ovpn): ca ca.cer
- Specify your computer/user cert e.g. (in server.ovpn): cryptoapicert “THUMB: ff ad …” or cryptoapicert “SUBJ:VPN.conso.com”
- The OpenVPN service or the OpenVPN GUI needs to be run with administrator rights to access the certificate store. Otherwise you may get the following error message: “Cannot load certificate “SUBJ:RUNOPS.runworks.com” from Microsoft Certificate Store: error:C5066064:microsoft cryptoapi:CryptAcquireCertificatePrivateKey:Keyset does not exist”
- For the template used to issue certificates to users/computers, you must not select Windows 2008 CA compatibility! Select Windows 2003 CA compatibility. Otherwise you may get the following error message: “Invalid provider type specified”
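Putting the certificate pieces together, the relevant lines in server.ovpn then look something like this (the subject name is a placeholder; substitute the subject of your auto-enrolled machine certificate):

```
ca ca.cer                                   # the exported CA certificate
cryptoapicert "SUBJ:vpnserver.example.com"  # select the cert from the Windows certificate store by subject
```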
Hope this saves someone else the pain of going through this.
Today is a bank holiday in Ireland, giving me time to reflect on the first four weeks of my internship at InishTech. It’s the first time I’ve worked with a real software company, and one thing I find particularly delightful is reasoning about code and design with great colleagues. Having worked solo on my past employer’s project (and only recently introduced a second developer to hand it over to), finding an internship where I could experience working on a developer team was one of my key motivations.
Talking about code and design all day made me realize how important it is to have a common understanding of the terminology we use to describe code and its design. Of course it is also important to have a common view of the problem domain you’re working on, but talking about the solution domain on its own poses enough interesting challenges.
Similar to the way a compiler assigns tokens to a series of input characters, we assign terms and descriptions to certain constructs. These can range from syntactic symbols (e.g. operator, variable declaration) to abstract concepts such as design patterns (singleton). Often, there are different terms we can use to describe a syntactic symbol. As you can easily imagine, the number of options grows with the “abstraction level” of the concept we try to describe.
For example, when talking about “==”, the “equality comparison operator”, it is fairly easy to describe its behavior: “Returns true if the left-hand and right-hand expressions are equal, otherwise false.”
But when we talk about a Singleton, things are not so concise any more: “Ensure a class has only one instance and provide a global point of access to it.” (from the GoF Design Patterns book). We can imagine this design pattern has lots of different incarnations. Nonetheless, I regard the design patterns movement as a valuable contribution to our ability to effectively communicate concepts among programmers.
So far, I have noticed that communicating at the lower and the higher abstraction levels is usually easy. The terminology we use there is pretty fixed, and a variety of good definitions and “sources of truth” are available. For example, I was recently writing a set of unit tests. Each test follows the pattern “arrange, act, assert”, which means that I configure my system under test (SUT), execute a command on it and then assert on the outcome. I wanted to split the assert part out into a different method because the outcome of the test depended on some complex external condition. My initial attempt left me with a first method that contained the arrange and act portions of the test and a second that contained the assertions. The methods of the first kind were called something like TestXXX and those of the second ValidateTestXXX(object result). During code review, one of my colleagues pointed out that the prefix for the second kind of method should be something different. After we popped the first ten books off the 1 m tall book stack on my desk (pictures to follow), he found a copy of xUnit Test Patterns: Refactoring Test Code and pointed me to the section describing “custom verification methods”. Because what I had done was an exact implementation of this pattern, we chose to prefix my verification methods with “Verify” rather than “Validate”.
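As an illustration (in Python rather than .NET, and with made-up names), the split between a test method and a custom verification method looks like this:

```python
import unittest

class OrderProcessorTests(unittest.TestCase):
    """Hypothetical SUT: an order total with a 10% discount applied."""

    def test_discount_applied(self):
        # arrange: set up the system under test
        prices = [100.0, 50.0]
        # act: execute the command
        total = sum(prices) * 0.9
        # assert: delegated to a custom verification method
        self.verify_discounted_total(total, expected=135.0)

    def verify_discounted_total(self, total, expected):
        # custom verification method (xUnit Test Patterns):
        # reusable assertions extracted into a helper with a "Verify" prefix
        self.assertAlmostEqual(total, expected)
```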
What design pattern books do at the higher abstraction levels, language references can do at the lower ones. However, there is a grey area somewhere in between that I have found not so well covered. It might just be my missing formal CS education (which I am about to get soon 🙂), but I have found it difficult to describe certain code constructs (like method parameters vs. arguments) precisely enough to express small differences between two almost identical constructs. I will give concrete examples in a future post.
Another dimension of the communication problem is the terminology used by APIs. Framework designers must pay close attention to using consistent terminology when naming public APIs and take care to document these terms precisely. But even when talking about the same API, the public facade might use different terminology than the implementors do behind that facade. I’m getting the impression this is the case for the .NET generics implementation. I will do some further research to back that claim, so stay tuned for my findings.
Coming from a solution-domain, code-centric viewpoint, it might also be interesting to see what challenges we face when communicating about our problem domain. Eric Evans has a very good treatise on this subject in his DDD book. To facilitate communication within the developer team and with domain experts or users, he advocates using a “ubiquitous language” that draws its terms from the problem domain. Developing and refining this language is one of the core tenets of domain-driven design. At InishTech we have a company wiki that we use as an up-to-date reference for the terms in our ubiquitous language.
Three weeks ago my (ongoing) Stack Overflow endeavour finally began impacting my real life when I started my internship at InishTech in Dublin, Ireland. It’s a really fortunate coincidence that I met one of their devs on SO and was offered this position after a short follow-up. Now that I’m here, I cannot express just how much this internship exceeds my expectations.
So what the hell does InishTech do then? Let me just quote from their web presence:
InishTech provides Software Developers and Product Managers globally with code protection and licence management solutions to protect their most valuable assets and enable them to provide their customers with the ability to deploy and use their products in the most flexible manner possible.
I’m working on a top secret project to bring support for all the new constructs CLR evolution brought along to their Code Protection solution. Here’s what it does:
Using a unique technology from InishTech called Code Transformation, the InishTech SLP Code Protector takes selected DLL’s and functions within the DLL and virtually compiles (or transforms) them into a vendor-specific format called Secure Virtual Machine Language (SVML). The functions that are transformed to SVML format appear like regular MSIL functions (in terms of interfaces), but are much harder to reverse engineer. Furthermore, SVML runs on top of the .NET platform (CLR), to help ensure interoperability and code optimization.
If you’re now thinking “hey, that’s cool!”, I can assure you it definitely is. Working with low-level stuff like this involves a lot of knowledge about CLR internals and is deeply technical but highly interesting. What I’ll be discovering on my way (and have already discovered) will likely be shared on this blog, so stay tuned for some interesting stuff on CLR internals.
OK, so what’s it like being an intern at InishTech? It’s great! Let me give you a few reasons why:
- They took care of organizing my accommodation and helped me with all the usual relocation troubles (public transport etc.)
- Modern, bright office in Central Dublin. Two beautiful parks are just around the corner.
- Dual monitors for every developer, cutting-edge hardware (quad-core, 8 GB RAM), ergonomic chairs
- Agile development process paired with a passion for high quality code
- A great team that is dedicated to constant self-improvement in technical skill and process
- Joel-Test Score: 10 out of 12 (sums up all the above points)
- Doughnuts every Friday, social activities (FIFA World Cup, pub strolls…)
I could probably continue like this for a while and still forget all the small things that make it a great company to work for. Oh, and they’re hiring too.
Today I found myself in the situation of fixing a really nasty but critical application for the Rhein-Main Ergo Cup event taking place this weekend. It’s a crufty ASP application written in VBScript, uses 3 static frames (please don’t shoot me) for “navigation”, and is built on top of an Access database with strings as primary keys. Please put that gun down, I have left the worst things unmentioned. And I will spare you the actual code, as it’s not my application and I do actually believe that even by the virtue of fixing it I broke the license in, say, some hundred places. So the word legacy doesn’t quite fit it; it was probably legacy even before it was written. The code is as procedural as it can get; the notion of a “function” seems to be an alien concept among ASP programmers. Instead of using loops and this crazy thing called variables, the original creators decided to copy-paste repeating code.
I had to fix bugs in this application before, to make it usable for this event, and have also written some extensions for it using JScript. It was the first application I ever worked on, and I think it’s the reason why I put such strong emphasis on clean and readable code in everything I do today.
Okay, so this year we needed some functionality which the application claimed to support. Testing it, we found it actually did work in some way. The tiny detail that didn’t work was that the application was unable to assign competitors’ places correctly by ascending time. So my idea was to execute a simple SQL statement after the script did its job and correct that. Having grown up using ORMs, my SQL is not all too strong, so I decided to ask a quick question on Stack Overflow. Within thirty minutes, I got an answer pointing me in the right direction. The idea was to use a subselect to infer the place of a competitor (a start in this case; each competitor can have multiple starts in different races) by counting the starts in the same race with better times. It looks like this:
```sql
SELECT s.StartsID, s.Time,
       (SELECT COUNT(*) FROM Starts AS raceStarts
        WHERE raceStarts.LaufID = s.LaufID
          AND raceStarts.Time < s.Time) + 1 AS CalculatedPlace
FROM Starts AS s
WHERE s.LaufID = @raceID
```
This worked very well, though I guess its performance must be horrible. Doesn’t matter for this case. Next up, I wanted to use the same idea in an UPDATE statement.
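The ranking logic itself can be sanity-checked outside Access; here is a minimal sketch using Python’s sqlite3, with the table and column names from the post and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Starts (StartsID INTEGER, LaufID INTEGER, Time REAL, Platz INTEGER)")
conn.executemany("INSERT INTO Starts VALUES (?, ?, ?, NULL)", [
    (1, 7, 102.3),  # race 7, second-best time
    (2, 7, 98.7),   # race 7, best time
    (3, 7, 110.0),  # race 7, worst time
    (4, 8, 99.9),   # a different race; must not affect race 7's places
])
# Same correlated subquery as above: count the better times in the same race, add one.
rows = conn.execute("""
    SELECT s.StartsID, s.Time,
           (SELECT COUNT(*) FROM Starts AS raceStarts
            WHERE raceStarts.LaufID = s.LaufID
              AND raceStarts.Time < s.Time) + 1 AS CalculatedPlace
    FROM Starts AS s
    WHERE s.LaufID = 7
    ORDER BY s.StartsID
""").fetchall()
print(rows)  # [(1, 102.3, 2), (2, 98.7, 1), (3, 110.0, 3)]
```

For what it’s worth, SQLite also accepts this subquery inside an UPDATE; the refusal described below is specific to Jet/Access.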
```sql
UPDATE Starts AS s
SET Place = (SELECT COUNT(*) FROM Starts AS raceStarts
             WHERE raceStarts.LaufID = s.LaufID
               AND raceStarts.Time < s.Time) + 1
WHERE s.LaufID = @raceID
```
Turns out, Access hates me today and gives the following error message (before that I needed to assure it I wanted to enable “dangerous database content”):
Operation must use an updatable query.
Huh? Any clue what happened? Me neither, so I dug around the internet for the next half hour. Apparently Jet considers any query that involves a subquery non-updatable, and it seems the only other people experiencing this error are those who have never heard of joins and prefer subselects in all their queries. Great. On goes the search, which finally found me this nugget of information from an MS MVP who says one should use the DLookup() function for this. I can even use it inside “query expressions”, which is what they call SQL in the Access world, I guess. A short “binging” (I like the analogy to “banging”) on MSDN reveals I’d actually want the DCount() function. No, wait a sec, I don’t, because I can’t specify my complicated criteria (the time must be shorter than the current row’s time) with it. Great.
So, in the spirit of a really cool Sam Fisher move, I decided to hack up a little bit of ugly code that will perfectly blend in with its environment:
```
sql = "SELECT s.StartsID, s.Time, " & _
      "(SELECT COUNT(*) FROM Starts AS raceStarts " & _
      "WHERE raceStarts.LaufID = s.LaufID AND raceStarts.Time < s.Time)+1 AS CalculatedPlace " & _
      "FROM Starts AS s WHERE s.LaufID = " & LaufID
RS.Open sql, nameConn
Do While Not RS.EOF
    StartsID = RS("StartsID")
    sql = "UPDATE Starts SET Platz='" & RS("CalculatedPlace") & "' WHERE StartsID=" & StartsID
    Response.Write(sql & "<br>")
    nameConn.Execute(sql)
    RS.MoveNext
Loop
RS.Close
```