I Am Not Myself

Bills.Pay(Developer.Skills).ShouldBeTrue()

Category Archives: Learning in Public

Giving Back to the Education System That Gave Me A Career


Yesterday, my high school Facebook group blew up with chatter about a group of middle school kids from my hometown of Green Forest, Arkansas, who had published an iOS app. Apparently, these kids got together in their school computer lab and developed the app collaboratively. They even scored some coverage from a local television station.

Having grown up and graduated high school in this economically depressed area, I was blown away by what these kids accomplished. I cannot imagine having achieved as much with the meager computer resources we had back in 1993. I want to help encourage their curiosity, reward their determination and assist any way I can with their technical education.

The kids have expressed an interest in porting their app to Windows Phone and Android, so I am attempting to put together a care package of programming knowledge and tools to send to their computer lab, something that will give them a leg up on modern programming techniques, languages and tooling. A grab bag of things to explore, if you will.

I have reached out to a few contacts at Microsoft and O'Reilly, but I would love to cast as wide a net as possible. Think you can help? I would love to hear from authors who would like to contribute copies of their books, screencast authors who want to donate their videos and companies who would like to donate their software. Shoot me a message and let's blow the minds of some awesome kids together. We can encourage five kids to pursue their dreams when it means the most.

I can be reached via Twitter, Facebook, or LinkedIn.

Implementing Asynchronous Actions Based On User Preferences Client Side

On my current project, we needed a way to check a user-set preference before taking action on behalf of the user. To be specific, we wanted to check whether the user wants us to post an Open Graph action to Facebook when they favorite a meme on our site. The trick is that the user preferences are stored in our profile database, while all of our Open Graph work is purely client side.

In our Open Graph module, we really didn't want to care how the user preferences were stored. We simply wanted to consume them in a clean way. An example looks like this:

f.onFavorite = function (postUrl) {
    var options = {
        data: postUrl,
        allowed: function(postUrl){
            f.postFavorite(postUrl);
        },
        unknown: function(postUrl){
            $('#js-og-enable-favorite-btn').data('postUrl', postUrl);
            $('#js-og-favorite-gate').modal("show");
        }
    };
    preferences.userPrefers('favoriting', options);
};

This is the click handler for a favorite button. We pass in the URL of the meme the user favorited and construct an options object. The options object defines the data associated with the preference, as well as a function to perform if the user allows the action. We also include a function to execute if the preference is not currently set; this way we can prompt the user to set a preference. Finally, we call the preferences module with the preference in question and the options.

Deep in the bowels of our preferences module is the userPrefers method. It looks like this:

f.userPrefers = function(preferenceName, options){
    f.withCurrentPreferences(function(preferences){
        if(preferences[preferenceName])
            options.allowed(options.data);

        if(preferences[preferenceName] == null)
            options.unknown(options.data);
    });
};

This function calls withCurrentPreferences and passes in a function describing what to do with the current set of preferences. We check whether the preference in question is enabled and, if it is, call the allowed method, passing along the data. Finally, it checks whether the preference is explicitly null and calls the unknown method if it is.

So far, fairly clear and concise. But what magic is this withCurrentPreferences method?

f.withCurrentPreferences = function(action){
    var preferences = f.getPreferencesCookie();
    if(preferences)
        action(preferences);
    else
        f.getPreferences(action);
};

f.getPreferences = function(action) {
    $.ajax({
        dataType: "jsonp",
        url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
        success: function(preferences){
            f.setPreferencesCookie(preferences);
            if(action)
                action(preferences);
        }
    });
};

The method takes an action to execute with the preferences and attempts to read a locally stored preference cookie. We cache preferences locally so we don't bombard our app servers with unneeded calls. If the cookie-based preference exists, we simply call the action, passing along the preferences. If not, we call getPreferences, passing along the action. Finally, the getPreferences function makes an Ajax call out to our app server to get the preferences. On success, it saves a preference cookie and, if an action was passed in, calls it.

And there you have it: a nice, clean, asynchronous way of taking actions based on a user's preferences that is managed completely client side, with a local caching mechanism to keep it zippy.

Here is the full source of the AMD module:

define(['jquery', 'mods/ono-config', 'mods/utils/utils'], function ($, config, cookieJar) {
    var cfg = config.getConfig();
    var f = {};

    f.getPreferences = function(action) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
            success: function(preferences){
                f.setPreferencesCookie(preferences);
                if(action)
                    action(preferences);
            }
        });
    };

    f.setPreferencesCookie = function (preferences) {
       cookieJar.destroyCookie('preferences', cfg.CookieHostname);
       cookieJar.setCookie('preferences', JSON.stringify(preferences), 1000, cfg.CookieHostname);
    };

    f.getPreferencesCookie = function(){
      return JSON.parse(cookieJar.getCookie('preferences'));
    };

    f.userPrefers = function(preferenceName, options){
        f.withCurrentPreferences(function(preferences){
            if(preferences[preferenceName])
                options.allowed(options.data);

            if(preferences[preferenceName] == null)
                options.unknown(options.data);
        });
    };

    f.withCurrentPreferences = function(action){
        var preferences = f.getPreferencesCookie();
        if(preferences)
            action(preferences);
        else
            f.getPreferences(action);
    };

    f.savePreference = function(preferenceName, value){
        f.withCurrentPreferences(function(preferences){
            preferences[preferenceName] = value;
            f.setPreferencesCookie(preferences);
            f.setPreference(preferenceName, value);
        });
    };
    
    f.setPreference = function (preferenceName, value) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/SetPreference',
            data: {
                preferenceToSet:preferenceName,
                preferenceValue: value
            }
        });    
    };
    
    return f;
});

How Deep a Simple Problem Can Get: Moment, Node, Heroku & Time Zones

Over the weekend I started building my first real Node.js application. I had watched the Hello Node series from TekPub, read the LeanPub books and attended NodePDX this year. I was ready to get down in the weeds and start writing a real application.

I have also been wanting to connect with the local non-.NET community in Olympia. Not that I ever see my day job not involving .NET, but I am interested in learning different ecosystems, languages and frameworks; I think it makes me a better, more well-rounded developer in the long run. So I started a meetup group for Olympia, WA Node users and beginners.

My idea for a Node app was to create a site that consumes the Meetup API and displays upcoming meetings. Fairly simple. You can see the result of my weekend's worth of work here. The site is a simple Twitter Bootstrap-based single page with a carousel widget displaying the upcoming meetings (currently only one is scheduled).

You can see Meetup-specific API data, including the number of members who have said they are attending, the location and a Google Maps link, as well as a date and time. I was pretty happy with myself and blasted the link out to the world via Twitter and Facebook. Little did I know I had missed something in the details, which Chris Bilson was kind enough to point out. The date displayed on the site said the meeting was being held at 1:30 AM.

The Meetup API returns an event object containing two bits of information related to the event's date and time: time and utc_offset. The time is in milliseconds since the Unix epoch, and the utc_offset is in milliseconds as well. Because I was in full-on cowboy mode coding up a storm, my initial implementation for prettifying the date looked like this, with no tests:

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input) {
		return moment(input).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

This node module uses the awesome Moment module to parse a Unix epoch timestamp into a date and then format it using standard date formatting. This worked awesomely on my local machine, so I didn't think about it anymore and moved on, until Chris chimed in.

Chris suggested that it might have something to do with UTC. I was also a little embarrassed that I didn't have such a simple thing under unit test. So I started fixing the bug by getting the code under test. I had a couple of well-known values for the currently scheduled meeting.

var helpers = require('../lib/helpers').helpers;

describe('helpers', function(){
	
	describe('pretty date', function(){
		var input = { time: 1341279000000, utc_offset: -25200000 },
		    expected = 'Monday, July 2nd 2012 6:30:00 PM',
		    actual = helpers.prettyDate(input.time, input.utc_offset);

		it('prints a pretty date in the correct time zone', function(){
			expected.should.equal(actual);
		});
	});
});

The interesting thing here is that the test passed without modifying the implementation code at all. You see, Moment automatically applies an offset based on the current environment. So if I were able to run this test on Heroku, it would fail. I was a bit stumped, came back around to my sad little cowboy ways and modified the implementation like this:

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date, utc_offset) {
		return moment(input_date).utc()
		       .add('milliseconds', utc_offset)
		       .format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I was grasping at straws, but this modification didn't affect the test running locally. I was curious what would happen when running the site on Heroku; I suspected I would have the same issue. I was very surprised to see that the code worked.

The downside was that I didn't understand why, and that bugs the crap out of me. I couldn't let it go. Getting the code to work was not enough for me; I needed to understand why. So I started googling. I lucked out and found this blog post on Adevia Software's blog.

It clicked for me after that. The reason the test for the new code passed locally and the code worked on Heroku all had to do with the time zone settings of the environment running the code. My local environment is set to Pacific time, so parsing a Unix epoch timestamp with Moment gives a Pacific date, which is then converted to UTC and then reduced by the Pacific UTC offset, resulting in the same wall-clock time as the original Pacific date Moment created.

Heroku's default apparently is UTC. Apply the same logic and you end up with a UTC date that has been reduced by seven hours but is still a UTC date. It looks right on Heroku because my pretty printer doesn't include the time zone. If it did, it would be wrong.
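
To make that concrete with the values from the test: 1341279000000 milliseconds after the epoch is July 3rd 2012 at 1:30 AM UTC. Formatted as-is on Heroku's UTC clock, that is the 1:30 AM Chris spotted. Shift it back by the meetup's seven-hour offset and you get July 2nd at 6:30 PM, which reads correctly but is really a UTC moment wearing Pacific time's clothes.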

Once again I understood how the code worked, and it was working, but it was wrong. The nag in the back of my head would not let it go. It's a bug; bugs must die. Now that I understood what was going on, I went back and reverted my helper to this implementation:

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date) {
		return moment(input_date).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I then issued the following command to Heroku from the command line:

Derp:website cheezburger$ heroku config:add TZ=America/Los_Angeles

Finally, I redeployed the site and all is right with the world; I can get some work done now. Thanks, Bilson. This episode of OCD is brought to you by the letters W, T & F.

Displaying a Map of the Current Location with MonoTouch

Today, I started spiking on displaying maps in iOS using MonoTouch. I wanted to discover the minimum amount of code needed to get the user's current location via the iOS GPS services and then display that location on a map.

To get the device's current location you need an instance of CLLocationManager, found in the MonoTouch.CoreLocation namespace. The location manager accesses the actual hardware on the device and can be a real power drain, so you want to use it as little as possible.

using System;
using MonoTouch.CoreLocation;

namespace App.UI
{
	public class LocationService
	{
		private CLLocationManager locationManager;

		public LocationService()
		{
			locationManager = new CLLocationManager();
		}

		public CLLocationCoordinate2D GetCurrentLocation()
		{
			//dirty for now just to get some info.
			locationManager.StartUpdatingLocation();
			while(locationManager.Location == null);
			locationManager.StopUpdatingLocation();

			return locationManager.Location.Coordinate;
		}
	}
}

This simple service shows the absolute minimum needed to retrieve the device's current location. First, we tell the manager to StartUpdatingLocation, which triggers a dialog asking the user for access to the device's location services. The current location is then available from the Location property. It takes a few seconds to populate, though, which is why most of the demo code you will find uses a LocationDelegate to consume the data. I wanted something a little simpler because I don't actually need updates to the location, so I spin until the Location property is not null, tell the manager to StopUpdatingLocation and grab the Coordinate from the Location property.

On a side note, the user can disable location services globally on the device. You can check whether they have done so via the static member CLLocationManager.LocationServicesEnabled. You could use this to display a dialog asking the user to put in a zip code as an alternative.
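
That check is not part of the app above, but a rough sketch of such a guard might look like this (the helper name and the alert wording are my own):

using MonoTouch.CoreLocation;
using MonoTouch.UIKit;

public static class LocationGuard
{
	// Hypothetical helper: returns true when it is worth asking the hardware for a fix,
	// otherwise prompts the user for a fallback such as a zip code.
	public static bool EnsureLocationAvailable()
	{
		if (CLLocationManager.LocationServicesEnabled)
			return true;

		new UIAlertView("Location Disabled",
			"Location services are turned off for this device. Please enter a zip code instead.",
			null, "OK", null).Show();

		return false;
	}
}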

To display the location on a map for the user, you will need to use the MKMapView from the MonoTouch.MapKit namespace. I have chosen to create a MapViewController class that extends UIViewController to wrap all of this code.

using System;
using System.Drawing;

using MonoTouch.Foundation;
using MonoTouch.UIKit;
using MonoTouch.MapKit;
using MonoTouch.CoreLocation;

namespace App.UI
{
	public class MapViewController : UIViewController
	{
		private LocationService locationService;
		private MKMapView mapView;

		public MapViewController(LocationService locationService)
		{
			this.locationService = locationService;
		}

		public override void ViewDidLoad()
		{
			base.ViewDidLoad();

			var currentLocation = locationService.GetCurrentLocation();
			var visibleRegion = BuildVisibleRegion(currentLocation);

			mapView = BuildMapView(true);
			mapView.SetRegion(visibleRegion, true);

			this.View.AddSubview(mapView);
		}

		private MKMapView BuildMapView(bool showUserLocation)
		{
			var view = new MKMapView()
			{
				ShowsUserLocation = showUserLocation
			};

			view.SizeToFit();
			view.Frame = new RectangleF(0, 0, this.View.Frame.Width, this.View.Frame.Height);
			return view;
		}

		private MKCoordinateRegion BuildVisibleRegion(CLLocationCoordinate2D currentLocation)
		{
			var span = new MKCoordinateSpan(0.2,0.2);
			var region = new MKCoordinateRegion(currentLocation,span);

			return region;
		}
	}
}

This simple view controller overrides the ViewDidLoad method, consumes the LocationService to get the current location and then displays that location with an MKMapView. We build up the map view in two steps: the first creates the instance of the map view and sizes it to fit the viewable area of the given display. The MKMapView has a property that will automatically show an indicator of where the current location is; setting this property is all you need to do to display the map and location. There is a catch, though: if you were to display this map now, you would see a map of the entire United States and not a reasonable local view of the area. The second step is what zooms the map into a reasonable region to display.

If we display this controller now in the iPhone Simulator, this is what we see.

Getting Started Building iOS Applications with MonoTouch

I have been flirting with the idea of getting into mobile development in my spare time. I have gone so far as to offer myself out as a developer to startups in Chicago to develop simple iOS applications in exchange for tool licenses. I am a firm believer in the idea that a craftsman buys his own tools. I am not buying them with my own money here, but I certainly am with my effort.

I also wanted to turn this experience into a few blog posts that might help someone else who has chosen to go down the same path that I have.

So what does a .NET developer need to get started writing an iOS application? There are three things to consider here. The first is picking up Objective-C as a language, the second is learning the iOS API and the third is relearning how to do things we already know, like consuming a JSON service. Put another way, we need to learn a language, a platform and the idiomatic way of doing things in this particular development culture. That is a lot to bite off in a short period of time while still being productive and getting stuff done.

For this reason, I chose to develop my first few applications using Xamarin’s MonoTouch. Taking this route allows me to leverage my current skills with C# and the .NET platform while learning the iOS API and still manage to be productive. I can noodle around with Objective-C at a later time.

The first thing you will notice about MonoTouch is the price: at $399 it is quite a leap of faith. But you can install and use the evaluation version to your heart's content. You only have to pony up the money once you are ready to ship to the App Store.

To get up and running with MonoTouch, you will need to install the following things in this order:

  1. Xcode 4 – If you don't mind paying $5, you can get this from the Apple App Store.
  2. MonoDevelop – This is a completely free IDE for the Mono framework.
  3. MonoTouch Evaluation

You will also need a Mac. Yes, sorry, them's the breaks.

To create an iOS application simply select File > New Project and dig down to the MonoTouch iPhone project template. There are also templates for iPad and Universal. Universal allows you to create a single application that will work on both the iPhone and the iPad. You can also create library assemblies just like you would expect in Visual Studio.

There is one catch, though: MonoTouch projects are limited in how much of the framework you have access to and which third-party libraries you can reference. It is similar to working with Client Profile projects in Visual Studio. Everything you reference needs to target MonoTouch as well.

For instance, I have been working on an application that consumes an XML-RPC service. There is already a fairly awesome OSS library from Cook Computing for all your XML-RPC needs, but you cannot simply download the DLL and reference it in your project. The DLL needs to be compiled as a MonoTouch assembly, and all code needs to conform to the limited framework profile of MonoTouch. Cook Computing's library has both client and server components in one library, and the server components depend on HttpResponse and HttpRequest, which are not available in a MonoTouch application.

I was able to solve this problem fairly easily by creating my own fork of the Cook Computing library and pulling in only the types needed for client communication. I even went so far as to publish this work on GitHub so others can simply use my fork and get back to making things awesome. Yay, OSS!

If you would like to look at a couple of nontrivial applications written using MonoTouch to get an idea of where to start, the Washington State Department of Transportation has a great application published to the App Store now that is fully open source. I used this application as a guidepost when creating my first MonoTouch app, which you can find here.

This should be enough to get you on your way to writing your first application. I have some more tips around using OSS libraries and Unit Testing, but that will have to wait for another day in another blog post. Enjoy and happy non-conformist .NET application development day.

Test Driven Evolutionary Design with Entity Framework

NOTE: I am not an Entity Framework expert, nor do I claim to be. I am a user of ORMs in the general developer space, and this exercise is an attempt to map my workflow and knowledge to EF and its associated tools. Here there be dragons.

Introduction

I have been an ORM fan for many years, using open source projects in the NHibernate/Castle ecosystem. I try to push my development experience as close as I can get to the Ruby on Rails way. Combining NHibernate, Fluent NHibernate and Fluent Migrator, I am able to achieve a level of malleability that allows me to evolve my data storage needs as my understanding of the business problem at hand grows. Despite getting a "thank you" in Julia Lerman's book on it, I have not taken Entity Framework for a serious spin because I did not feel I could achieve the same level of malleability and persistence ignorance. With the recent release of Entity Framework Migrations, the last missing piece has been added and EF is now at feature parity with my beloved OSS stack.

I wanted to build something real with Entity Framework, so I came up with a simple web application to drive the project. I am going to create a simple web site that allows a user to enter their first name, last name and email address to sign up for a newsletter. The system should verify that the email has not already been registered and send the registered user a validation email. The validation email should contain a link the user can click to validate that the email address belongs to a real person. This is a fairly simple application with a little bit of complexity to keep it interesting.

Spiking On Data Access

To get started, I know I will need to be able to save the user's details. I realize this might be an odd place to start, but it lets me get to the interesting data access bits that have me curious. I start my solution with a Core library where all my core business logic will live. I like this assembly to be completely environment agnostic, so you will notice that all dependent assemblies in the solution tend to point in toward this one. Here is where I will put my User object.

namespace TestingEntityFramework.Core
{
    public class User
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

I know that my current goal is to be able to persist a user object and retrieve it, so I want to clearly state that goal in the form of a unit test. The idea here is that the unit test describes a portion of the behavior of the system and is then run throughout the development of the application, ensuring that the behavior still works the way we specified it should. These are, of course, basic Test Driven Development principles. So I create my next assembly, Tests, fire up the NuGet Package Manager Console and get a test environment set up using the following commands:

install-package NUnit
install-package NSubstitute
install-package Shouldly

These are the standard three testing tools I typically start out with. NUnit is my default unit testing system, Shouldly adds a nice fluent assertion syntax over the top of NUnit asserts and NSubstitute is my current favorite mocking framework. I don’t have a need for NSubstitute yet, but the style of testing I do generally leads me to it. So I add it by default. Now to make sure my testing environment is all ready to go, I add my first test.

using NUnit.Framework;
using Shouldly;

namespace TestingEntityFramework.Tests
{
    [TestFixture]
    public class RealityTests
    {
        [Test]
        public void true_should_be_true()
        {
            true.ShouldBe(true);
        }
    }
}

Hitting F1 shows me that everything compiles, the test runs and I am ready to get going with my current goal. With Entity Framework, the unit of work concept is wrapped up in the DbContext type. Typically a developer will inherit from DbContext and modify it to expose aggregate roots. So I know that I will need to create a DbContext to be able to save my User instance. I think I understand enough to get my first test written.

using NUnit.Framework;
using TestingEntityFramework.Core;

namespace TestingEntityFramework.Tests
{
    [TestFixture]
    public class UserPersistenceTests
    {

        private DataContext context;

        [Test]
        public void can_persist_user()
        {
            var user = new User { Id = -1, FirstName = "Dirk", LastName = "Diggler", Email = "dirk@diggler.com"};

            context.Users.Add(user);
            context.SaveChanges();

            Assert.That(user.Id, Is.EqualTo(1));
        }
    }
}

This test fails to even compile because I do not have a DataContext type in the system yet. To get the test to compile I need to create that type, so I add my third assembly to the solution, Data. The goal of the Data assembly is to completely encapsulate persistence concerns. It will have a dependency on the Core library, but the Core library will have no knowledge of Data. I know that I will be using Entity Framework and Migrations, and I can add them to the assembly via NuGet with the following command:

install-package entityframework.migrations

This will reach out to the NuGet server and download the Migrations assemblies, check their dependencies and pull those as well. So I end up with the latest version of Entity Framework including Code First and Migrations. The Migrations package includes some initialization code that creates a Migrations folder in my Data project and adds a Configuration object that we will get to later. I can now add my DataContext type to the Data project; it looks like this:

using System.Data.Common;
using System.Data.Entity;
using TestingEntityFramework.Core;

namespace TestingEntityFramework.Data
{
    public class DataContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }
}

My unit test will now compile and fail with a null reference exception on the calls to the DataContext object. I need some way to construct a DataContext object that has been configured for unit testing purposes.

Now, I am attempting to unit test actual data access, which is generally considered an anti-pattern in unit testing circles. If I actually need an instance of SQL Server running, with a database I can communicate with that is set up with perfect test data to run my tests against, that is going to add a large amount of complexity and brittleness to my testing. How do I interpret a failing test that failed because data was missing or the SQL Server was down for maintenance?

The ideal solution would be a way to spin up a database instance, push my schema into it, add test data and then execute my tests against it. Optimally this process should happen for every test case and be very fast. In the OSS stack I mentioned above, I would accomplish this with a combination of Fluent Migrator, SQLite and Fluent NHibernate. Here I plan to tackle it with Code First, SqlCE and EF Migrations.

I'll start by creating a migration for my Users table. Migrations are a way of declaratively describing a set of schema operations like creating tables, columns, keys and indexes. Migrations typically have two methods, Up and Down. The Up method applies the database changes to upgrade the database to the latest version, and the Down method removes those changes in the event of a downgrade. My User migration looks like this:

using System.Data.Entity.Migrations;

namespace TestingEntityFramework.Data.Migrations
{
    public class AddUser : DbMigration
    {
        public override void Up()
        {
            CreateTable("Users", b => new
            {
                Id = b.Int(nullable: false, identity: true),
                FirstName = b.String(nullable: false, maxLength: 255),
                LastName = b.String(nullable: false, maxLength: 255),
                Email = b.String(nullable: false, maxLength: 255),
            })
                .PrimaryKey(t => t.Id);
        }

        public override void Down()
        {
                DropTable("Users");
        }
    }
}

This migration will create a new table called Users with four columns, Id, FirstName, LastName and Email, with Id as the primary key.

Next up, I need to tell Code First how to map my User type to the Users table defined by my migration. I can do this by modifying the DataContext and overriding the OnModelCreating method to look like this:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<User>()
        .HasKey(x => x.Id)
        .ToTable("Users");

    base.OnModelCreating(modelBuilder);
}

Notice that I am leaving a lot of information out here. I am simply telling Code First that the entity type User has a key, Id, and should be saved to the Users table. I am taking advantage of Code First's default conventions: it will figure out what to do with all the other columns based on their types and names, using sensible conventions. For instance, the User property FirstName will be saved in the column FirstName because the types and names match. This is a feature of Fluent NHibernate that I love, and I am glad that EF is following the same model. It allows me to focus on functionality instead of writing reams of code to specify every detail of persistence. I hope that the conventions in EF are as easily overridden as FNH's, because it is truly a killer feature. See my previous post on the topic for more details.

Now that I have my migrations and mapping defined, I need a way to combine them and create a DataContext that communicates with a SqlCE database instance. Back in my Tests library, I add a dependency on SqlCE via NuGet with the following command:

install-package sqlservercompact

This adds a bin-deployable version of SqlCE to my project that does not require any installs, which is an awesome change in SqlCE. In the past I have done this with SQLite and was bitten by odd differences between SQL Server and SQLite, but the friction added by switching to SqlCE was too great due to the required installations. The latest bits remove that friction completely, which is great. The only advantage that SQLite has in this situation now is its ability to create in-memory databases, which are insanely fast, a good thing from a unit testing standpoint. SqlCE forces us to use a file, which adds I/O overhead to your tests. Please, SqlCE team, make in-memory databases happen.

With all the pieces available I can now create a factory to build up my test data context.

using System;
using System.Data.Common;
using System.Data.Entity.Migrations;
using TestingEntityFramework.Core.Extensions;
using TestingEntityFramework.Data;
using TestingEntityFramework.Data.Migrations;

namespace TestingEntityFramework.Tests.Helpers
{
    public class TestDataContextFactory
    {
        private const string ConnectionString = "Data Source={0}.sdf";
        private const string ProviderName = "System.Data.SqlServerCe.4.0";

        public static DataContext Build()
        {
            var databaseName = DateTime.Now.Ticks.ToString();
            StandUpTestDatabase(databaseName);
            return CreateDataContext(databaseName);
        }

        private static DataContext CreateDataContext(string databaseName)
        {
            var connection = DbProviderFactories.GetFactory(ProviderName).CreateConnection();
            connection.ConnectionString = ConnectionString.FormatWith(databaseName);
            return new DataContext(connection);
        }

        private static void StandUpTestDatabase(string databaseName)
        {
            var config = new Configuration
                             {
                                 ConnectionString = ConnectionString.FormatWith(databaseName),
                                 ConnectionProviderName = ProviderName,
                                 AutomaticMigrationsEnabled = true
                             };

            new DbMigrator(config).Update();
        }
    }
}

This factory generates a random database name, creates a database connection object for SqlCE, runs all the migrations against it and then constructs and returns a DataContext. Now all I need to do is wire the factory up in my persistence unit test, shown below after a quick note on the FormatWith helper.
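
FormatWith comes from my Core extensions library and isn't shown in this post; presumably it is nothing more than a thin extension-method wrapper over string.Format, something like this sketch:

namespace TestingEntityFramework.Core.Extensions
{
    public static class StringExtensions
    {
        // Presumed shape of the FormatWith helper used by the factory:
        // a thin wrapper over string.Format for a more fluent call site.
        public static string FormatWith(this string format, params object[] args)
        {
            return string.Format(format, args);
        }
    }
}

And here is the test wired up to the factory: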

using NUnit.Framework;
using TestingEntityFramework.Core;
using TestingEntityFramework.Data;
using TestingEntityFramework.Tests.Helpers;

namespace TestingEntityFramework.Tests
{
    [TestFixture]
    public class UserPersistenceTests
    {

        private DataContext context;

        [SetUp]
        public void SetUp()
        {
            context = TestDataContextFactory.Build();
        }

        [Test]
        public void can_persist_user()
        {
            var user = new User { Id = -1, FirstName = "Dirk", LastName = "Diggler", Email = "dirk@diggler.com"};

            context.Users.Add(user);
            context.SaveChanges();

            Assert.That(user.Id, Is.EqualTo(1));
        }
    }
}

Hitting F1, I discover I now have a passing unit test. Every run of this test fixture generates a completely new database file, runs the migrations and saves a new user, validating that all the pieces (migration and mapping) work as expected. Over the course of developing this application, this test will be run thousands of times, ensuring that as the application evolves, persistence still works the way it is expected to. This gives a level of confidence in the data access layer that you usually do not get with traditional approaches. With minimal effort, deployment scripts can be generated from the migrations, or I can use the migrations themselves.
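
If memory serves, the released Migrations bits can even emit that deployment script for you from the Package Manager Console with something along these lines (check the exact switches against the version you have installed):

update-database -script

Either way, the migrations stay the single definition of the schema from the unit tests all the way to deployment.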

Driving Functionality from the Outside In with Nancy

My original goal was to build an application; I have my data access spike complete and want to get my focus back on that goal. In my next post, I will discuss test driving web applications using the NancyFX framework. If you want to take a peek at what I am up to, the source is currently evolving along with the source for this post.

I Am Sold on Pair Programming

The benefits of pair programming keep presenting themselves to me the more I actively participate in the practice. One of the greatest benefits for me is the opportunity for accelerated learning and teaching.

A quick example: while working at Russell Investments pairing with Trevor Redfern, I was amazed at his masterful use of the LINQ method syntax to condense large blocks of code into tight, succinct chained statements. I started to see a lot of the code I wrote as manipulation of various collections of objects, and my functions as a set of actions that funneled those objects into different applications.

At The New Gig That Shall Not Be Named, I have had the opportunity to pair with Robert Ream. His use of the LINQ query syntax is just as impressive and mind-blowing. It is currently twisting my brain a bit and getting me interested in learning more about functional programming.

Let me give you an example. We have a function in our application that takes a 2-dimensional array and serves it up as an IEnumerable of some business object. This bit of code is near the interface between the .NET and APL portions of our application, so it has a bit of primitive obsession at these boundaries. This particular function had a bug in that it could not handle empty char multidimensional arrays.

My initial fix looked something like this. _AplData is a private member of the class that has this method; it is stored as an Array and can actually be of type object[,], char[,] or other types. Note that these are true 2-dimensional arrays, not arrays of arrays a la object[][].

public IEnumerable<BizObj> Rows()
{
  for (var i = 0; i <= _AplData.GetUpperBound(0); i++)
  {
    var items = new object[_AplData.GetUpperBound(1) + 1];
    for (var j = 0; j < items.Length; j++)
    {
      items[j] = _AplData.GetValue(i, j);
    }
    yield return new BizObj(items.ToArray());
  }
}

This code worked and passed our failing unit test, but I was unhappy about the creation and tracking of a new array for every object created. Robert suggested we could refactor the method and make it a bit more functional, like this:

public IEnumerable<BizObj> Rows()
{
  return from i in 0.Through(_AplData.GetUpperBound(0))
         let row = from j in 0.Through(_AplData.GetUpperBound(1))
                   select _AplData.GetValue(i, j)
         select new BizObj(row);
}
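
The 0.Through(...) calls lean on a small integer extension method that is not shown in the post. My guess at its shape is an inclusive range helper along these lines (the name comes from the snippet above; the body is an assumption):

using System.Collections.Generic;
using System.Linq;

public static class IntExtensions
{
  // Assumed implementation of Through: yields start..end inclusive,
  // and yields nothing when end is below start (for example, when
  // GetUpperBound returns -1 for an empty array).
  public static IEnumerable<int> Through(this int start, int end)
  {
    return end < start
      ? Enumerable.Empty<int>()
      : Enumerable.Range(start, end - start + 1);
  }
}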

This is still blowing my mind today and really makes me want to dig into F#. So, through two different pair programming experiences, I am successfully adding features, becoming a better, more well-rounded programmer and being inspired to explore new concepts. And that is just one benefit of the practice.

Questions from a Power User on the Road to Programmer

I have spent the last few weeks working with a power user in our investment services division. This guy has moved beyond Excel macros and Access databases to ASP.NET WebForms and SQL Server. From what I can tell, he is completely self-taught, which is something I can respect. We have been working together building prototype tools that are useful to the investment division in the short term, to flesh out ideas for larger-scale projects.

It has been interesting trying to balance my strong opinions with his GDS enthusiasm, certainly a reality check for the architecture astronaut in me. It has also been really fun figuring out how to introduce new tools into his toolbox without throwing everything and the kitchen sink at him. He took to source control with Subversion like a fat kid on cake and was a bit skeptical about continuous integration with TeamCity, but I think I am winning him over there.

I have suggested some training resources and it looks like he has been digging in. This morning I got an email from him with a list of questions that I wanted to share. It is an interesting insight into what it must look like to get started as a programmer today.

Below are the questions he asked me and my responses. How do you think I did? Would you have responded differently?

Hope you don’t mind me pestering you with basic questions… Feel free to ignore me if it’s annoying.  And don’t feel the need to type out full answers if you don’t want to – we can discuss next week if you prefer…

Do I need to know JavaScript in order to use jQuery?

jQuery is a framework built in JavaScript that makes browser differences easier to work with and Ajax a snap. Yes, some basic understanding of JavaScript is required.

Does it make sense to use jQuery with WebForms, or only with MVC?

jQuery can be used in WebForms, yes, but this is the wrong question. jQuery and JavaScript are platform agnostic, meaning you can use them in any web site regardless of the platform the site was built with. There doesn't even need to be an application platform; they work with static HTML files.

Doesn’t ORM encourage poor database design?

Yes. But lazy developers will build bad systems regardless. Understand what your tools are doing for you.

Does a calculator make you bad at math? ORM is a tool that makes certain things easier and other things awkward. Like any tool, it is not appropriate for every job. And the use of an ORM does not excuse you from needing to know good database design and make good choices within the context of the application you are building.

How do you get data from POCO objects into a jQuery control (i.e. DataTables.net)? Do you have to write a web service, or somehow serialize it to JSON, or is there an easier way?

There are many ways of accomplishing this; the one I prefer is serialization to JSON, which is ridiculously easy in MVC but not so much in WebForms. DataTables can consume JSON or simply apply itself to an existing HTML table in the page markup. The level of interactivity you want in the table really dictates which route you take.
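
To give him a concrete picture of the MVC route, a sketch like this (a made-up controller with anonymous objects, purely for illustration) is about all it takes:

using System.Web.Mvc;

public class PeopleController : Controller
{
    // Hypothetical action: serve a list of POCO-ish objects as JSON
    // that a jQuery plugin such as DataTables can consume.
    public JsonResult List()
    {
        var people = new[]
        {
            new { Id = 1, Name = "Dirk Diggler" },
            new { Id = 2, Name = "Jane Doe" }
        };

        // AllowGet is needed because MVC blocks JSON responses to GET requests by default.
        return Json(people, JsonRequestBehavior.AllowGet);
    }
}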

Do you have to actively interact with JSON in order to use jQuery/MVC, or is it one of those things that you know is there but never see…

You have to actively interact with it just like you would a POCO object in C#. The neat thing about JSON is you don't have to define the structure of an object before you use it: {Id: 1, Name: 'Dirk Diggler'}. There, I just created a JSON object.

If you’re not exposing data to third-party clients, are web services relevant/necessary anymore, or is there always an easier way?

Web services can still be useful but are becoming less relevant as lightweight RESTful services grow in popularity. The sheer overhead associated with SOAP-based messaging is ridiculous when trying to publish something like a Twitter stream. JSON is far simpler and easier to work with.

Does anybody use the built in Forms authentication provider in ASP.NET, or is there a better solution available for non-windows authentication?

Forms-based auth is a decent start, but you can quickly outgrow it. We mostly use Windows-based authentication internally. I am unaware of what the current practice is in the .NET community for authentication on public sites; it's been a really long time since I have worked on a public site that required auth in .NET. The only advice I have is that NEVER, under any circumstances, should you EVER store a password in clear text in a database. 8) See the recent Sony debacle for reference.

New Sample NancyFx App TickerViewer

I found some time yesterday to finally play around a bit with some tech that I have been tracking recently: AppHarbor, NancyFx, Highcharts and this spiffy CSV feed on Yahoo Finance.

It took a while to figure out how to set up Nancy. I think the documentation is a bit out of date and more focused on internal developers due to the high rate of change in the prerelease framework, but I was able to look at NerdBeers for guideposts.

In the end, I was able to put together a functioning demo of all the features I was interested in, in about two to two and a half hours. And that is from scratch, never having seen any of the frameworks I was working with. That is pretty damn simple. The vast majority of the time was spent figuring out how to get Highcharts to display the data the way I wanted.

You can see the functioning demo up on AppHarbor, and the source can be found on my GitHub account. Try entering MSFT or AAPL and hitting enter. If you are watching with Firebug, you will see an Ajax call go out to the path /ticker/MSFT or /ticker/AAPL, which returns a year or so's worth of price data for that symbol as JSON, which is then rendered by Highcharts.

The bit of code that handles the /ticker/MSFT response is called a module in Nancy parlance, and it looks like this:

using System.Collections.Generic;
using Nancy;
using TickerPerformanceViewer.Web.Models;
using TickerPerformanceViewer.Web.Services;

namespace TickerPerformanceViewer.Web.Modules
{
    public class TickerModule : AppModule
    {
        public TickerModule(ILookUpPrices lookUp) : base("/ticker")
        {
            Get["/{symbol}"] = parameters =>
                                   {
                                       var prices = lookUp.GetFor(parameters.symbol);
                                       return Response.AsJson(prices);
                                   };
        }
    }
}

Notice how both routing and handling concerns are handled by the module. In the constructor for the module I pass the string "/ticker" to the base constructor. This tells Nancy that this module handles all requests to /ticker. Next up, I add a delegate to the Get dictionary with the key "/{symbol}". This basically means that any request to /ticker/anystringhere should be handled by this delegate. I then simply look up my prices with a service class, using the parameters collection, which is of type dynamic and has my symbol picked out of the URL as a property. Finally, I return the collection as JSON.
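
The ILookUpPrices dependency handed to the module's constructor is not shown in this post (the real implementation lives in the GitHub repo; presumably it is what parses the Yahoo Finance CSV feed). A minimal sketch of the shape the module relies on, with everything beyond GetFor being a guess, would be:

using System.Collections.Generic;

namespace TickerPerformanceViewer.Web.Services
{
    // Hypothetical shape of the price lookup service the module depends on.
    public interface ILookUpPrices
    {
        IEnumerable<PricePoint> GetFor(string symbol);
    }

    // Hypothetical model for one day of price data, serialized by Response.AsJson.
    public class PricePoint
    {
        public string Date { get; set; }
        public decimal Close { get; set; }
    }
}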

A quick "git push appharbor master" in PowerShell and I get to watch as AppHarbor builds and deploys my solution. All .NET deployments should be this easy.

And finally, a shot of the working application. Man, what's going on with Apple, eh?


So this was pretty much a “LOOK WHAT I DID!” kind of post. I am still figuring out the usage of these tools. I intend to talk about each of them individually in the future. But it is kind of nice to see how all of them can be plugged in together to rapidly build a slick application with a full stack of tools from source control to deployment. And not a bit of it using the stock Microsoft solution. Open source is indeed alive in .NET and getting better every day.

Effective Distributed Teams Over-communicate

The project that I am currently working on just hired a developer in London. Toyin is the sole developer in the London office, and she has a huge burden on her shoulders. Not only does she have to get up to speed on the project, she needs to establish working relationships with the business folks in an office that has never had direct access to a developer. She also needs to hit the ground running, providing value on a project that is having some logistical problems.

It is a tall order, made even more difficult by having an ocean and several time zones between her and the rest of the team.

With these obstacles, we can't take anything for granted. It is far too easy for her to slip into the background while decisions are being made on this side of the pond. Out of sight, out of mind, if you will. So we have made a concerted effort to over-communicate with Toyin.

Technology is a big help when trying to keep communication flowing and open. Conference bridges, Office Communicator desktop sharing, Jira with GreenHopper, TeamCity and a Confluence wiki all help to keep our team assets digital and easily consumable no matter where a developer is physically located.

Toyin's first week was actually spent here in Seattle: a whole week sitting face to face with the voices she will be working with remotely. Our morning stand-ups are held over a conference bridge. We held a meeting earlier this week with the project development team for the sole purpose of letting Toyin ask questions. And just this morning, Toyin and I had a two-hour remote pairing session via Office Communicator.

All of this emphasis on over-communicating with Toyin is bearing fruit. She is asking questions, sometimes pointed ones. She has started to bring the London office's concerns to the stand-up. It feels like Toyin is part of the team, actively participating.

By going to extremes to include Toyin in the process, we have gained an active, productive team member who happens to be on another continent. Not just another faceless resource we send grunt work to via email.