I Am Not Myself

Bills.Pay(Developer.Skills).ShouldBeTrue()

Implementing Asynchronous Actions Based On User Preferences Client Side

On my current project, we needed a way to check a user-set preference before taking action on behalf of the user. To be specific, we wanted to check whether the user prefers for us to post an Open Graph action to Facebook when they favorite a meme on our site. The trick is that the user preferences are stored in our profile database, while all of our Open Graph work is purely client side.

In our Open Graph module, we really didn’t want to care how the user preferences are stored; we simply wanted to consume them in a clean way. An example looks like this:

f.onFavorite = function (postUrl) {
    var options = {
        data: postUrl,
        allowed: function (postUrl) {
            f.postFavorite(postUrl);
        },
        unknown: function (postUrl) {
            $('#js-og-enable-favorite-btn').data('postUrl', postUrl);
            $('#js-og-favorite-gate').modal("show");
        }
    };
    preferences.userPrefers('favoriting', options);
};

This is the click handler for a favorite button. We pass in the URL of the meme the user favorited and construct an options object. The options object defines the data associated with the preference as well as a function to perform if the user allows the action. We also include a function to execute if the preference is not currently set; this way we can prompt the user to make a choice (a sketch of that gate follows below). Finally, we call the preferences module with the preference in question and the options.
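For completeness, here is a rough sketch of what the gate modal’s enable button might do when the user clicks it. This handler is hypothetical (the real one isn’t shown in this post) and assumes it lives in the same Open Graph module, but it ties the flow together: record the preference, then perform the deferred action.

$('#js-og-enable-favorite-btn').click(function () {
    // Retrieve the post URL stashed on the button by the unknown handler above.
    var postUrl = $(this).data('postUrl');

    // Record the user's choice (savePreference appears in the full module below),
    // then perform the action they originally asked for.
    preferences.savePreference('favoriting', true);
    f.postFavorite(postUrl);

    $('#js-og-favorite-gate').modal('hide');
});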

Deep in the bowels of our preferences module is the userPrefers method. It looks like this:

f.userPrefers = function (preferenceName, options) {
    f.withCurrentPreferences(function (preferences) {
        if (preferences[preferenceName])
            options.allowed(options.data);

        if (preferences[preferenceName] == null)
            options.unknown(options.data);
    });
};

This function calls withCurrentPreferences and passes in a function describing what to do with the current set of preferences. We check whether the preference in question is enabled and, if so, call the allowed method, passing along the data. Finally, it checks whether the preference is explicitly null, meaning it has never been set, and calls the unknown method if it is. Note that a preference explicitly set to false matches neither branch, so no action is taken.

So far fairly clear and concise. But what magic is this withCurrentPreferences method?

f.withCurrentPreferences = function (action) {
    var preferences = f.getPreferencesCookie();
    if (preferences)
        action(preferences);
    else
        f.getPreferences(action);
};

f.getPreferences = function (action) {
    $.ajax({
        dataType: "jsonp",
        url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
        success: function (preferences) {
            f.setPreferencesCookie(preferences);
            if (action)
                action(preferences);
        }
    });
};

The method takes an action to execute with the preferences and attempts to read a locally stored preferences cookie. We cache preferences locally so we don’t bombard our app servers with unneeded calls. If the cookie-based preferences exist, we simply call the action, passing along the preferences. If not, we call getPreferences, passing along the action. Finally, the getPreferences function makes an ajax call out to our app server to get the preferences. On success, it saves a preferences cookie and, if an action was passed in, calls it.

And there you have it: a nice, clean, asynchronous way of taking actions based on a user’s preferences, managed completely client side, with a local caching mechanism to keep it zippy.

Here is the full source of the AMD module.

define(['jquery', 'mods/ono-config', 'mods/utils/utils'], function ($, config, cookieJar) {
    var cfg = config.getConfig();
    var f = {};

    f.getPreferences = function(action) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
            success: function(preferences){
                f.setPreferencesCookie(preferences);
                if(action)
                    action(preferences);
            }
        });
    };

    f.setPreferencesCookie = function (preferences) {
        cookieJar.destroyCookie('preferences', cfg.CookieHostname);
        cookieJar.setCookie('preferences', JSON.stringify(preferences), 1000, cfg.CookieHostname);
    };

    f.getPreferencesCookie = function () {
        return JSON.parse(cookieJar.getCookie('preferences'));
    };

    f.userPrefers = function (preferenceName, options) {
        f.withCurrentPreferences(function (preferences) {
            if (preferences[preferenceName])
                options.allowed(options.data);

            if (preferences[preferenceName] == null)
                options.unknown(options.data);
        });
    };

    f.withCurrentPreferences = function(action){
        var preferences = f.getPreferencesCookie();
        if(preferences)
            action(preferences);
        else
            f.getPreferences(action);
    };

    f.savePreference = function(preferenceName, value){
        f.withCurrentPreferences(function(preferences){
            preferences[preferenceName] = value;
            f.setPreferencesCookie(preferences);
            f.setPreference(preferenceName, value);
        });
    };
    
    f.setPreference = function (preferenceName, value) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/SetPreference',
            data: {
                preferenceToSet:preferenceName,
                preferenceValue: value
            }
        });    
    };
    
    return f;
});
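One piece not shown above is the cookieJar helper imported from mods/utils/utils. As a rough sketch, here is a minimal implementation of the interface the module relies on; this is assumed from the calls above, not our actual utils code.

var cookieJar = {
    // setCookie(name, value, days, domain): persist a value for the given number of days.
    setCookie: function (name, value, days, domain) {
        var expires = new Date();
        expires.setDate(expires.getDate() + days);
        document.cookie = name + '=' + encodeURIComponent(value) +
            '; expires=' + expires.toUTCString() +
            '; domain=' + domain + '; path=/';
    },

    // getCookie(name): return the cookie's value, or null if it is not set.
    getCookie: function (name) {
        var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        return match ? decodeURIComponent(match[1]) : null;
    },

    // destroyCookie(name, domain): expire the cookie immediately.
    destroyCookie: function (name, domain) {
        document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT' +
            '; domain=' + domain + '; path=/';
    }
};

Note that getCookie returning null keeps withCurrentPreferences honest: JSON.parse(null) yields null, so a missing cookie falls through to the ajax path.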

Node.js: It’s Not Just for Websites

So I have been working on a node.js project recently that I was hosting on Heroku. Sadly, Heroku doesn’t allow socket.io based node apps to use true websockets, so I asked my good friend Adron what the best Heroku-like node host that did support them was. He suggested Nodejitsu.

So I signed up, and my hopes were immediately dashed when I discovered they were metering access to their beta. You have to camp out on their activation site waiting for them to allot a few more activations. That sounded boring, so I decided to automate it with node, of course. I fired up Sublime Text 2 and ripped this out.

var util = require('util'),
    exec = require('child_process').exec,
    rest = require('restler');

// Shout at me over the speakers using the OS X `say` command.
var alertMe = function () {
    exec('say -v Cellos Bobby, come get your nodejitsu beta');
};

// Fetch the activation page and check for the daily limit message.
var checkSite = function () {
    util.puts('checking if I can get you into the beta yet.');
    rest.get('http://activate.nodejitsu.com/').on('complete', function (result) {
        if (result instanceof Error) {
            util.puts('Error: ' + result.message);
        } else {
            // If the limit message is gone, activations are open.
            if (result.indexOf('We\'ve hit our limit today. Please try again later.') < 0)
                alertMe();
            else
                util.puts('damn it...');
        }
    });
};

// Poll every 10 seconds.
var pollingSite = setInterval(checkSite, 10000);

Yes, this script hits the website every 10 seconds, checks whether the limit message is on the page, and plays an audible alert when it is not. I was sufficiently amused by this that I gisted it and posted it to twitter. The funny thing is, within a minute I had been retweeted by Joshua Holbrook, the support lead for Nodejitsu, and got the following response from NodeKohai, the IRC bot for the Nodejitsu channel.

@NotMyself Very nice! Now come join #nodejitsu on freenode to claim your prize!

You see, sometimes being a smart ass is a bonus: it gets you free things! Also, here is a quick video showing the script in action.

How Deep a Simple Problem Can Get: Moment, Node, Heroku & Time Zones

Over the weekend I started building my first real node.js application. I had watched the Hello Node series from TekPub, read the LeanPub books and attended NodePDX this year. I was ready to get down in the weeds and start writing a real application.

I have also been wanting to connect with the local non-.NET community in Olympia. Not that I ever see my day job not involving .NET, but I am interested in learning different ecosystems, languages, and frameworks; I think it makes me a more well-rounded developer in the long run. So I started a meetup group for Olympia, WA node users and beginners.

My idea for a node app was to create a site that consumes the meetup API and displays upcoming meetings. Fairly simple. You can see the result of my weekend’s worth of work here. The site is a simple twitter bootstrap based single page with a carousel widget displaying the upcoming meetings (currently only one is scheduled).

You can see meetup-specific API data, including the number of members who have said they are attending, the location with a google map link, and the date and time. I was pretty happy with myself and blasted the link out to the world via twitter and facebook. Little did I know I had missed something in the details, which Chris Bilson was kind enough to point out: the date displayed on the site said the meeting was being held at 1:30 AM.

The meetup API returns an event object containing two bits of information related to the event’s date and time: time and utc_offset. The time is milliseconds since the UNIX Epoch, and the utc_offset is in milliseconds as well. Because I was in full-on cowboy mode, coding up a storm, my initial implementation of prettifying the date looked like this, with no tests.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input) {
		return moment(input).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

This node module uses the awesome Moment module to parse a UNIX Epoch number into a date and then format it using standard date formatting. It worked great on my local machine, so I didn’t think about it any more and moved on, until Chris chimed in.

Chris suggested that it might have something to do with UTC. I was also a little embarrassed that I didn’t have such a simple thing under unit test, so I started fixing the bug by getting the code under test. I had a couple of well-known values for the currently scheduled meeting.

var helpers = require('../lib/helpers').helpers;

describe('helpers', function(){
	
	describe('pretty date', function(){
		var input = { time: 1341279000000, utc_offset: -25200000 },
		    expected = 'Monday, July 2nd 2012 6:30:00 PM',
		    actual = helpers.prettyDate(input.time, input.utc_offset);

		it('prints a pretty date in the correct time zone', function(){
			expected.should.equal(actual);
		});
	});
});

The interesting thing here is that the test passed without my modifying the implementation code at all. You see, Moment automatically applies an offset based on the current environment, so if I were able to run this test on Heroku, it would fail. I was a bit stumped, came back around to my sad little cowboy ways, and modified the implementation like this.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date, utc_offset) {
		return moment(input_date).utc()
		       .add('milliseconds', utc_offset)
		       .format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I was grasping at straws, but this modification didn’t affect the test running locally. I was curious what would happen when running the site on Heroku; I suspected I would have the same issue. I was very surprised to see that the code worked.

The downside was that I didn’t understand why, and that bugs the crap out of me. I couldn’t let it go; getting the code to work was not enough, I needed to understand why. So I started googling and lucked out, finding this blog post on Adevia Software’s blog.

It clicked for me after that. The reason the test for the new code passed locally and the code worked on Heroku both came down to the time zone setting of the environment running the code. My local environment is set to Pacific time, so parsing a UNIX Epoch number with Moment gives a Pacific date; converting it to UTC and then reducing it by the Pacific UTC offset lands back on the same wall-clock time as the original Pacific date Moment created.

Heroku’s default apparently is UTC. Apply the same logic there and you end up with a UTC date that has been reduced by seven hours (the size of the Pacific offset) but is still labeled a UTC date. It looks right on Heroku because my pretty printer doesn’t include the time zone; if it did, it would be wrong.
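To make the double offset concrete, here is the arithmetic in both environments, using the meeting’s values from the test above (a sketch for illustration, not production code):

var moment = require('moment');

var time = 1341279000000;    // 2012-07-03 01:30:00 UTC
var utc_offset = -25200000;  // -7 hours: Pacific Daylight Time

// With TZ=America/Los_Angeles (my machine):
//   moment(time)                      -> Mon Jul 2 2012, 6:30 PM local
//   .utc()                            -> Tue Jul 3 2012, 1:30 AM; same instant, UTC wall clock
//   .add('milliseconds', utc_offset)  -> Mon Jul 2 2012, 6:30 PM, but labeled UTC

// With TZ=UTC (Heroku's default):
//   moment(time)                      -> Tue Jul 3 2012, 1:30 AM; local time is UTC
//   .utc()                            -> no change
//   .add('milliseconds', utc_offset)  -> Mon Jul 2 2012, 6:30 PM, labeled UTC

// Both environments print the wall-clock time I wanted:
moment(time).utc().add('milliseconds', utc_offset)
    .format("dddd, MMMM Do YYYY h:mm:ss A"); // 'Monday, July 2nd 2012 6:30:00 PM'

// ...but the moment object claims to be UTC, so a format string that
// included the time zone would expose the lie.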

Once again I understood how the code worked, and it was working, but it was wrong. The nag in the back of my head would not let it go; it’s a bug, and bugs must die. Now that I understood what was going on, I went back and reverted my helper to this implementation.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date) {
		return moment(input_date).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I then issued the following command to Heroku from the command line.

Derp:website cheezburger$ heroku config:add TZ=America/Los_Angeles

Finally, I redeployed the site and all is right with the world; I can get some work done now. Thanks, Bilson. This episode of OCD is brought to you by the letters W, T & F.

My Ghetto Non-AMD Compliant Dependency Loader

At Cheezburger, we make use of require.js for most of our client-side javascript. Recently I had to implement some features that needed to pull in lots of 3rd-party scripts that were not AMD compliant. The documentation, of course, told me to put script tags directly in the head of every page, which I have learned recently is a blocking operation (one of the problems that require.js solves cleanly).

So, I took some time and came up with a simple asynchronous dependency loader for this situation.

    var dependency_loader = function (dependencies) {

        var callback = undefined;

        // Register the callback and kick off loading of every dependency.
        var ready = function (cb) {
            callback = cb;
            load_all();
        };

        // Called as each script finishes; fires the callback once all are in.
        var loaded = function () {
            if (is_completed() && callback)
                callback();
        };

        var is_completed = function () {
            for (var i = 0; i < dependencies.length; i++) {
                if (!dependencies[i].is_loaded)
                    return false;
            }
            return true;
        };

        var load_all = function () {
            for (var i = 0; i < dependencies.length; i++) {
                load_dependency(dependencies[i]);
            }
        };

        // Inject the script tag and mark the dependency loaded when it arrives.
        var load_dependency = function (dependency) {
            var dependency_element = utils.addScript(dependency);
            $(dependency_element).load(function () {
                dependency.is_loaded = true;
                loaded();
            });
        };

        return {
            ready: ready
        };
    };

This object takes an array of dependencies and exposes a ready function that takes a callback to be executed when all dependencies have loaded. Usage looks something like this.

 var dependencies = [
     { id: 'foo', http: '//', path: 'cdn.somewhere.com/somedependency1.js', is_loaded: false },
     { id: 'bar', http: '//', path: 'cdn.somewhere.com/somedependency2.js', is_loaded: false }
 ];

 dependency_loader(dependencies).ready(function () {
     doStuff();
 });
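The utils.addScript helper comes from our internal utils module and is not shown here. A minimal sketch of what the loader expects from it, inferred from the dependency objects above (an assumed implementation, not the real one):

var utils = {
    // Append a script tag for the dependency and return the element so the
    // caller can hook its load event.
    addScript: function (dependency) {
        var script = document.createElement('script');
        script.id = dependency.id;
        script.src = dependency.http + dependency.path; // e.g. '//' + 'cdn.somewhere.com/somedependency1.js'
        script.async = true;
        document.getElementsByTagName('head')[0].appendChild(script);
        return script;
    }
};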

Feedback welcome; I am still in a learning phase, so if you have suggestions or want to point out holes, please do.

Today I Learned: You Cannot Transfer Ownership of iOS Applications

I recently completed my first iOS application on behalf of Furnishly, the local furniture exchange. All the release bugs have been worked out, and the app is available on the App Store. Yesterday, I started looking into how to transfer all the assets over to the owner of Furnishly so he could continue development and review download data on iTunes Connect.

On github, this was a snap: simply go into the administration section for the private repository, scroll down to the Danger Zone, and click the transfer button under transfer ownership. Then type in the name of the repository and the new owner’s username and hit transfer. Simple, easy.

In iTunes Connect, it is a completely different story. There is no obvious way to transfer the application in the UI. Searching around in the FAQ surfaced this gem:

I sold my app to another developer and can no longer distribute it on the App Store. Can I transfer the app to the new developer’s iTunes Connect account?
No, you can’t transfer the app to another developer account on iTunes Connect. To add the app to another account, remove the app from the current account and upload it to the new iTunes Connect account.

Note that uploading the app to a new iTunes Connect account will disable current customers from receiving automatic and free updates of your application. All customer reviews, rating, and ranking information will be reset. You will not be able to reuse the app name and SKU in the old account. If you have uploaded a binary or used the app with the iAd Network, your Bundle ID will not be reusable either.

So apparently the way to transfer ownership of this app to the non-technical owner is to:

  1. ask him to create an Apple Developer account
  2. wait to get accepted
  3. generate new application keys
  4. rebuild the application with the new keys
  5. delete the old application build from my account
  6. resubmit the new application via his account

Oh, and all the folks who have downloaded the app from my account in the meantime are pretty much never going to get an update, and all the ratings the app might have received will disappear.

Seriously, this is a horrible way to handle what seems to me to be a common occurrence. Did Zynga have to follow this process when they bought Draw Something?

Anyone have advice?

Using System.Threading.Tasks On MonoTouch

I ran into an issue this week while attempting to load data from a web service asynchronously using System.Threading.Tasks on MonoTouch. I was able to fire the task off but kept getting an error trying to update UI elements when the callback fired.

After beating my head against a wall for a bit, I took a walk, grew a neuron, and this is what I came up with to resolve the issue. Note the call to InvokeOnMainThread.

private IEnumerable<Product> GetProducts(Position position)
{
	return productsService.GetProductsNear(position);
}

private void BeginGetProducts()
{
	Activity.PushNetworkActive();
	var scheduler = TaskScheduler.FromCurrentSynchronizationContext();
	
	Task.Factory.StartNew(() => GetProducts(currentPosition))
			.ContinueWith(OnProducts, scheduler);
}

private void OnProducts(Task<IEnumerable<Product>> task)
{
	if (task.IsFaulted)
		HandleException(task.Exception);
	else
	{
		// UI elements may only be updated on the main thread.
		InvokeOnMainThread(() => {
			this.products = task.Result;
			ShowProducts();
		});
	}
	Activity.PopNetworkActive();
}

Using the Implicit Operator in C# for Maximum Nerdy Good Times

My current team works with a lot of data, and we represent that data with explicit types. For example, if a string represents a name, we create an explicit type called Name, like so:

public struct Name
{
  private readonly string _string;
  public Name(string name) { _string = name; }
  public override string ToString() { return _string; }
}

This makes things nice for testing purposes, as these value structs are comparable for free and are clearly named for what the value represents. The downside is that if you need to set up a bunch of test data for a unit test, you can run into code that looks like this:

 Name name = new Name("Name");
 IEnumerable<Name> names = new[] { new Name("Tom"), new Name("Dick"), new Name("Harry") };

This will quickly give you carpal tunnel with all the ceremony required to create all the instances. It would be nice if we could reduce some of the noise, and we can, via the implicit operator. All we need to do is add the following conversion to our struct:

public struct Name
{
  private readonly string _string;
  public Name(string name) { _string = name; }
  public override string ToString() { return _string; }
  public static implicit operator Name(string name) { return new Name(name); }
}

What does this buy us? How does the following syntax strike you?

 Name name = "Name";
 IEnumerable<Name> names = new[] { "Tom", "Dick", "Harry", }

We are used to implicit typing on the left-hand side of a statement; were you aware conversions can kick in on the right-hand side too? I wasn’t. Note that the array has to be typed Name[] explicitly; with new[] the compiler would infer string[], which won’t convert to IEnumerable<Name>. Each element of the initializer goes through the implicit operator, the same conversion the compiler applies in any assignment. Nice, eh?

Thanks to Robert Ream for showing me this. It was fun working with someone with such a deep understanding of the language and of functional development, even if it was for a brief time.

PowerShell, msysgit 1.7.9 and Permission denied (publickey) Errors

TLDR: Add “$env:home = resolve-path ~” to your PowerShell profile.

I recently updated my work virtual machine to the latest release of msysgit, 1.7.9, to resolve some issues I was having with global settings not being obeyed. After the installation, I noticed that I was no longer able to update repositories from PowerShell. The output I was getting looked something like this:

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> git pull
Permission denied (publickey).
fatal: The remote end hung up unexpectedly

This was unexpected, and the first thing I thought of was the recent security issue with GitHub; maybe my work key needed to be validated. I checked GitHub and everything seemed to be set up correctly. I even went so far as to generate new keys, with no success.

Next up, it occurred to me to try connecting via git bash.

dirkdiggler@DIRKDIGGLER-VM /c/projects/foo (master)
$ ssh git@github.com
Hi dirkdiggler! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.

Bash seemed to be working fine, so I started troubleshooting my connection from PowerShell, testing ssh first with the following command.

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> ssh git@github.com
Permission denied (publickey).

So it looked like the problem was not with git but with establishing an ssh connection to GitHub. I wanted to see exactly what was happening when trying to connect via ssh, so I ran the following command, which enables verbose logging of the connection.

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> ssh -v git@github.com
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: Connection established.
debug1: identity file /.ssh/identity type -1
debug1: identity file /.ssh/id_rsa type -1
debug1: identity file /.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.6
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'github.com' is known and matches the RSA host key.
debug1: Found key in /.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /.ssh/identity
debug1: Trying private key: /.ssh/id_rsa
debug1: Trying private key: /.ssh/id_dsa
debug1: No more authentication methods to try.
Permission denied (publickey).

This output did not give me any immediate ideas about the problem, but I thought I might try the same command from git bash. I won’t include the full output here, but I did notice something different right away. Check out the following lines and compare them to the three identity file lines in the output above.

debug1: identity file /c/Users/MGALFAPAIR/.ssh/identity type -1
debug1: identity file /c/Users/MGALFAPAIR/.ssh/id_rsa type 1
debug1: identity file /c/Users/MGALFAPAIR/.ssh/id_dsa type -1

So it looks like ssh running under PowerShell is looking for my public/private key pair in a different directory than it does under bash. A quick google search turned up that an environment variable named HOME is used to determine the path to look for keys in. I went back to PowerShell and checked for the environment variable like so.

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> Write-Host $env:home

No HOME variable was set. So I set it like so.

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> $env:home = Resolve-Path ~

GIT [dirkdiggler] on [master] (clean) | C:\projects\foo
-> Write-Host $env:home
C:\Users\MGALFAPAIR

Running the ssh test again, I was now able to connect. Adding the command to my PowerShell profile sets it automatically every time I start PowerShell, resolving the problem completely.

Glenn Block Node.js on Windows Azure Video

Overview

If I told you that you can build node.js applications in Windows Azure, would you believe me? Come to this session and I’ll show you how. You’ll see how to take those existing node apps and easily deploy them to Windows Azure from any platform. You’ll see how you can make your node apps more robust by leveraging Azure services like storage and service bus, all of which are available in our new “azure” npm module. You’ll also see how to take advantage of cool tools like socket.io for WebSockets, node-inspector for debugging and Cloud9 for an awesome online development experience.

About Glenn

Glenn is a PM at Microsoft working on support for node.js in Windows and Azure. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has been a passionate supporter of open source and has been active in involving folks from the community in the development of software at Microsoft. This has included shipping products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a lover of community and a frequent speaker at local and international events and user groups.

Glenn’s blog can be found on CodeBetter, or you can follow him on twitter at your own risk.

Video

Slides

Source

The source for this presentation can be found on Adron’s github account.

SSDNUG Presents: Glenn Block – Unlock your inner node in the cloud with Windows Azure

The South Sound .NET Users Group is proud to present Glenn Block on Thursday, March tth, at 7:00 PM at the Olympia Center in the heart of downtown Olympia, WA.

If I told you that you can build node.js applications in Windows Azure, would you believe me? Come to this session and I’ll show you how. You’ll see how to take those existing node apps and easily deploy them to Windows Azure from any platform. You’ll see how you can make your node apps more robust by leveraging Azure services like storage and service bus, all of which are available in our new “azure” npm module. You’ll also see how to take advantage of cool tools like socket.io for WebSockets, node-inspector for debugging and Cloud9 for an awesome online development experience.

Glenn is a PM at Microsoft working on support for node.js in Windows and Azure. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has been a passionate supporter of open source and has been active in involving folks from the community in the development of software at Microsoft. This has included shipping products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a lover of community and a frequent speaker at local and international events and user groups.

Glenn’s blog can be found on msdn, or you can follow him on twitter at your own risk.