I Am Not Myself

Bills.Pay(Developer.Skills).ShouldBeTrue()

Category Archives: Javascript

Useful jQuery Plugin of the Day: Waypoints

At Cheezburger, my team is currently working on a sharing widget for our recently released list post type. If you take a look at this list, you will see our MVP of the idea. You will notice a share box in the left gutter that appears to float in a fixed position while you scroll through the items in the list, giving you the opportunity to share at any point.

We are using an awesome little jQuery plugin called Waypoints to accomplish this. Waypoints makes it easy to execute a function whenever you scroll to an element. Our implementation looks like this.

define(['jquery', 'mods/device', 'libs/waypoints'], function($, device) {
    var f = {};

    f.listen = function () {
        if (device.is.desktop) {
            $('.js-footer').waypoint(f.affix, { offset: '75%' });
        }
        return this;
    };

    f.affix = function (event, direction) {
        event.stopPropagation();
        $('#js-share-menu-wrap').toggleClass('is-fixed');
        return this;
    };

    return f;
});

In this code we are attaching our waypoint to the footer of the page. When the footer scrolls into the window viewport, the affix method gets called. The affix method simply toggles the is-fixed class on the share widget. We are also passing in an options object that tells the waypoint to trigger when the footer scrolls past the point 75% from the top of the viewport.

In our next iteration, we are planning to change the way the share button behaves as you scroll through the list. The idea is that when you click the button while it is within the area of the third list item, the image and title used to share the list on Facebook will be the third list item's information. So as you scroll through the list, we update the share button's shared information.

We have not implemented this functionality yet, but yesterday I did a spike to figure out how we could accomplish it. And once again, Waypoints made the task easy.

define(['jquery', 'mods/device', 'libs/waypoints'], function($, device) {
    var f = {};

    f.listen = function () {
        if (device.is.desktop) {
            $('.actions').waypoint(f.affix, { offset: '75%' });
            $('.list-asset-item').waypoint(f.onSwitch, { offset: $('#js-share-menu-wrap').offset().top });
        }

        return this;
    };

    f.affix = function (event, direction) {
        event.stopPropagation();
        $('#js-share-menu-wrap').toggleClass('is-fixed');
        return this;
    };

    f.onSwitch = function(event, direction) {
        var item = $(this);
        console.log(direction);
        console.log(item.index());
        console.log($('h2.title', item).text());
        console.log($('div.post-description', item).text());
    };

    return f;
});

In this code I am attaching a waypoint to each list asset item. But this time I am setting the offset to the fixed pixel position of the top of the share widget. This allows me to trigger the onSwitch function as the top of the list asset item crosses the top of the share widget. The handler is scoped to the list asset item, so I can easily grab its index, title, and description. Waypoints will even tell me which direction the user is scrolling.

Here is a quick video showing this code in action.
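
For what it's worth, here is a rough sketch of where that spike is headed once the logging gets swapped for real behavior: stashing the current item's title and description on the share widget so the share click can use them. The data attribute names here are hypothetical; none of this is shipped code.

f.onSwitch = function (event, direction) {
    var item = $(this);

    // Hypothetical: copy the current item's share info onto the widget so the
    // click handler can build the Facebook share from whichever item is in view.
    $('#js-share-menu-wrap')
        .data('shareTitle', $('h2.title', item).text())
        .data('shareDescription', $('div.post-description', item).text())
        .data('shareIndex', item.index());
};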

Test Driving a Node HTTP Server with James Shore

Recently I have been watching the Let’s Code Test Driven JavaScript video series by James Shore. The series shows James building an application in node.js while applying rigorous testing throughout the process.

The videos are nice and short at ~15 minutes each. I have been watching them in groups of about 3-5 at a time and following along, Learn Code The Hard Way style. And I am about 15 episodes in so far.

The interesting thing is that James is obviously not a node programmer. He states clearly that he is new to node. So the early videos are less about how to do the right thing in node and more about watching a disciplined professional programmer set small goals, explore a foreign environment and make discoveries. That alone is worth the cost of entry.

I find myself rooting for James, looking forward to when he discovers package.json and how it will help him manage his dependencies more explicitly. Wondering if he will discover how to write his own modules and publish his lint runner to NPM. Curious if he even sees value in that. Anxious for him to remove his node_modules folder from his git repository because it sets off my OCD.

At the same time I am learning a lot from him about approaching problems. Early on, James sets a goal to write a unit test that validates his server responds to GET requests. He proceeds to fail many times at this task, fighting with the intricacies of node development, asynchronous programming and unfamiliar tooling. At each failure, he steps back, considers the problem and tries a new approach. By the end of the process, James has not only accomplished the goal but has gained a considerable amount of knowledge about the environment.

So how do you test drive the creation of an HTTP server in node? Like this.

'use strict';

var http = require('http');

var server = http.createServer();

exports.start = function(port, callback){
	if(!port) throw new Error('port is required.');

	server.on('request', function(request, response){
		response.end('Hello World');
	});

	server.listen(port);

	if(callback)
		callback();
};

exports.stop = function(callback){
	server.close(callback);
};

And here is the test suite that drives it, written nodeunit style with setUp and tearDown hooks:

'use strict';

var server = require('./server'),
	http = require('http');

exports.request = {

	tearDown: function(done){
		server.stop(done);
	},

	setUp: function(done){
		server.start(8080, done);
	},

	testServerReturnsHelloWorld: function(test){
		var request = http.get('http://localhost:8080');

		request.on('response', function(response){
			var receivedData = false;
			
			response.setEncoding('utf8');
			
			test.equals(200, response.statusCode, 'status code');
			
			response.on('data', function(chunk){
				receivedData = true;
				test.equals('Hello World', chunk, 'response text');
			});

			response.on('end', function(){
				test.ok(receivedData, 'received data');
				test.done();
			});
		});
	}
};

exports.start = {

	testStartingTheServerWithOutAPortThrows: function(test){
		test.throws(server.start);
		test.done();
	},
	testServerRunsCallbackWhenStartCompletes: function(test){
		server.start(8080, function(){
			server.stop(test.done);
		});
	}

};

exports.stop = {

	testServerRunsCallbackWhenStopCompletes: function(test){
		server.start(8080, function(){
			server.stop(test.done);
		});
	},
	testCallingStopWhenServerIsNotRunningThrows: function(test){
		test.throws(server.stop);
		test.done();
	}
};

So if you are interested in these kinds of things, I highly recommend this series. It is worth the $25 per month for the more than 75 episodes currently available. It is the closest you are going to get to pair programming with James Shore without shelling out some serious cash to get him to come to you.

Getting Started Managing Client Side Scripts with Require.js

Back in the 90s when I started my development career, the first language I learned was Javascript. It is a deeply and perfectly flawed language. We created huge DHTML messes with it. And I moved on to server side development and got lost in .NET for a while. It all kind of left me with the following feeling.

With the rise of jQuery and other frameworks like it, javascript has come back front and center for me. As time goes by, I find I am doing more and more javascript daily. In fact, at Cheezburger javascript is about 80% of my day now. jQuery removes some of the browser compatibility burdens we felt way back in the 90s. And over the course of the last few years I have created a few jQuery messes.

Those messes look a little something like this.

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8">
		<title>Getting Started with Require.js</title>
		<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js"></script>
		<script type="text/javascript">
			$(function(){
				var cfg = $.parseJSON($('#config').html());
				if(cfg.isProduction)
					$('h1').css('background-color', 'red');
			});
		</script>
	</head>
	<body>
		<h1>Sample Application</h1>

		<p>This is a sample application used in a blog post to demonstrate 
			how to get started with <a target="_blank" href="http://requirejs.org">require.js</a>.</p>

		<script id="config" type="application/json">
			{
				"isProduction": true
			}
		</script>
	</body>
</html>

This is a very simplified example, but it is pretty representative of what I used to do. The server has rendered some client side configuration information into an application/json script tag. In the head of the document we have a script tag to include the minified jQuery library from the Google CDN. There is also a script block that uses the jQuery ready function to read the configuration information and modify the markup based on a setting.

One problem presented here is the way browsers load javascript. Script tags are blocking operations: when the browser hits a script tag, the script must be downloaded and fully evaluated before rendering continues. This is why some developers have adopted the convention of putting script tags and blocks at the end of the document body. That way all the markup is rendered before scripts start to load and execute.

Now imagine this sample also used about twenty jQuery plugins. All those plugins have to load in a blocking fashion, and all of them must be in the right order in the markup to satisfy dependencies. Every member of my team must be aware of those dependencies as well, and care for and feed them. Additionally, every script must load before we can fire off any functionality. Are you beginning to understand why some sites are so damn slow to render?

Finally, with this example there is no concept of modularity. I simply open up a ready function and start plugging away, grabbing data directly from its source and poking at the DOM directly. Abstractions were not my thing apparently; running with scissors as close to the metal as possible seems more accurate.

When I started at Cheezburger, I was introduced to require.js by Matt Mirande. Matt has become my personal javascript savior. He sat me down and we had a come to black baby jesus talk and I am a better man for it.

The best way I can describe require.js is IoC for javascript. If you are not familiar with IoC, please get thy self to the Castle project and do some learnin. Require.js allows you to think about your javascript in terms of discrete modules of functionality. It manages loading those modules asynchronously and executes them as their dependencies are satisfied.

I think the best way to describe require.js might be to simply demonstrate it by converting my example. To start, download the require.js library from the site and put it in the root of your scripts directory. Then add a reference to it. The markup now looks like this.

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8">
		<title>Getting Started with Require.js</title>
		<script type="text/javascript" src="scripts/require.js"></script>		
		<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js"></script>
		<script type="text/javascript">
			$(function(){
				var cfg = $.parseJSON($('#config').html());
				if(cfg.isProduction)
					$('h1').css('background-color', 'red');
			});
		</script>
	</head>
	<body>
		<h1>Sample Application</h1>

		<p>This is a sample application used in a blog post to demonstrate 
			how to get started with <a target="_blank" href="http://requirejs.org">require.js</a>.</p>
                
		<ol id="status">
			<li>loading...</li>
		</ol>

		<script id="config" type="application/json">
			{
				"isProduction": true
			}
		</script>
	</body>
</html>

Note that I added an ordered list to the markup. I will be using it to demonstrate the load order of the various things I am about to show you. Next, we need to tell require.js what to do. Create a javascript file right next to require.js called main.js. This will be the main entry point for our require context. The contents of this file look like this.

require([], function(){
	 var status = document.getElementById('status');
	 var item = document.createElement('li');
	 item.innerText = "main loaded";
	 status.appendChild(item);
});

We then need to change our reference to the require.js script to use our new main entry point script. Simply add a data-main attribute to the script reference like so. You can drop the .js off the end of your file name, and the path is relative to the page.

<script type="text/javascript" src="scripts/require.js" data-main="scripts/main"></script>

This script calls the require function passing in an array of dependencies and a function to call when the dependencies have been loaded and executed. Right now I don’t have any dependencies, so I pass an empty array. The function simply uses some straight DOM manipulation to add a list item to my status list.

All this DOM manipulation is silly, considering I have already established the use of jQuery. So I am going to fix that up by making jQuery my first dependency. jQuery is special as a dependency because I still want to load it from the CDN and it is not a proper AMD module. No problem, require.js can handle that.

require.config({paths: { jquery: 'https://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min' }});

require(['jquery'],function($){
	$('<li>main loaded</li>').appendTo('#status');
});

Using the config method of require, I can set the path that jquery should be loaded from. I only need to do this once in the main entry point script. All other modules can simply add a dependency to jquery and not worry about it.

My require statement has changed a little bit. I have explicitly added jquery as a dependency and pass a reference to it into my callback function. At this point I am free to use jQuery as I see fit. jQuery plugins will work the same way: if you are loading them from a CDN, set the path and simply add them to the dependency list. If you don't need to interact with the plugin directly, you can drop its reference from the callback function's signature.

require(['jquery','plugin-with-methods', 'just-needs-to-load'],function($, plugin){
	plugin.dosomethingwith($);
});

Note that, by convention, libraries that simply need to load should go at the end of the dependency list so they don't throw off the positional arguments of the callback function.
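
For completeness, here is a hedged sketch of the paths configuration that would go with the snippet above; the plugin names and CDN URLs are made up for illustration, and as before you leave the .js extension off.

require.config({
	paths: {
		jquery: 'https://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min',
		// Hypothetical CDN locations for the plugins used above.
		'plugin-with-methods': 'https://cdn.example.com/plugin-with-methods.min',
		'just-needs-to-load': 'https://cdn.example.com/just-needs-to-load.min'
	}
});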

We can now safely remove the reference to jquery from our markup and migrate our ready function into the body of our main.js require call.

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8">
		<title>Getting Started with Require.js</title>
		<script type="text/javascript" src="scripts/require.js" data-main="scripts/main"></script>
	</head>
	<body>
		<h1>Sample Application</h1>

		<p>This is a sample application used in a blog post to demonstrate 
			how to get started with <a target="_blank" href="http://requirejs.org">require.js</a>.</p>

		<ol id="status">
			<li>loading...</li>
		</ol>

		<script id="config" type="application/json">
			{
				"isProduction": true
			}
		</script>
	</body>
</html>

And here is the updated main.js:

require.config({paths: { jquery: 'https://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min' }});

require(['jquery'],function($){
	$('<li>main loaded</li>').appendTo('#status');

	var cfg = $.parseJSON($('#config').html());
	if(cfg.isProduction)
		$('h1').css('background-color', 'red');
});

We have now successfully set up require.js and modified our use of jQuery so that it is treated like any other require module. I can now start breaking out useful functionality into reusable modules. Configuration seems like a pretty obvious module: lots of other modules will want to use the configuration object, and knowing how to read it out of the markup seems like an SRP violation.

I’ll start by creating a mods folder in my scripts folder to hold all my modules, then create a configuration.js file in it. To define a module you call the define method, passing in your dependencies as an array along with a function to execute once they are loaded.

define(['jquery'], function($){
	$('<li>configuration loaded</li>').appendTo('#status');
});

This module simply appends a message to our status list saying that it has been loaded. Let’s modify our main.js to depend on this module.

require(['jquery', 'mods/configuration'],function($, configuration){
	$('<li>main loaded</li>').appendTo('#status');

	var cfg = $.parseJSON($('#config').html());
	if(cfg.isProduction)
		$('h1').css('background-color', 'red');
});

All I need to do is add the relative path of the configuration module to the dependencies list and add a reference to it in the callback function signature. Executing this code right now yields the following output in our status list.

  1. Loading…
  2. configuration loaded
  3. main loaded

Note that the configuration module is loaded and executed first. Then the main entry point is executed. This chain of dependencies is figured out by require.js. Now let’s flesh out our configuration module.

define(['jquery'], function($){
		
	var module = {};

	module.getConfig = function(){
		return $.parseJSON($('#config').html());
	};

	$('<li>configuration loaded</li>').appendTo('#status');
	return module;
});

And finally, modify our main.js to consume the new module.

require(['jquery', 'mods/configuration'], function($, cfg) {
	$('<li>main loaded</li>').appendTo('#status');

	if(cfg.getConfig().isProduction)
		$('h1').css('background-color', 'red');
});

Now our consuming code has no knowledge of how the configuration is retrieved; it just uses the values. We can feel free to refactor the configuration module without affecting consumers at all. Let’s do that and add some caching so we are not reading the DOM every time we get the configuration.

define(['jquery'], function($){
		
	var config, module = {};

	module.getConfig = function(){
		return config || (config = $.parseJSON($('#config').html()) || {});
	};

	$('<li>configuration loaded</li>').appendTo('#status');
	return module;
});

Bam, the application keeps on humming. Nice. Now admittedly this example is pretty contrived, but I hope the point was delivered and you could follow all the moving parts. If you are interested in trying it out, the project is up on GitHub. Why not try implementing a module to handle our little status logging implementation?
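
If you want a head start on that exercise, here is a minimal sketch of what such a status module might look like; the mods/status path and the log method name are my own invention, not part of the sample project.

define(['jquery'], function($){

	var module = {};

	// Append a message to the status list so callers don't touch the DOM themselves.
	module.log = function(message){
		$('<li/>').text(message).appendTo('#status');
		return module;
	};

	module.log('status loaded');
	return module;
});

main.js and the configuration module could then depend on 'mods/status' and call its log method instead of appending list items themselves.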

Update: Here are some comments I got from Mr. Mirande on this post. I thought my readers might be interested in hearing the critique of this article from my javascript mentor.

First off, wow, totally honored / humbled by the shout-out dude! So awesome – Thanks! 🙂 Second, great post sir! Was a fun and fast read which managed to pack almost all the key points into a nice, digestible nugget. Couple things I noticed… and in general these point to main flaw in the whole AMD thing: things can get a bit fiddly / WTF-y once you get past the initial setup… and, well, initial setup itself is kind of a pain 🙂

1: “jQuery… is not a proper AMD module” – well, it kind of is. It’s just making use of a feature of AMD that is usually best avoided – named modules. For almost every use-case, you want to go with “anonymous” modules as they are easier to move around and require less boilerplate. We do actually use named modules during testing (via our testing.js thinger (testing.req.using() ) this is how we stub out dependencies at the module level.

2: jQuery plugins – This is trickier… basically only AMD modules (shim’ed or otherwise) can be executed in a specific order. Normal, non-AMD, scripts will be load and executed immediately. So, in your example where you load jQuery as a module and a jQuery plugin as a script, there’s a chance the plugin will load before jQuery itself and throw an error. >_< In require 2.x, they added the concept of a “shim” to better accommodate scenarios where devs are working with both AMD and non-AMD resources. Again, shit gets fiddly 😦 so I’ve always just wrapped plugins such that they operate either as AMD modules or standard browser globals. Neither approach is really friendly to folks just getting started unfortunately.

3: Module return values – I like how you describe the ordering of dependencies in the define() / require() callback functions but you might want to be more specific in describing the module mechanic (e.g. explicitly returning something vs. not). Personally, I found this to be the key revelation when working with AMD – each module offers it’s own “sandbox” of sorts – a new scope where you don’t have to worry about name collisions, can specify public / private things, etc.

Anyway, I don’t think these are major omissions… it’s really tricky to find the right level of detail when explaining AMD to newbs and I think you’ve pretty much nailed it… but I figured I’d point them out in case they are helpful.
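
To make Matt's second point a bit more concrete: require.js 2.x lets you declare load-order dependencies for non-AMD scripts with a shim config. A hedged sketch, using the waypoints module id from the first post as the example (adjust the paths to your own layout):

require.config({
	shim: {
		// Non-AMD plugin: make sure jQuery is loaded and executed first.
		'libs/waypoints': ['jquery']
	}
});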

Implementing Asynchronous Actions Based On User Preferences Client Side

On my current project, we needed a way to check a user set preference before taking action on behalf of the user. To be specific, we wanted to check if the user prefers for us to post an open graph action to facebook when they favorite a meme on our site. The trick is the user preferences are stored in our profile database and all of our open graph work is purely client side.

In our open graph module, we really didn’t want to care how the user preferences were stored. We simply wanted to consume them in a clean way. An example looks like this:

f.onFavorite = function (postUrl) {
    var options = {
        data: postUrl,
        allowed: function(postUrl){
            f.postFavorite(postUrl);
        },
        unknown: function(postUrl){
            $('#js-og-enable-favorite-btn').data('postUrl', postUrl);
            $('#js-og-favorite-gate').modal("show");
        }
    };
    preferences.userPrefers('favoriting', options);
};

This is the click handler for a favorite button. We pass in the URL of the meme the user favorited and construct an options object. The options object defines the data associated with the preference as well as a function to perform if the user allows the action. We also include a function to execute if the preference is not currently set, so we can prompt the user to set one. Finally, we call the preferences module with the preference in question and the options.

Deep in the bowels of our preferences module is the userPrefers method. It looks like this.

f.userPrefers = function(preferenceName, options){
    f.withCurrentPreferences(function(preferences){
        if(preferences[preferenceName])
            options.allowed(options.data);

        if(preferences[preferenceName] == null)
            options.unknown(options.data);
    });
};

This function calls withCurrentPreferences and passes in a function describing what to do with the set of current preferences. We check to see if the preference in question is enabled and, if it is, call the allowed method passing along the data. Finally, it checks if the preference is explicitly null and calls the unknown method if it is.

So far fairly clear and concise. But what magic is this withCurrentPreferences method?

f.withCurrentPreferences = function(action){
    var preferences = f.getPreferencesCookie();
    if(preferences)
        action(preferences);
    else
        f.getPreferences(action);
};

f.getPreferences = function(action) {
    $.ajax({
        dataType: "jsonp",
        url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
        success: function(preferences){
            f.setPreferencesCookie(preferences);
            if(action)
                action(preferences);
        }
    });
};

The method takes an action to execute with the preferences and attempts to read a locally stored preference cookie. We cache preferences locally so we don't bombard our app servers with unneeded calls. If the cookie-based preference exists, we simply call the action, passing along the preferences. If not, we call getPreferences, passing along the action. Finally, the getPreferences function makes an AJAX call out to our app server to get the preferences. On success it saves a preference cookie and, if an action was passed in, calls it.

And there you have it: a nice, clean, asynchronous way of taking actions based on a user's preferences that is managed completely client side, with a local caching mechanism to make it zippy.

Here is the full source of the AMD module.

define(['jquery', 'mods/ono-config', 'mods/utils/utils'], function ($, config, cookieJar) {
    var cfg = config.getConfig();
    var f = {};

    f.getPreferences = function(action) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/Preferences',
            success: function(preferences){
                f.setPreferencesCookie(preferences);
                if(action)
                    action(preferences);
            }
        });
    };

    f.setPreferencesCookie = function (preferences) {
       cookieJar.destroyCookie('preferences', cfg.CookieHostname);
       cookieJar.setCookie('preferences', JSON.stringify(preferences), 1000, cfg.CookieHostname);
    };

    f.getPreferencesCookie = function(){
      return JSON.parse(cookieJar.getCookie('preferences'));
    };

    f.userPrefers = function(preferenceName, options){
        f.withCurrentPreferences(function(preferences){
            if(preferences[preferenceName])
                options.allowed(options.data);

            if(preferences[preferenceName] == null)
                options.unknown(options.data);
        });
    };

    f.withCurrentPreferences = function(action){
        var preferences = f.getPreferencesCookie();
        if(preferences)
            action(preferences);
        else
            f.getPreferences(action);
    };

    f.savePreference = function(preferenceName, value){
        f.withCurrentPreferences(function(preferences){
            preferences[preferenceName] = value;
            f.setPreferencesCookie(preferences);
            f.setPreference(preferenceName, value);
        });
    };
    
    f.setPreference = function (preferenceName, value) {
        $.ajax({
            dataType: "jsonp",
            url: cfg.ProfileDomain + '/' + cfg.Username + '/SetPreference',
            data: {
                preferenceToSet:preferenceName,
                preferenceValue: value
            }
        });    
    };
    
    return f;
});
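
One piece not shown above is how the modal's enable button feeds back into this module. Here is a hedged sketch of how a consumer might wire that up, reusing the button id and the postUrl data set in the earlier favorite handler; the event binding itself is my own illustration, not our production code.

$('#js-og-enable-favorite-btn').on('click', function () {
    // Persist the choice (cookie + app server), then run the action that was pending.
    preferences.savePreference('favoriting', true);
    f.postFavorite($(this).data('postUrl'));
});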

Node.js: It’s Not Just for Websites

So I have been working on a node.js project recently that I was hosting on Heroku. Sadly, Heroku doesn’t allow socket.io based node apps to use true websockets. So I asked my good friend Adron for the best Heroku-like node host out there that did support them. He suggested Nodejitsu.

So I signed up, and my hopes were immediately dashed when I discovered they are metering access to their beta. You have to camp out on their activation site waiting for them to allot a few more activations. This sounded boring, so I decided to automate it with node, of course. I fired up Sublime Text 2 and ripped this out.

var util = require('util'),
    exec = require('child_process').exec,
    rest = require('restler');

var alertMe = function(){
	exec('say -v Cellos Bobby, come get your nodejitsu beta');
};

var checkSite = function(){
	util.puts('checking if I can get you into the beta yet.');
	rest.get('http://activate.nodejitsu.com/').on('complete', function(result){
		if(result instanceof Error) {
			util.puts('Error: ' + result.message);
		} else {
			if(result.indexOf('We\'ve hit our limit today. Please try again later.') < 0)
				alertMe();
			else
				util.puts('damn it...');
	   }
	});
};

var pollingSite = setInterval(checkSite, 10000);

Yes, this script hits the website every 10 seconds, checking whether the limit message is gone from the page and playing a spoken alert if it is. I was sufficiently amused by this that I gisted it and posted it to twitter. The funny thing is, within a minute I had been retweeted by Joshua Holbrook, the support lead for Nodejitsu, and got the following response from NodeKohai, the IRC bot for the Nodejitsu channel.

@NotMyself Very nice! Now come join ‪#nodejitsu‬ on freenode to claim your prize!

You see, sometimes being a smart ass is a bonus. It gets you free things! Also, here is a quick video showing the script in action.

How Deep a Simple Problem Can Get: Moment, Node, Heroku & Time Zones

Over the weekend I started building my first real node.js application. I had watched the Hello Node series from TekPub, read the LeanPub books and attended NodePDX this year. I was ready to get down in the weeds and start writing a real application.

I have also been wanting to try to connect with the local non-.NET community in Olympia. Not that I ever see my day job not involving .NET, but I am interested in learning different ecosystems, languages and frameworks; I think it makes me a more well-rounded developer in the long run. So I started a meetup group for Olympia, WA Node users and beginners.

My idea for a node app was to create a site that consumes the meetup api and displays upcoming meetings. Fairly simple. You can see the result of my weekend's worth of work here. The site is a simple twitter bootstrap based single page that has a carousel widget displaying the upcoming meetings, currently only one scheduled.

You can see meetup specific api data including the number of members who have said they were attending, the location and a google map link, as well as a date and time. I was pretty happy with myself and blasted the link out to the world via twitter and facebook. Little did I know I had missed something in the details, which Chris Bilson was so kind as to point out. The date being displayed on the site said the meeting was being held at 1:30 AM.

The meetup api returns an event object containing two bits of information related to the event’s date and time: time and utc_offset. The time is expressed in milliseconds since the UNIX Epoch, and the utc_offset is in milliseconds as well. Because I was in full-on cowboy mode coding up a storm, my initial implementation of prettifying the date looked like this, with no tests.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input) {
		return moment(input).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

This node module uses the awesome Moment module to parse a UNIX Epoch number into a date and then format it using standard date formatting. This worked awesomely on my local machine, so I didn’t think about it any more and moved on, until Chris chimed in.

Chris had suggested that it might have something to do with UTC. I was also a little embarrassed that I didn’t have such a simple thing under unit test. So I started fixing the bug by getting the code under test. I had a couple of well-known values for the currently scheduled meeting.

var helpers = require('../lib/helpers').helpers;

describe('helpers', function(){
	
	describe('pretty date', function(){
		var input = { time: 1341279000000, utc_offset: -25200000 },
		    expected = 'Monday, July 2nd 2012 6:30:00 PM',
		    actual = helpers.prettyDate(input.time, input.utc_offset);

		it('prints a pretty date in the correct time zone', function(){
			expected.should.equal(actual);
		});
	});
});

The interesting thing here is that the test passed without modifying the implementation code at all. You see, Moment formats dates using the time zone of the environment it runs in. So if I were able to run this test on Heroku, the test would fail. I was a bit stumped and came back around to my sad little cowboy ways, modifying the implementation like this.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date, utc_offset) {
		return moment(input_date).utc()
		       .add('milliseconds', utc_offset)
		       .format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I was grasping at straws, but this modification didn’t affect the test running locally. I was curious what would happen when running the site on Heroku. I suspected I would have the same issue. I was very surprised to see that the code worked.

The downside was that I didn’t understand why, and that bugs the crap out of me. I couldn’t let it go. Getting the code to work was not enough; I needed to understand why. So I started googling. I lucked out and found this blog post on Adevia Software’s blog.

It clicked for me after that. The reason the test for the new code passed locally and the code worked on Heroku all had to do with the time zone settings of the environment running the code. My local environment is set to PST, so parsing a Unix Epoch based date with Moment gives a PST date, which is then converted to UTC and reduced by the PST UTC offset, landing right back on the original PST date Moment created.

Heroku’s default apparently is UTC. Apply the same logic and you end up with a UTC date that has been reduced by seven hours but is still a UTC date. It looks right on Heroku because my pretty printer doesn’t include the timezone. If it did, it would be wrong.
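
Underneath all of this is the fact that Moment's formatting depends on the process time zone. For the meeting timestamp used in the test above, the original one-argument helper gives roughly the following (the strings are my own illustration of the two environments):

// 1341279000000 ms is 2012-07-03T01:30:00Z.
// With TZ=America/Los_Angeles (my machine):
//   prettyDate(1341279000000) -> 'Monday, July 2nd 2012 6:30:00 PM'
// With TZ=UTC (Heroku's default):
//   prettyDate(1341279000000) -> 'Tuesday, July 3rd 2012 1:30:00 AM'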

Once again I understood how the code worked, and it was working, but it was wrong. The nag in the back of my head would not let it go. It’s a bug; bugs must die. Now that I understood what was going on, I went back and reverted my helper to this implementation.

var moment = require('moment');

exports.helpers = {
	prettyDate: function(input_date) {
		return moment(input_date).format("dddd, MMMM Do YYYY h:mm:ss A");
	}
}

I then issued the following command to Heroku from the command line.

Derp:website cheezburger$ heroku config:add TZ=America/Los_Angeles

Finally, I redeployed the site and all is right with the world; I can get some work done now. Thanks, Bilson. This episode of OCD is brought to you by the letters W, T & F.

My Ghetto Non-AMD Compliant Dependency Loader

At Cheezburger, we make use of require.js for most of our client side javascript. Recently I had to implement some features that needed to pull in lots of 3rd party scripts that were not AMD compliant. The documentation, of course, told me to put script tags directly in the head of every page, which as I have learned recently is a blocking operation (one of the problems that require.js solves cleanly).

So, I took some time and came up with a simple asynchronous dependency loader for this situation.

    var dependency_loader = function (dependencies) {

        var callback = undefined;

        var ready = function (cb) {
            callback = cb;
            load_all();
        };

        var loaded = function () {
            if (is_completed() && callback)
                callback();
        };

        var is_completed = function () {
            for (var i = 0; i < dependencies.length; i++) {
                if (!dependencies[i].is_loaded)
                    return false;
            }
            return true;
        };

        var load_all = function () {
            for (var i = 0; i < dependencies.length; i++) {
                load_dependency(dependencies[i]);
            }
        };

        var load_dependency = function (dependency) {
            var dependency_element = utils.addScript(dependency);
            $(dependency_element).load(function () {
                dependency.is_loaded = true;
                loaded();
            });
        };

        return {
            ready: ready
        };
    };

This object takes an array of dependencies and exposes a ready function that takes a callback to be executed when all dependencies have loaded. Usage looks something like this.

var dependencies = [
    { id: 'foo', http: '//', path: 'cdn.somewhere.com/somedependency1.js', is_loaded: false },
    { id: 'bar', http: '//', path: 'cdn.somewhere.com/somedependency2.js', is_loaded: false }
];

dependency_loader(dependencies).ready(function () {
    doStuff();
});
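
The loader leans on a utils.addScript helper that is not shown here. A minimal sketch of what such a helper might look like, assuming it just builds a script element from the dependency's http and path fields and appends it to the head; this is my guess at the shape, not our actual utils module.

var utils = {
    addScript: function (dependency) {
        // e.g. builds <script id="foo" src="//cdn.somewhere.com/somedependency1.js">
        var script = document.createElement('script');
        script.id = dependency.id;
        script.src = dependency.http + dependency.path;
        script.async = true;
        document.getElementsByTagName('head')[0].appendChild(script);
        return script;
    }
};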

Feedback welcome. I am still in a learning phase, so if you have suggestions or want to point out holes, please do.