Wednesday, April 16, 2014

Checking performance in EaselJS apps

So one of the great things about EaselJS is that you can create Flash-like games in JavaScript. I have been doing so on www.activememory.com for the last couple of years.

Unfortunately, it came to our attention that on some systems the games ran rather slowly. The CreateJS docs themselves admit that the time between ticks might be greater than specified because of CPU load (http://www.createjs.com/Docs/EaselJS/classes/Ticker.html#method_setInterval).

So, is there any way we can track the actual time between ticks on a real system?

Fortunately, yes, we can.

You can call addEventListener on the Ticker, passing it the method you want to fire on each tick.

createjs.Ticker.addEventListener('tick', tickListener);

In the method itself you will want a few instance (or global) variables

this.previousFrameTime
this.totalFrames
this.totalFrameTimes

as well as a local variable

currentFrameTime

On each tick, we get the current timestamp (new Date() in JavaScript) and compare it to the previous.

We also keep a running count of the ticks in the game and a running total of the time between them.

Dividing the total time by the number of ticks gives you the average frame time.

But enough of my yakking, here is a code sample (adapted from my code)

createjs.Ticker.setFPS(50);

Theatre = function() {
  var me = this; // set scope to this object
  me.canvas = $('#canvas').get(0); // get the canvas DOM element
  me.stage = new createjs.Stage(me.canvas); // create a stage object

  me.previousFrameTime = null; // initially null
  me.totalFrames = 0;
  me.totalFrameTimes = 0;

  // bind the listener so "this" inside it refers to this Theatre instance
  createjs.Ticker.addEventListener('tick', me.tickListener.bind(me));
}

Theatre.prototype.tickListener = function(event) {
  var me = this;
  var currentFrameTime = new Date();
  
  // skip the first frame because we haven't set a previousFrameTime yet
  if (me.previousFrameTime) {
    me.totalFrames++;
    me.totalFrameTimes += currentFrameTime - me.previousFrameTime;
  }

  me.previousFrameTime = currentFrameTime; // save this for the next tick
  me.stage.update(); // update the stage with each tick
}

// something else calls this when the game ends
Theatre.prototype.finish = function() {
  var me = this;
  console.log("average frame time in milliseconds:" + (me.totalFrameTimes / me.totalFrames);
}

When you subtract two Date objects in JavaScript, you get the difference in milliseconds.
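
For example, this is all it takes to time a block of code (no libraries involved):

var start = new Date();
// ... do some work ...
var elapsedMs = new Date() - start; // elapsedMs is a plain number of milliseconds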

If your FPS is 50, then you should see average frame times of round about 20ms (1000 / 50 = 20). Anything much larger is an indication that your game is running more slowly than it should be.
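
Incidentally, the Ticker can also measure this for you: it exposes a getMeasuredFPS() method that reports the actual frame rate over roughly the previous second, which makes a handy cross-check against your own numbers (worth confirming the method exists in the EaselJS version you ship):

// cross-check the hand-rolled average against the Ticker's own measurement
if (createjs.Ticker.getMeasuredFPS) {
  console.log('measured FPS: ' + createjs.Ticker.getMeasuredFPS());
}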


Saturday, January 11, 2014

Our git workflow

In my current gig at ABC Active Memory (https://activememory.com/) we have put together a git workflow that works really well for us, so I decided to write about it here so that others can benefit.

The things you need to have in place, though, are Continuous Deployment (http://en.wikipedia.org/wiki/Continuous_delivery), Test Driven Development (http://en.wikipedia.org/wiki/Test-driven_development), Agile (http://en.wikipedia.org/wiki/Agile_software_development), and the desire to keep releasing small, independent, incremental features without any one of them blocking the others.

Branches

So the key is branches (obviously). We have 3 main branches which are shared by everyone, and every new feature takes place in a feature branch. 

Our 3 shared branches are 
  • acceptance
  • integration
  • master

Feature branch

When we start a new feature, we do all the work in our feature branch, which we cut from integration. Our feature branches are named by ticket number (e.g. 575-add-invoice-to-order) so that they reference our JIRA tickets (though any ticketing system will do) and we can keep track of them.

If you have been working on a feature branch for a long time, it is worth occasionally rebasing or merging from integration to keep it fresh. 
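
In command form, that looks something like this (assuming your shared remote is called origin, and using the example branch name from above):

git checkout integration
git pull
git checkout -b 575-add-invoice-to-order

# later, to keep a long-running feature branch fresh:
git fetch origin
git merge origin/integration   # or: git rebase origin/integration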

Acceptance

When we think the feature is ready for the business owner to inspect, we merge it into acceptance, and once that build is green, we deploy it to the acceptance server for the business owner to look at and (hopefully) accept. If the business owner needs changes, we make them in our feature branch, then re-merge into acceptance and redeploy. Rinse and repeat until accepted.

Periodically, we will wipe the acceptance branch (when there are no pending stories in it) and re-cut it from integration to weed out any stories that never got accepted (due to changing business needs).
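
One way to do the re-cut (again assuming a remote called origin, and noting that this rewrites a shared branch, so warn your team before force-pushing):

git checkout integration
git pull
git branch -D acceptance
git checkout -b acceptance
git push --force origin acceptance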

Integration

Once the feature has been accepted, we merge it from our feature branch into integration. Never merge acceptance into integration as it may have other unaccepted stories in it.

If the merge has issues and the build goes red for some reason, you can fix them directly in integration, since integration eventually gets merged into master, which is what gets released.

Master

master only gets merges from integration, so those merges should be green most of the time. The one exception is hotfixes (e.g. an emergency typo fix or a blocking issue), which might need to be done in master directly because time may be of the essence. Those hotfixes must be merged back into integration.

Once master is green, deploy it to the staging server to test the actual build. Once the feature is checked on staging, we deploy to our production servers and do another feature check.

No downtime releases

We strive for no-downtime releases most of the time, but occasionally we need some scheduled downtime. If a feature does need downtime, we normally hold off on merging it into integration until about an hour before deployment and we make sure everyone knows about it (it's quite rare though). We even deploy migrations without downtime, provided we are only adding fields or tables (renames do require downtime, unfortunately).
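
To illustrate the kind of migration that can go out without downtime, here is a hypothetical Rails-style example (purely additive, so code that predates the column keeps working; the table and column names are made up):

class AddInvoiceNumberToOrders < ActiveRecord::Migration
  def change
    # adding a column is safe; renaming or removing one is not
    add_column :orders, :invoice_number, :string
  end
end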

Advantages

So what advantages do we get from this workflow? Well, mainly, none of us are blocking each other. All of us can release without having to worry about the state of our coworkers' stories/features. Some features take longer to get accepted than others and it gives the business owners more time to examine a feature before accepting it.

Downsides

The downsides are minimal once you get used to the flow, but there is a little bit of complexity to take on when you first get on board. There is a lot of merging going on, and sometimes you might have to fix merge conflicts twice (once in acceptance and once in integration).

The other downside is that if your feature actually does depend on another feature, you need to wait for that feature to be integrated before you start working. It is possible, if you can't wait, to cut from the other feature branch instead of integration, but that is not a risk-free approach (especially if the first feature gets cut, rejected, or delayed).

This flow also works best if all your developers are full stack or vertical, rather than horizontal. It is the case in our team that some of us are more geared towards the front end and some of us are geared towards the back end. We normally get around this either by sharing the feature branch or else by doing code reviews.

Summary

In summary, our git workflow is the following (sketched as commands after the lists below)
  1. Feature branches for all new features
  2. Merge feature branch into acceptance and deploy on acceptance server for the business owner to accept
  3. Merge feature branch into integration and make sure the build is green.
  4. Merge integration into master and deploy on staging
  5. Deploy to production

Additionally

  1. Hotfixes are done on master, deployed, and merged back into integration
  2. acceptance is periodically deleted and re-cut from integration
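
And the happy path, sketched as commands (deploys are project-specific so they are omitted; the branch name is the example from earlier):

git checkout -b 575-add-invoice-to-order integration   # 1. cut the feature branch
git checkout acceptance
git merge 575-add-invoice-to-order                     # 2. out for acceptance
git checkout integration
git merge 575-add-invoice-to-order                     # 3. accepted, so integrate
git checkout master
git merge integration                                  # 4. deploy to staging, then production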



Wednesday, August 07, 2013

JavaScript Constructors

Even though I don't agree with his thoughts about semicolons, this is a really good primer on JavaScript constructors and prototypes...

http://tobyho.com/2010/11/22/javascript-constructors-and/

IE console.log(). It works, except when it doesn't...

So I made an interesting discovery today.

console.log() in IE9 only works when the console is open!

As a web developer, and especially a JavaScript developer, the most useful tool in your arsenal is console.log(). And while you should (for the most part) remove console.log() calls from your code as soon as you have finished debugging, sometimes your library code might contain one, or, if you are writing a framework for an intermediary to use, you might deliberately leave a couple of console.log() messages for them.

console.log() first made an appearance in Firefox's Firebug (or at least that is when I first noticed it) and has since been in most major browsers, including Internet Explorer from version 8 onwards. Whilst developing, I have happily had my console open so I can debug things.

Anyways, fast forward to the current app I am working on. I am lucky enough to be working on a project that uses HTML5 canvas and is used by a game designer (who can put in some pieces of JavaScript code), so naturally I not only wrap certain parts in try/catch blocks, I also want to let the game designer know when his code snippet has failed, which I do using console.log().

On production, we were getting some weird bugs with IE9 and I could not for the life of me figure them out because every time I opened the console, lo and behold, they disappeared. This was driving me nuts!

Until today, when by chance I opened up the site in IE9 with the console closed and got a JavaScript error telling me that console was not defined.

Huh?

I opened the console to check, and it went away. I closed the console again but I still didn't see the error. Curious...

Anyways, after some googling, I found that in IE9 the console object simply does not exist until the developer tools have been opened, so any call to console.log() before then throws an error.

D'oh!

Anyways, I basically got around it by creating a mock console object if the console is not present.

i.e.

if (!(window.console && console.log)) {
  // stub out the console methods so stray logging calls don't throw
  window.console = {
    log: function(){},
    debug: function(){},
    info: function(){},
    warn: function(){},
    error: function(){}
  };
}
(credit: http://stackoverflow.com/questions/10183440/console-is-undefined-error-in-ie9)
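
Another way to skin this cat is to route your own logging through a little wrapper that checks for the console on every call. A minimal sketch:

function safeLog(msg) {
  // only log if a console actually exists right now
  if (window.console && window.console.log) {
    window.console.log(msg);
  }
}

The stub approach above is still nicer for third-party code you don't control, though, since it silences those calls too.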

So there you have it...

Thursday, June 21, 2012

Screen scraping with Nokogiri

So a few months ago I put together a side project at http://www.myrecipesavour.com/. Basically, the site allows you to put in the URL of a cooking recipe page and it will then parse the recipe into your collection.

So it turns out, reading data from another site is very easy with Nokogiri.

The source code is available here https://github.com/abreckner/MyRecipeSavour

There is a lot I am going to cover in the next few posts based on this code base (like Devise and Heroku), but for now we are focussed on this file https://github.com/abreckner/MyRecipeSavour/blob/master/app/models/site.rb

So we are going to look at the add_recipe method.

First we need to require a few packages
require 'open-uri'
require 'rubygems'
require 'nokogiri'
Unfortunately, I haven't yet figured out a heuristic for separating a recipe web page into a recipe's components (Title, Ingredients, Instructions, Amounts, etc.), but as a workaround I maintain a catalogue of CSS selectors which define these elements per domain. When I read the page, I use Nokogiri to parse out those elements for me using the CSS selectors.

i.e.
html = Nokogiri::HTML(open(url).read) # open the page
title = html.css(site.title_selector).text.strip # read the title

I then populate a recipe object with these pieces


recipe = Recipe.new
recipe.name = title
...
recipe.save


My code around the ingredients and instructions is a little more complex, as my Recipe model has many Ingredients and Instructions (eventually I am going to allow users to manipulate them individually). Each ingredient/instruction is delimited by a line break, so I pull the ingredient nodes out of Nokogiri and merge them into a single string separated by line breaks.

ingredients = html.css(site.ingredient_selector).children.inject(''){ |sum, n| sum + n.text + "\n" }
...
Ingredient.multi_save(ingredients, recipe)

The reason I convert it to a string and then back into an array is so that the user can later edit the ingredients via a textarea. It's fair to say that I actually wrote the multi_save code for textarea input before I did the screen scraping, and I wanted to reuse it.
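
For the curious, a multi_save along those lines might look something like this (a hypothetical sketch; the real method is in the repo, and the description attribute name is an assumption):

def self.multi_save(text, recipe)
  # one record per non-blank line of the newline-separated string
  text.split("\n").each do |line|
    recipe.ingredients.create!(:description => line.strip) unless line.strip.empty?
  end
end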

The other interesting piece of this add_recipe method is that I store a new Site in case the user tries to add a recipe from an "uncatalogued" site. This automatically builds up a list of the sites people are interested in saving recipes from and allows me to catalogue them at a later date.


site_domain = URI.parse(url).host
site = Site.find_by_domain site_domain
if site.nil?
  site = Site.new
  site.domain = site_domain
  site.url = url
  site.user = current_user
  site.save!
  false
else
  ... # Nokogiri scraping code goes here
end