Thursday, May 28, 2015

The perils of over testing

Test Driven Development (TDD) is a great thing. It allows us to write code "safe" in the knowledge that if we accidentally break something in another part of the system, we will be alerted to it and can re-write or fix accordingly.

It's such a good thing that in large teams people will often insist on 100% code coverage (i.e. every line of working code is exercised by a test). Some people are also compelled to test every single permutation and variation of how someone might access their program/website.

TDD does have some costs, however. Some are evident; others less so.

Cost 1 - Tests take time to run
True, automated tests are faster than manual testing, but they still take time to run. That means every time you make a change to the code, you have to wait for the suite to finish before you can feel that your code is "safe" for deployment.

Most of this time is a necessary evil.

A lot of it is unnecessary...

Should you write a feature that checks that a form is disabled when a checkbox is not checked? Well, it depends...

If this is essential to the end user getting to a process, then possibly yes.

If this is just to check a possible error condition that has little or no bearing on the final result, then possibly no.

You could write it while you are developing the feature and then remove it when you are done.

Or else you can extract the logic into an object-oriented JS component (say, a FormDisabler) and test it with Jasmine.

Bear in mind that testing JavaScript in a feature brings up the whole stack, which takes time. Time is valuable.
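
A minimal sketch of that Jasmine approach might look like this (the FormDisabler and its API here are hypothetical, just to illustrate the idea):

// form_disabler.js - plain logic, no stack required (hypothetical component)
function FormDisabler(checkbox, submitButton) {
  this.checkbox = checkbox;
  this.submitButton = submitButton;
}

// the rule worth unit testing: the button is disabled unless the box is checked
FormDisabler.prototype.update = function() {
  this.submitButton.disabled = !this.checkbox.checked;
};

// form_disabler_spec.js - a Jasmine spec that runs without the full stack
describe("FormDisabler", function() {
  it("disables the submit button when the checkbox is unchecked", function() {
    var checkbox = { checked: false };
    var button = { disabled: false };
    new FormDisabler(checkbox, button).update();
    expect(button.disabled).toBe(true);
  });
});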

One analogy is that testing is like packing for a trip. You might think that you will need to bring everything you have to cover every possible situation, but really you probably only need an overnight bag with a couple of changes of clothes. The cost is that the more you bring, the more you have to carry.

Cost 2 - Brittleness
Over-testing can lead to a very brittle code base where one small change results in many hours of fixing broken tests. In Agile development you will constantly make changes to the code base, and often what is wanted in the end is not what was expressed at the beginning.

If you test every single possible scenario, then a small business change can lead to a domino effect of broken tests.

The key is to identify what aspects/methods of a feature are most important and to test those.

Classes should be made bulletproof, but not features.

Martin Fowler's Test Pyramid is a good guide.

http://martinfowler.com/bliki/TestPyramid.html

Summary
At the end of the day, you have to use your brain and judgement to determine which tests are important and which are not. It is a subjective thing and can vary from team to team (and application to application). To return to the packing analogy: only pack what you need for the trip. If you are writing something mission critical that can never go down, where the slightest mistake can lead to loss of human life, then by all means go for 100% test coverage. On the other hand, if a mistake in part of your app just means a comment or review might not get posted, then test accordingly.

Sunday, April 12, 2015

ECMAScript 6

Today is a rather significant one for me as a JavaScript/Rails programmer, because Sprockets (https://github.com/rails/sprockets) upgraded to version 3, meaning I can now use the sprockets-es6 gem (https://github.com/josh/sprockets-es6) to transpile ES6 JavaScript to ES5.

Now, for those who didn't understand the last paragraph, here's what it means...

The language we commonly call JavaScript is actually formally known as ECMAScript (http://en.wikipedia.org/wiki/ECMAScript). Since 2009 all the major browsers have been interpreting ECMAScript version 5. In fact they mostly still do.

Version 6 is due to be formalised any day now (well, June 2015), which means a whole slew of new functionality is due to be added to the language (class constructors, formal inheritance, fat arrows, etc...). Browser makers will start implementing these new features once the spec is complete.
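
As a small taste, here is a sketch of my own (not from the spec) showing class syntax, inheritance, and fat arrows:

class Animal {
  constructor(name) {
    this.name = name; // a formal constructor
  }
}

class Dog extends Animal { // formal inheritance
  speak() {
    return this.name + ' says woof';
  }
}

// fat arrow: concise function syntax that keeps the surrounding "this"
var speak = (dog) => dog.speak();
console.log(speak(new Dog('Rex'))); // "Rex says woof"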

All of this sounds great, but for two things:

1) I want to start using ES6 today
2) Even if ES6-capable browsers did come out today, people are not all going to upgrade as soon as they do. Typically there is about a year's lag before a new feature (or set of features) has enough critical mass to be used with any confidence.

So what can we do?

Well, thankfully there are transpilers which can take most of the features of ES6 and rewrite them as ES5. The two main ones are Babel (https://babeljs.io/) and Traceur (https://github.com/google/traceur-compiler). I chose Babel for a couple of reasons: I found its resulting code more readable, and its gem, sprockets-es6, seemed the easiest to add to our project flow. (Basically, Traceur didn't work with Phantom, which is what we use for testing and which only supports a much older version of ECMAScript!)
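
To give a rough sense of what the transpiler actually does, here is a fat arrow in ES6 and approximately what Babel turns it into (my approximation, not Babel's exact output):

// ES6 input
var double = (n) => n * 2;

// roughly the ES5 output
var double = function (n) {
  return n * 2;
};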

Unfortunately, sprockets-es6 had a dependency on sprockets 3 which was in beta up until this weekend.

And now it's out of beta!

As I write this I am just testing it out on our develop server and hopefully I will go live with it soon.

Happy days.

If you want an example of what can be done with ES6, you can mess around with it here.

http://www.es6fiddle.net/


Monday, September 15, 2014

Opal - Ruby to JavaScript compiler

So I just found out about the Opal Ruby to JavaScript compiler and I am very intrigued.

http://opalrb.org/

So for a few years now, the Ruby community has been encouraged to use CoffeeScript as a JavaScript "alternative" (http://coffeescript.org/), but for some reason, as an avid JavaScript programmer, it never appealed to me. CoffeeScript offers a small amount of syntactic sugar on top of JavaScript, but it never seemed worth the learning curve to take it on. I don't actually mind JS's brackets (they let you know cleanly where things begin and end), and significant whitespace in a programming language has always seemed like a bad idea to me (it's bad enough that a misplaced semicolon can stop a program from running properly, but try searching for an extra whitespace character you can't even see!).

Opal, on the other hand, is actual Ruby compiled to JavaScript (not a Ruby/Python-ish syntax). It looks, feels, and even smells like Ruby because it is Ruby. The JavaScript it generates might be a little more verbose than that generated by CoffeeScript, but that's because it has to do more (Ruby and JavaScript are separate languages in the proper sense, not different dialects). It does, however, compile its classes using the module pattern, which means the output is safely scoped (http://toddmotto.com/mastering-the-module-pattern/).
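
For those unfamiliar, the module pattern wraps code in a function so its internals stay private. A generic sketch (not Opal's actual output) looks like this:

var Greeter = (function() {
  var greeting = 'hello'; // private: invisible outside the module

  function greet(name) {
    return greeting + ', ' + name;
  }

  // only what gets returned here becomes public
  return { greet: greet };
})();

console.log(Greeter.greet('world')); // "hello, world"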

It also has a "Native" bridge to interact with regular JavaScript which helps maintain clean separation (http://opalrb.org/docs/interacting_with_javascript/) as well as RSpec support for testing. It even supports method_missing.

In any case, there are many other interesting things about it, which I won't go into here.

It will be interesting to see where it leads...

PS I see Opal as being more like ASM.js (http://asmjs.org/) than CoffeeScript. It actually uses quite a limited subset of JS in order to function.

Wednesday, April 16, 2014

Checking performance in EaselJS apps

So one of the great things about EaselJS is that you can create Flash-like games in JavaScript. I have been doing so on www.activememory.com for the last couple of years.

Unfortunately, it came to our attention that on some systems the games ran rather slowly. The EaselJS docs themselves admit that the time between ticks might be greater than specified because of CPU load (http://www.createjs.com/Docs/EaselJS/classes/Ticker.html#method_setInterval).

So, is there any way we can track the actual time between ticks on a real system?

Fortunately, yes, we can.

On each element that uses the ticker, you can add an event listener specifying the method you want to fire on each tick.

createjs.Ticker.addEventListener('tick', tickListener);

In the method itself you will want a few instance (or global) variables

this.previousFrameTime
this.totalFrames
this.totalFrameTimes

as well as a local variable

currentFrameTime

On each tick, we get the current timestamp (new Date() in JavaScript) and compare it to the previous.

We can then accumulate the total number of ticks in the game along with the total time elapsed between frames.

Dividing the total time by the number of frames gives you the average frame time.

But enough of my yakking, here is a code sample (adapted from my code)

createjs.Ticker.setFPS(50);

var Theatre = function() {
  var me = this; // keep a reference to this object
  var canvas = $('#canvas').get(0); // get the canvas DOM element
  me.stage = new createjs.Stage(canvas); // create a stage object on that canvas

  me.previousFrameTime = null; // initially null
  me.totalFrames = 0;
  me.totalFrameTimes = 0;

  // bind the listener so that "this" inside it is this Theatre instance
  createjs.Ticker.addEventListener('tick', me.tickListener.bind(me));
}

Theatre.prototype.tickListener = function(event) {
  var me = this;
  var currentFrameTime = new Date();

  // skip the first frame because we haven't set a previousFrameTime yet
  if (me.previousFrameTime) {
    me.totalFrames++;
    me.totalFrameTimes += currentFrameTime - me.previousFrameTime; // milliseconds
  }

  me.previousFrameTime = currentFrameTime; // save this for the next tick
  me.stage.update(); // redraw the stage on each tick
}

// something else calls this when the game ends
Theatre.prototype.finish = function() {
  var me = this;
  console.log("average frame time in milliseconds: " + (me.totalFrameTimes / me.totalFrames));
}

When you subtract two dates in JavaScript, the result is the difference in milliseconds.

If your FPS is 50, then you should have values that are round about 20 (1000 / 50 = 20). Anything much larger is an indication that your game is running much more slowly than it should be.
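
If you would rather see an effective frame rate instead, just invert the average (a one-line addition to the finish method above):

var effectiveFPS = 1000 / (me.totalFrameTimes / me.totalFrames); // e.g. 1000 / 25 = 40 FPS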


Saturday, January 11, 2014

Our git workflow

In my current gig at ABC Active Memory (https://activememory.com/), we have put together a git workflow which works really well for us, so I decided to write about it here so that others can benefit.

The things you need to have in place, though, are Continuous Deployment (http://en.wikipedia.org/wiki/Continuous_delivery), Test Driven Development (http://en.wikipedia.org/wiki/Test-driven_development), Agile (http://en.wikipedia.org/wiki/Agile_software_development), and the desire to keep releasing small, independent, incremental features without any one of them blocking the others.

Branches

So the key is branches (obviously). We have 3 main branches which are shared by everyone, and every new feature takes place in a feature branch. 

Our 3 shared branches are 
  • acceptance
  • integration
  • master

Feature branch

When we start a new feature, we do all the work in a feature branch which we cut from integration. Our feature branches are named by ticket number (e.g. 575-add-invoice-to-order) so that they reference our JIRA tickets (though any ticketing system will do) and we can keep track of them.

If you have been working on a feature branch for a long time, it is worth occasionally rebasing or merging from integration to keep it fresh. 
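
In command form, that freshening-up looks something like this (a sketch; the branch name is from the example above):

git checkout 575-add-invoice-to-order
git fetch origin
git merge origin/integration    # or: git rebase origin/integration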

Acceptance

When we think the feature is ready for the business owner to inspect, we merge it into acceptance and once this build is green, we deploy it to the acceptance server for the business owner to look at and (hopefully) accept. If for some reason the business owner needs some changes, we do them in our feature branch and then re-integrate into acceptance and then redeploy. Rinse and repeat until accepted.

Periodically, we will wipe the acceptance branch (when there are no pending stories in it) and re-cut it from integration to weed out any stories that never get accepted (due to changing business needs).

Integration

Once the feature has been accepted, we merge it from our feature branch into integration. Never merge acceptance into integration as it may have other unaccepted stories in it.

If the merge has issues and the build goes red for some reason, you can fix them in integration, as integration will eventually be merged into master, which is what gets released.

Master

master only gets merges from integration, so those merges should be green most of the time. The one exception is hotfixes (i.e. any emergency typo or blocking issue), which might need to be done in master directly because time may be of the essence. Those hotfixes must be merged back into integration.

Once master is green, deploy it to the staging server to test the actual build. Once the feature is checked on staging, we deploy to our production servers and do another feature check.

No downtime releases

We strive for no-downtime releases most of the time, but occasionally we need some scheduled downtime. If the feature you are working on does need downtime, we normally hold off on merging it into integration until about an hour before deployment, and we make sure everyone knows about it (it's quite rare though). We even deploy migrations without downtime, provided we are only adding fields or tables (renames unfortunately do require downtime).

Advantages

So what advantages do we get from this workflow? Well, mainly, none of us are blocking each other. All of us can release without having to worry about the state of our coworkers' stories/features. Some features take longer to get accepted than others and it gives the business owners more time to examine a feature before accepting it.

Downsides

The downsides are minimal once you get used to the flow, but there is a little bit of complexity to take on when you first get on board. There is a lot of merging going on and sometimes you might have to fix merge errors twice (once in acceptance and once in integration).

The other downside is that if your feature actually depends on another feature, you need to wait for that one to be integrated before you start working. It is possible, if you can't wait, to cut from the other feature branch instead of integration, but it is not a risk-free approach (especially if the first feature gets cut, rejected, or delayed).

This flow also works best if all your developers are full stack or vertical, rather than horizontal. It is the case in our team that some of us are more geared towards the front end and some of us are geared towards the back end. We normally get around this either by sharing the feature branch or else by doing code reviews.

Summary

In summary, our git workflow is the following (a command-level sketch follows these lists)
  1. Feature branches for all new features
  2. Merge feature branch into acceptance and deploy on acceptance server for the business owner to accept
  3. Merge feature branch into integration and make sure build is green.
  4. Merge integration into master and deploy on staging
  5. Deploy to production

Additionally

  1. Hotfixes are done on master, deployed, and merged back into integration
  2. acceptance is periodically deleted and re-cut from integration
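
Expressed as commands, the happy path looks roughly like this (a sketch; remote and branch names assume the setup described above):

# 1. cut a feature branch from integration
git checkout integration && git pull
git checkout -b 575-add-invoice-to-order

# 2. ready for the business owner: merge into acceptance, deploy when green
git checkout acceptance
git merge 575-add-invoice-to-order

# 3. accepted: merge the feature branch (not acceptance) into integration
git checkout integration
git merge 575-add-invoice-to-order

# 4. merge integration into master, deploy to staging, then production
git checkout master
git merge integration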