If you don’t like testing your product, most likely your customers won’t like testing it either.
Consider the test for our ‘share’ feature.
The test opens up the page, fills out the form, and makes sure the confirmation window appears. If it doesn’t appear, the test takes a screenshot and reports a failure.
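The original snippet isn’t reproduced here, but a minimal sketch of such a test might look like the following. The URL, selectors, and form fields are all made-up placeholders, not our real test:

```javascript
// Hypothetical CasperJS test for a "share" form -- URL and selectors
// are assumptions for illustration only.
casper.test.begin('share form shows a confirmation', 1, function (test) {
  casper.start('http://localhost:8000/share', function () {
    // Fill out the form; the final `true` submits it.
    this.fill('form#share', { email: 'friend@example.com' }, true);
  });
  casper.then(function () {
    if (this.visible('#share-confirmation')) {
      test.pass('confirmation window appeared');
    } else {
      // On failure, save a screenshot for debugging.
      this.capture('share-failure.png');
      test.fail('confirmation window did not appear');
    }
  });
  casper.run(function () {
    test.done();
  });
});
```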
The tests are generally pretty fast. We use Grunt to kick off the test suites, so all you need to do to run them is type grunt test. (That’s a lot easier to remember than casperjs --ssl-protocol=any --ignore-ssl-errors=true test path/to/tests!) Simpler tests typically take less than a second to run, but a few slower tests rely on external services and can take as long as 15 seconds. This led to concerns about the test run time: we want to run the whole suite frequently, but we don’t want it to take a couple of minutes each time.
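Our Grunt setup isn’t shown here, but a minimal, hypothetical Gruntfile task that aliases grunt test to that casperjs command could look like this:

```javascript
// Hypothetical Gruntfile sketch -- one way to wire `grunt test`
// up to the casperjs invocation; the paths are placeholders.
module.exports = function (grunt) {
  grunt.registerTask('test', 'Run the CasperJS suite', function () {
    var done = this.async(); // tell Grunt this task is asynchronous
    grunt.util.spawn({
      cmd: 'casperjs',
      args: ['--ssl-protocol=any', '--ignore-ssl-errors=true',
             'test', 'path/to/tests'],
      opts: { stdio: 'inherit' } // stream test output to the console
    }, function (err) { done(!err); });
  });
};
```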
The solution I went with was running them in parallel. They’re all independent, so no test needs to wait for any other to finish. CasperJS doesn’t officially support parallelization, so I jury-rigged something together with a shell script. It takes each test file, runs them all as background processes, and redirects their output to temporary files. Once they’re done, it cats all the output in order and then uses grep to print any failures at the end.
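The script itself wasn’t published, but here’s a self-contained sketch of the pattern. Throwaway stand-in scripts take the place of real casperjs invocations so the example can run anywhere; swap in your own test command:

```shell
# Sketch of the parallel-runner pattern. Dummy "tests" stand in
# for casperjs runs; substitute your real test command.
dir=$(mktemp -d)

# Three stand-in tests: two pass, one fails.
printf 'echo "test 1: PASS"\n' > "$dir/01.sh"
printf 'echo "test 2: FAIL"\n' > "$dir/02.sh"
printf 'echo "test 3: PASS"\n' > "$dir/03.sh"

# Run every test as a background process, each writing to its own log.
for t in "$dir"/*.sh; do
  sh "$t" > "$t.log" 2>&1 &
done
wait  # block until all background tests have finished

cat "$dir"/*.log                       # full output, in file order
failures=$(grep -h FAIL "$dir"/*.log)  # collect failures
echo "--- failures ---"
echo "$failures"                       # surface them at the end
```

The background-process-plus-wait idiom is what buys the speedup: the wall-clock time becomes roughly the slowest test rather than the sum of all of them.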
I used the time command to print how long the suite takes. It’s now around 25s instead of 90s+. That is, the run time is the slowest test’s run time plus some overhead, which is a big improvement over the sum of all the tests’ run times.
This was great when we only had a few tests, but as the suite grew larger, I noticed the server was starting to struggle. It could handle five connections opening at once, but a hundred was causing tests to time out. My solution for this was to split the tests into smaller batches. Instead of running 100 tests all at once and bringing the server down, I can run two sets of 50. It’s a little slower than it would be if they could all run at once, but it’s definitely faster than having some tests randomly time out and fail.
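The batching idea can be sketched in a few lines of shell. The numbers here are scaled down for illustration (the real suite used batches of 50), and a plain echo stands in for each test run:

```shell
# Sketch of batching to cap concurrent connections to the server.
# BATCH=2 with 5 dummy jobs keeps the example small; we used 50.
BATCH=2
log=$(mktemp)
count=0
for n in 1 2 3 4 5; do
  echo "test $n done" >> "$log" &   # stand-in for one test run
  count=$((count + 1))
  # Once a full batch is in flight, wait for it before launching more.
  [ $((count % BATCH)) -eq 0 ] && wait
done
wait  # wait for the final, possibly partial, batch
completed=$(grep -c "done" "$log")  # every job should have finished
```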
Now that the casper tests are quick and easy to run, they’re being used more frequently and catching errors faster. Some developers are writing casper tests before they write the actual code, too.
While CasperJS is a great tool for testing interactions and catching errors (like forms not submitting correctly), it doesn’t particularly care about how the page looks. The casper tests will happily pass even if no CSS loads on the page. A human would obviously see that something is broken, but the casper tests won’t. We wanted to catch problems like that without manually looking at every page. Fortunately, there’s a tool for that: PhantomCSS.
PhantomCSS builds on top of CasperJS. It takes baseline screenshots of your site; after you’ve made changes, it takes new screenshots, compares the two, and highlights any differences. This can be incredibly useful. For example, suppose you’re changing the header on one page to be centered. If that accidentally centers headers on other pages, it will show up as a failure in PhantomCSS.
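A sketch of what a PhantomCSS test looks like follows; the URL, selector, and directory names are placeholders, and the init/screenshot/compareAll calls follow the shape of PhantomCSS’s documented API:

```javascript
// Illustrative PhantomCSS test -- URL and selectors are placeholders.
var phantomcss = require('phantomcss');

phantomcss.init({
  screenshotRoot: './screenshots',      // baseline images live here
  failedComparisonsRoot: './failures'   // diffs for changed screenshots
});

casper.start('http://localhost:8000/');

casper.then(function () {
  // The first run saves a baseline; later runs compare against it.
  phantomcss.screenshot('header', 'page header');
});

casper.then(function () {
  phantomcss.compareAll(); // flag any visual differences as failures
});

casper.run(function () {
  phantom.exit(phantomcss.getExitStatus());
});
```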
Since PhantomCSS tests run the same way as the casper tests, I was able to use the same method to run them in parallel. Like with casper tests, individually they’re pretty quick, but running them all sequentially can be slow. Running them in parallel is a big time saver.
Now that we are using CasperJS and PhantomCSS, our confidence when releasing has gone way up. We no longer need to manually check styles on every page because PhantomCSS will show us every change. We don’t need to click through flows because CasperJS does that for us. We’ve been releasing new, bug-free features at a consistent rate that wouldn’t be possible if we were still testing manually.