Professional Clojure: The Book

A book I contributed to, Professional Clojure from Wrox Press, has just been released. It’s an intermediate-level Clojure book, providing practical examples for working developers, with a particular focus on Clojure tools for building web applications.

I contributed a section on Datomic, an exciting database system with a focus on immutable data from the creator of Clojure. To my knowledge, this is one of the only books covering Datomic, and I was very pleased to have the opportunity to write about it. Check it out on Amazon, or if you’d like you can peruse the code samples for the Datomic chapter.

The book project approached me at an interesting moment. I was working at a company that was making extensive use of RDF, and I had written an adapter that let us treat Datomic as an RDF triplestore. In the process, I learned a fair amount about the mechanics of RDF and the internals of Datomic. As it turns out, RDF and its ecosystem were major inspirations in the creation of Datomic – as were, I am now quite sure, the various painful shortcomings of RDF and triplestore databases when it comes to building production systems.

This experience gave me a different perspective than I had on Datomic from my previous encounters with it. In the book, I really tried to give readers an understanding of the why and how of Datomic, its data model, and its design, in addition to covering its practical use. My hope is that it’s both readable and useful.

As a fun game, readers can also try and guess at what time of night I wrote various sections of the chapter based on how loopy and deranged the examples get.

Stateful Components in Clojure: Part 3 on Testing


Previously in Part 1 and Part 2, we looked at some problems with using globals for stateful components in Clojure, and an approach to solving this problem using local, scoped state managed with Stuart Sierra’s Component library.

A few people pointed out that I failed to deliver on the promise of showing how this approach helps with testing. From the outside it may look as though I simply forgot to include this part, and in a startling display of poor priorities I spent my time writing a deranged dream sequence instead. It might look that way! But in the best tradition of a certain prominent politician, I boldly claim that my so-called “mistake” was completely deliberate – merely a part of my grand plan – and it is you who are mistaken.

So let’s take a look at testing these things.

Simple Web Handler Test

Remember in Part 1, there was this simple test of a web handler that uses a database from the global state:

(deftest homepage-handler-test
  (with-bindings {#'app.db/*db-config* test-config}
    (is (re-find #"Hits: [0-9]." (homepage-handler {})))))

We re-wrote our database connection as a record in the Component style, implementing a DBQuery protocol, like this:

(defprotocol DBQuery
  (select [db query])
  (connection [db]))

(defrecord Database
    [config conn]

  component/Lifecycle
  (start [this] ...)
  (stop [this] ...)

  DBQuery
  (select [_ query]
    (jdbc/query conn query))
  (connection [_]
    conn))

(defn database
  "Returns Database component from config, which is either a
   connection string or JDBC connection map."
  [config]
  (assert (or (string? config) (map? config)))
  (->Database config nil))

We also re-wrote our web app as a component that gets the database record injected in, like this:

(defrecord WebApp
    [database])

(defn web-app
  []
  (->WebApp nil))

(defn web-handler
  [web-app]
  (GET "/" req (homepage-handler web-app req)))

This gives us several ways to test our web app and handler functionality. First, we can duplicate the original test logic that uses a testing database.

(def test-db (atom nil))

(defn with-test-db
  [f]
  (reset! test-db (component/start (database test-config)))
  (f)
  (component/stop @test-db))

(use-fixtures :once with-test-db)

(deftest homepage-handler-test
  (let [test-web-app (assoc (web-app) :database @test-db)]
    (is (re-find #"Hits: [0-9]." (homepage-handler test-web-app {})))))

If we don’t want to use an actual database to test against, we can also mock the functionality we need in a mock database component.

(defrecord MockDatabase
    [select-fn]

  DBQuery
  (select [_ query]
    (select-fn query))
  (connection [_]
    nil))

(defn mock-database
  "Returns a mock database component that calls select-fn with the
   passed in query when DBQuery/select is called."
  [select-fn]
  (->MockDatabase select-fn))

(deftest homepage-handler-test
  (let [mock-db (mock-database (constantly {:hits 10}))
        test-web-app (assoc (web-app) :database mock-db)]
    (is (re-find #"Hits: 10" (homepage-handler test-web-app {})))))

Testing the Image Workers

A lot of this code can be reused to test the image worker component, which also uses a database connection. The code for that component is a bit longer, so refer to Part 2 if you need a refresher. It’s very straightforward to write a test for it that uses the test database from our previous example.

(deftest image-worker-test
  (let [... insert some tasks into the test db ...
        test-image-worker (component/start
                           (assoc (image-worker {:workers 2})
                                  :database @test-db))]
    (Thread/sleep 2000)
    (is (... check if the test-db's tasks were completed ... ))
    (component/stop test-image-worker)))

Notice also that we’ve hard-coded into our image worker component the assumption that the task queue lives in the database. This too could be improved by turning the task queue into its own component with its own protocol API.

(defprotocol TaskQueue
  (next-task [this]
    "Returns the next task in the queue, or nil if no task is in the queue.")
  (complete-task [this task]
    "Sets task as complete."))

(defrecord DBTaskQueue
    [config db]

  TaskQueue
  (next-task [this]
    (database/select db ...))
  (complete-task [this task]
    (let [conn (database/connection db)]
      (jdbc/with-transaction ...))))
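To stay consistent with the constructor-function style we used for Database, we can give the task queue a small constructor as well. This is only a sketch; the shape of config and the :db dependency key are assumptions:

(defn db-task-queue
  "Returns a DBTaskQueue component. The database component is injected
   under the :db key when the system is started."
  [config]
  (->DBTaskQueue config nil))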

This lets us flexibly mock out the task queue for testing components that use it.

(defrecord MockTaskQueue
    [tasks]

  TaskQueue
  (next-task [this]
    (let [[nxt & rst :as ts] @tasks]
      (if (compare-and-set! tasks ts rst)
        nxt
        (recur))))
  (complete-task [this task]
    true))

(defn mock-task-queue
  "Takes collection of tasks, returns mock task queue."
  [tasks]
  (->MockTaskQueue (atom tasks)))
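A quick REPL check shows how the mock drains its tasks (the task values here are illustrative):

(def tq (mock-task-queue [:task1 :task2]))

(next-task tq) ;; => :task1
(next-task tq) ;; => :task2
(next-task tq) ;; => nil, the queue is now empty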

(deftest image-worker-test
  (let [tq (mock-task-queue [test-task1 test-task2])
        test-image-worker (component/start (assoc (image-worker {:workers 2})
                                                  :task-queue tq))]
    (Thread/sleep 2000)
    (is (... check if the test task images were processed ...))
    (component/stop test-image-worker)))

One thing that was skipped over in the previous articles is that the image workers perform side effects beyond touching a database. They call an `add-watermark` function, which presumably reads the image, does some kind of processing, and re-saves it somewhere. The testability of this, too, could be improved by converting it into a component that can be mocked out or redefined in tests of the image worker, as sketched below.
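Here is one way that might look, assuming a hypothetical ImageProcessor protocol that wraps the `add-watermark` call from Part 2 (and assuming `add-watermark` takes the task); all of the names here are illustrative:

(defprotocol ImageProcessor
  (process-image [this task]
    "Performs the side-effecting image work for a single task."))

(defrecord WatermarkProcessor []
  ImageProcessor
  (process-image [_ task]
    (add-watermark task)))

(defrecord MockImageProcessor [processed]
  ImageProcessor
  (process-image [_ task]
    ;; Remember the task instead of touching the filesystem.
    (swap! processed conj task)))

(defn mock-image-processor
  "Returns a mock image processor that records every task it is given."
  []
  (->MockImageProcessor (atom [])))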

Test Systems

It’s possible to take this test style even further, and build complete Component system maps for our testing. To refresh our memory, a component system map is a structure that defines all the components in our system, describes their dependencies, and can start them in order and pass in dependencies to return a running system.

In Part 2, we built a system that includes a web server, web handlers, and database. Let’s look at defining a system-wide test.

(def test-system
  (component/system-map
   :db (database test-db-config)
   :web-app (component/using (web-app) {:database :db})
   :web-server (component/using (immutant-server {:port 8990})
                                [:web-app])))

(deftest system-test
  (let [started-system (component/start test-system)
        test-response (http/get "http://localhost:8990/")]
    (is (re-find #"Hits: [0-9]." (:body test-response)))
    (component/stop started-system)))
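If several tests need a running system, the same fixture style we used for the test database works at the system level too. A minimal sketch, assuming the test-system defined above; tests can then hit the running server directly:

(def running-system (atom nil))

(defn with-test-system
  [f]
  (reset! running-system (component/start test-system))
  (try
    (f)
    (finally
      (component/stop @running-system))))

(use-fixtures :once with-test-system)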

Advantages

The most natural way to work in Clojure – as a functional language with a fantastic REPL-driven workflow – is to call functions and control their behavior by passing them arguments. This enhances our workflow in development, and also gives us flexible and straightforward ways to test our code.

A component-driven code style provides the same kind of leverage for the parts of our systems that deal in side effects, state, and external systems. We control the behavior of these components by passing them different arguments at construction time and by passing in different dependencies. This delivers the same kinds of benefits as pure functions, while isolating side effects and state.

The Recap

I wrote this series because when I started in Clojure, the process of building clean systems was painful. There wasn’t a community consensus on how to manage stateful resources, and there certainly wasn’t a framework I could leverage. My goal was to write the guide I wish I had had: one that ties the concepts together so beginners can experience the joy of Clojure even earlier than I did.

In this series, we looked at a few strategies for managing stateful resources in the global environment and some problems these approaches cause for interactive development and testing. Then we had a nightmare about being complected by kittens. Then we looked at a design pattern using Stuart Sierra’s component library for managing stateful resources in a local scope with dependency injection. Lastly, we looked at some strategies for testing components built in this style.

After reading this series, I hope you see the beauty I see in building clean systems in Clojure using components.  Go forth and build, and may the kittens always be in your favor.

Data Scientist’s Toolbox for Data Infrastructure I


Recently, the terms “Big Data” and “Data Science” have become important buzzwords; massive amounts of complex data are being produced by business, scientific applications, government agencies and social applications.

“Big Data” and “Data Science” have captured the business zeitgeist thanks to extravagant visualizations and the predictive power of today’s newest algorithms. Data Science has nearly taken on mythical proportions, as though it were a quixotic oracle. In reality, Data Science is more practical and less mystical. We as Data Scientists spend half of our time solving engineering infrastructure problems, designing data architecture solutions, and preparing data so that it can be used effectively and efficiently. A good data scientist can create statistical models and predictive algorithms; a great data scientist can also handle infrastructure tasks and data architecture challenges while still building impressive and accurate algorithms for business needs.

Throughout this blog series, “Data Scientist’s Toolbox for Data Infrastructure”, we will introduce and discuss three important subjects that we feel are essential for the full stack data scientist:

  1. Docker and OpenCPU
  2. ETL and Rcpp
  3. Shiny

In the first part of this blog series, we will discuss our motivations for adopting Docker and OpenCPU, then follow with practical examples of how Docker containers reduce the complexity of environment management and how OpenCPU allows for consistent deployment of production models and algorithms.

 

CONTAINING YOUR WORK

Environment configuration can be a frustrating task. Dealing with inconsistent package versions, diving through obscure error messages, and waiting hours for packages to compile can wear anyone’s patience thin. The following is a true (and recent) story. We used a topic modeling package in R (alongside Python, the other go-to language for Data Scientists) to develop our recommender system. The recommender system came with several dependencies, one of them being “Matrix” version 1.2-4. Somehow we upgraded “Matrix” to version 1.2-5, which (unfortunately for us) was not compatible with the development package containing our recommender system. The worst part was that the error messages gave no indication that a version upgrade was the cause, which cost us several hours of debugging to remedy the situation.

Another, similar example: our R environment was originally installed on CentOS 6.5. Using ‘yum install’ we could only obtain R version 3.1.2, which was released in October 2014 and was not compatible with many of the dependencies in our development and production environments. We therefore decided to build R from source, which took us two days to complete, owing to a bug in the source distribution that we had to dig into the source code to find.

This raises the question: how do we avoid these painful, costly, yet avoidable problems?

SIMPLE! With Docker containers, we can easily handle many of our toughest problems simultaneously. We use Docker for a number of reasons; a few of the most relevant are listed below:

  1. Simplifying Configuration: Docker provides the same capabilities as a virtual machine without the unneeded overhead. It lets you put your environment and configuration into code and deploy it, much like a recipe. The same Docker configuration can also be used in a variety of environments. This decouples infrastructure requirements from the application environment while sharing system resources.
  2. Code Pipeline Management: Docker provides a consistent environment for an application from QA to PROD, easing the code development and deployment pipeline.
  3. App Isolation: Docker can help run multiple applications on the same machine. Say, for example, we have two REST API servers with slightly different versions of OpenCPU. Running these API servers in different containers provides a way to escape what we refer to as “dependency hell”.
  4. Open Source Docker Hub: Docker Hub makes it easy to distribute Docker images; it contains over 15,000 ready-to-use images we can download and use to build containers. For example, if we want to use MongoDB, we can simply pull the image from Docker Hub and run it. Whenever we need to create a new Docker container, we can pull and run its image from Docker Hub:
`docker pull <docker_image>`

`docker run -t -d --name <container_name> -p 80:80 -p 8004:8004 <docker_image>`


We are now at a point where we can safely develop in multiple environments that share common system resources, without worrying about any of the horror stories mentioned above, simply by running:

`docker ps`

`docker exec -it <container_name> bash`


Our main architecture for personalized results is shown in the diagram below. We have three Docker containers deployed on a single Amazon EC2 machine, running independently with different environments yet sharing system resources. Raw data is extracted from SQL Server and goes through an ETL process to feed the recommender system. Personalized results are served through a RESTful API via OpenCPU and returned in JSON format.

[Architecture diagram: three Docker containers running independently on a single Amazon EC2 instance]

DISTRIBUTING YOUR WORK

OpenCPU is a system that provides a reliable and interoperable HTTP API for data analysis based on R. The opencpu.js library builds on jQuery to call R functions through AJAX, straight from the browser. This makes it easy to embed R-based computation or graphics in apps, so you can deploy an ETL job, computation, or model and have everyone using the same environment and code.

For example, suppose we want to generate 10 samples from a normal distribution with a mean of 5 and a standard deviation of 1. First, we call the function `rnorm` from R’s stats library. Performing an HTTP POST on a function results in a function call where the HTTP request arguments are mapped to the function’s arguments.

`curl https://public.opencpu.org/ocpu/library/stats/R/rnorm/ -d "n=10&mean=5"`


The output can be retrieved using HTTP GET. When calling an R function, the output object is always called `.val`. In this case, we could GET:

`curl https://public.opencpu.org/ocpu/tmp/x0aff8525e4/R/.val/print`


The body of that response contains our 10 samples.

Now imagine this type of sharing on a large scale, where an analytics or data team can develop and internally deploy their products across the company. Consistent, reproducible results are the key to making the best business decisions.

Combining Docker with OpenCPU is a great first step in streamlining the deployment process and moving towards self-serviceable products within a company. However, a full stack data scientist must also be able to handle data warehousing and understand the tricks for keeping code performant as systems scale. In part 2, we will discuss using R as an ETL tool, which may seem like a crazy idea, but in reality R’s functional characteristics allow for elegant data transformation. To handle performance bottlenecks that may arise, we will also discuss the benefits of Rcpp as a way of increasing performance and memory efficiency by rewriting key functions in C++.

Transforming Data with a Functional Approach


Almost without fail, every time folks talk about the benefits of functional programming (FP), someone will mention that it is “easier to reason about.” Speakers and bloggers make it sound obvious that immutable values, pure functions, and laziness lead to code that is easier to understand, maintain, and test. But what does this actually mean when you are in front of a screen, ready to bash something out? What are some actual examples of “easier to reason about”? I recently worked on a project where I found a functional approach (using Clojure) really did bring about some of these benefits. Hopefully this post will provide some real-world examples of how FP delivers these advantages.


The project involved loading and processing about a hundred thousand jobs so they can be indexed by ElasticSearch. The first step is easy enough: just load the data from the filesystem. We’ll use Extensible Data Notation (EDN), a data format that is really easy to use from within Clojure.

(require '[clojure.edn :as edn])

(def jobs (edn/read-string (slurp "jobs.edn")))

The `slurp` function takes a filename and returns its entire contents as a string. `edn/read-string` parses the EDN in that string and returns a Clojure data structure: in this case, a list where each job is represented as a Clojure map that looks something like this:

{:id 123
 :location "New York, NY"
 :full_description "This job will involve …"
 :industry "Lumber"
 …}

Pretty straightforward, and nothing really functional-programming-y just yet. However, trying to index these immediately throws a bunch of errors: ElasticSearch complains that some of these jobs have nulls for their locations. So it seems we have some bad data, which raises the question: how many of our jobs are affected? We could manually loop through all those jobs and keep a count of the ones with null locations. Yuck! Instead, let’s try an FP approach. Given a single job, we want to check whether it has `nil` for a location. Then we filter all the jobs down to just those `nil` ones and see how many we get.

(count (filter #(nil? (:location %)) jobs))
;; => about a hundred

Note: The semicolon character simply starts a comment and is used here to display the result.

First, the call to `filter` takes two arguments: a function and a collection. The function takes elements of the collection, in this case individual jobs, and returns whether the given job’s location (extracted with the `:location` keyword) is `nil` (checked with `nil?`).

Okay, so about a hundred bad jobs out of a hundred thousand. That’s not a whole lot, so we should be okay ignoring them. To do that, we want the opposite of the above: keep only the jobs that *don’t* have nulls for their locations.

(def no-nulls (filter #(not (nil? (:location %))) jobs))

Instead of keeping the `nil` locations, we wrap the `nil?` check in `not` so that only jobs with a location pass through. ElasticSearch can now happily take our jobs and index them just fine.
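As an aside, Clojure’s `remove` says the same thing a bit more directly; this is just an equivalent way to write the line above:

(def no-nulls (remove #(nil? (:location %)) jobs))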

We start doing some test searches against our ElasticSearch instance and quickly find out the results contain far more data than we need. Each job comes with a full job description that often contains a huge amount of text. But we don’t need all of that information. We want to create a new field for our jobs that contains a truncated version of this description. That way, the search results can be much smaller, reducing network traffic and time ElasticSearch spends serializing and marshaling all that data.

Let’s again try the FP approach. When we used `filter` earlier we were thinking of what had to be done with each individual job (check if a location is `nil`) and then we let `filter` and `count` do the grunt work. Here, what we need to do with each individual job is to add a new field. Since jobs are just hashmaps, we can simply add a new key-value pair. As for the truncated description, we can just take the first hundred characters of the full description. This is a bit more involved than the `filter` example, so let’s start with writing a function that handles an individual job.

(defn assoc-truncated-desc
  [job]
  (let [full-desc (:full_description job)]
    (assoc job :truncated_desc (subs full-desc 0 100))))

This function takes a single `job`, gets the full description and binds it to `full-desc`, then calls `assoc` to create the new key-value mapping. The key is the name of our new field, `:truncated_desc`, and the value is just a substring containing the first 100 characters.

A subtle thing to note is that the original `job` does not change. `assoc` returns a new hashmap with the new key-value pairing. This means `assoc-truncated-desc` is a pure function that takes an existing job and returns a new one, never mutating any state. Let’s say we call it on one of our jobs and find out that 100 characters is not enough. The original job remains untouched, so we can modify the code (say, change 100 to 150) and call it again until it’s just right.
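To make that concrete, here is a quick REPL session; the job value is made up, and padded out so the description is longer than 100 characters:

(def job {:id 123
          :full_description (apply str (repeat 120 "a"))})

(assoc-truncated-desc job)
;; => a new map with :id, :full_description, and a 100-character :truncated_desc

job
;; => the original map, unchanged; no :truncated_desc in sight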

So we’ve written a function that can process a single job. Hooray! How do we do this with 100,000 jobs? Instead of filtering out some number of jobs as we did with the null locations, we want to apply this transformation to every single job. This is exactly what the `map` function is for. Let’s call it on `no-nulls` from before.

(map assoc-truncated-desc no-nulls)

And that’s it! We return a new list where every job has the new truncated description field, ready to be indexed and searched. The final code looks like this:

(defn prepare-jobs
  [jobs]
  (map assoc-truncated-desc
       (filter #(not (nil? (:location %))) jobs)))

We started with a bunch of raw jobs from the filesystem and turned them into something that fits our needs. An imperative approach might have you start by writing a for loop and figuring out how to modify the whole list in place. Functional programming takes a different approach by separating the actual modification (removing a null value, adding a new field) from the way you *apply* that transformation (by filtering or by mapping). We start by thinking at the level of an individual element and what we want to do to that element, which lets us figure out whether we need `filter`, `map`, `partition`, and so on. This approach lets us focus on the “meat” of the problem.
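As a closing aside, the same pipeline reads top to bottom with Clojure’s `->>` threading macro; this is just another way of writing the `prepare-jobs` above:

(defn prepare-jobs
  [jobs]
  (->> jobs
       (remove #(nil? (:location %)))
       (map assoc-truncated-desc)))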


Immutability and purity are what make this approach work so well. A job is never changed, so we can keep mucking about with it until we have what we want. A transformation does not rely on any state, so we can make sure it works in isolation before even thinking about collections and loops. Functional programming helps us focus on solving the problem at hand by isolating unrelated concerns so we can, so to speak, “reason” about it more easily.