Forceps: import models from remote databases

2014-02-4

I recently released a gem called forceps. It lets you copy data from remote databases using Active Record. It addresses a problem I have found many times: importing data selectively from a production database into your local database in order to play with it safely. In this post I would like to describe how the library works internally. You can check its usage in the README.

The idea

Active Record lets you change the database connection on a per-model basis using the method .establish_connection. Forceps takes each child of ActiveRecord::Base and generates a child class with the same name in the namespace Forceps::Remote. These remote classes also include a method #copy_to_local that copies the record and all its associated models automatically.

The main reason for managing remote Active Record classes is that I wanted to use Active Record's reflection and querying support for discovering associations and attributes. A nice side effect is that the library lets you explore remote databases from your local scripts with ease.
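
For example, once a 'remote' entry is configured in database.yml, a local script can browse and import remote records roughly like this (a sketch based on the README usage; the model names are just examples):

Forceps.configure

# Remote classes mirror the local ones but use the remote connection,
# so regular Active Record querying works against the remote database.
user = Forceps::Remote::User.where(email: 'jorge@example.com').first
user.posts.count

# Copy the record and its associated objects into the local database.
user.copy_to_local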

Defining remote classes and remote associations

The definition of the child model classes with the remote connection is shown below:

def declare_remote_model_class(klass)
  class_name = remote_class_name_for(klass.name)
  new_class = build_new_remote_class(klass, class_name)
  Forceps::Remote.const_set(class_name, new_class)
  remote_class_for(class_name).establish_connection 'remote'
end

def build_new_remote_class(local_class, class_name)
  Class.new(local_class) do
    ...
    include Forceps::ActsAsCopyableModel
    ...
  end
end

With this definition, remote classes let you manipulate isolated remote objects, but the inherited associations still point to their local counterparts. I solved this problem by cloning each association and changing its internal class attribute to make it point to the proper remote class.

def reference_remote_class_in_normal_association(association, remote_model_class)
  related_remote_class = remote_class_for(association.klass.name)

  cloned_association = association.dup
  cloned_association.instance_variable_set("@klass", related_remote_class)

  cloned_reflections = remote_model_class.reflections.dup
  cloned_reflections[cloned_association.name.to_sym] = cloned_association
  remote_model_class.reflections = cloned_reflections
end

Cloning trees of active record models

For copying simple attributes I ended up invoking each setter directly. I intended to do it with mass assignment, but disabling its protection in Rails 3 is pretty tricky, as it can be enabled in multiple ways. Rails 4 moved mass-assignment protection to the controllers, but I wanted forceps to support both versions.

def copy_attributes(target_object, attributes_map)
  attributes_map.each do |attribute_name, attribute_value|
    target_object.send("#{attribute_name}=", attribute_value)
  end
end

Cloning associations is done by fetching all the associations of each model class with .reflect_on_all_associations, and then copying the associated objects depending on their cardinality. For example, this method copies a has_many association:

def copy_associated_objects_in_has_many(local_object, remote_object, association_name)
  remote_object.send(association_name).find_each do |remote_associated_object|
    local_object.send(association_name) << copy(remote_associated_object)
  end
end

It uses a cache internally to avoid copying objects more than once.
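
A minimal sketch of that idea (not the actual Forceps code): keep a registry keyed by class and id, and reuse the already-copied record when the same object is reached again through another association.

# Simplified sketch; perform_copy is a hypothetical helper that does the real work.
def copy(remote_object)
  cache_key = [remote_object.class.name, remote_object.id]
  copied_records[cache_key] ||= perform_copy(remote_object)
end

def copied_records
  @copied_records ||= {}
end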

Handling STI and polymorphic associations

Supporting Single Table Inheritance and polymorphic associations turned out to be one of the most challenging parts. Both features rely on a type column containing the name of the model class to instantiate. This column is referenced in multiple places in the Rails codebase, such as in join queries or when instantiating records.

For example, when instantiating objects from queries, Rails uses the hash of attributes obtained from the database. In order to rewrite the type column, that method is overridden in the remote classes:

Class.new(local_class) do
  ...

  if Rails::VERSION::MAJOR >= 4
    def self.instantiate(record, column_types = {})
      __make_sti_column_point_to_forceps_remote_class(record)
      super
    end
  else
    def self.instantiate(record)
      __make_sti_column_point_to_forceps_remote_class(record)
      super
    end
  end

  def self.__make_sti_column_point_to_forceps_remote_class(record)
    if record[inheritance_column].present?
      record[inheritance_column] = "Forceps::Remote::#{record[inheritance_column]}"
    end
  end

  ...
end

Testing against multiple Rails versions

Testing against multiple Rails versions was far easier than I expected. I used this approach by Richard Schneeman: using an environment variable to configure the Rails version in the .gemspec file:

if ENV['RAILS_VERSION']
  s.add_dependency "rails", "~> #{ENV['RAILS_VERSION']}"
else
  s.add_dependency "rails", "> 3.2.0"
end

And then set the target versions in .travis.yml:

env:
  - "RAILS_VERSION=3.2.16"
  - "RAILS_VERSION=4.0.2"

The awesomeness of Travis will do the rest.

Conclusions

A thing I loved about this project is that I started with a very simple idea without knowing if it was going to work with real-life complex models. I just wrote a very simple test and handled more and more cases incrementally. It ended up being more complex than I expected but it is still a pretty compact library thanks to the wonders of Ruby, metaprogramming and Active Record.

The code for Forceps is available on GitHub. Pull requests are welcome.


Mailgun adapter for Action Mailer

2013-09-4

At Bandzoogle we wanted to use Mailgun as our provider for sending emails. Mailgun offers two interfaces: SMTP and an HTTP API. If you want to use Mailgun in a Rails app via Action Mailer, the only option available out of the box is the Action Mailer SMTP adapter.

The SMTP approach works great unless you want to send mail in batches. For sending emails in batches with Mailgun you must use recipient variables, which are substitutions you want to make for each recipient. They indicate to Mailgun that each message must be individualized, and they prevent the to field from containing the full list of recipients.

Providing these recipient variables via SMTP requires you to use a very specific MIME format for wrapping the message. Doing that with Rails is not trivial, as it requires you to use the internal objects of Action Mailer to build the custom MIME message. I didn’t manage to make it work, even replicating the documented format, and Mailgun support couldn’t help me with this approach in Rails.

So I decided to go with the HTTP API, which is much better documented and much easier to use for setting recipient variables. I created an adapter named mailgun_rails:

  • It lets you use plain Action Mailer for sending emails with Mailgun.
  • It supports Mailgun specific features like recipient variables or custom variables that will be received via Mailgun webhooks.
  • It supports sending HTML messages, text messages or both, depending on how the Action Mailer message is composed.

An example of usage:

email = mail from: 'jorge@email.com',
             to: ['user_1@email.com', 'user_2@email.com'],
             subject: 'Hey, this is a test email'

email.mailgun_variables = {name_1: :value_1, name_2: :value_2}
email.mailgun_recipient_variables = {'user_1@email.com' => {id: 1}, 'user_2@email.com' => {id: 2}}
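
Before that, the adapter has to be plugged in as the Action Mailer delivery method, along these lines (the API key and domain are placeholders):

# config/environments/production.rb
config.action_mailer.delivery_method = :mailgun
config.action_mailer.mailgun_settings = {
  api_key: '<mailgun api key>',
  domain:  '<mailgun domain>'
}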

You can read more about the adapter on GitHub.


On better information habits

2013-08-20

I have recently read The Information Diet, a book about information consumption. It makes an analogy with food that works pretty well: in the same way that healthy eating habits are good and necessary in a world full of food choices, good information habits are necessary in a world where information is everywhere.

The Information Diet book cover

The book contains great pieces of advice. Some ideas I loved are:

  • Information overload is a flawed concept. Information doesn’t require you to consume it.

  • There has always been more human knowledge than any human could consume. The problem is that today it is all at your fingertips.

  • Information should be consumed consciously. This implies consuming exactly what you want to consume when you want to consume it. The author even recommends scheduling your information consumption.

I have reviewed many times the approach I use for consuming technical information. In fact, I bought this book because I wasn’t quite happy with the system I had in place:

  • I was receiving too much information

  • I was exposed to a lot of noise

Too much information

I have always tried to be selective with the sources I subscribe to. But at the same time, I wasn’t comfortable with the idea of missing interesting stuff. The result is that I was subscribed to a lot of sites that published great content. This made it difficult to read all the new content periodically, which kind of stressed me out and prevented me from enjoying and getting the most out of the things I read.

So I decided to drastically reduce my number of subscriptions, from 24 down to 5.

But more important than the specific sites themselves, which for sure will change with time, is that I clarified what I wanted from now on:

  • Keep a really small group of subscriptions.
  • Carefully crafted publications, whose authors work hard to produce and select high-quality content.
  • Predictable and slow cadence (weekly or monthly)

A nice side effect of these specific publications is that I don’t need an RSS reader anymore. Plain old email and a filter to forward the messages to my Pocket account do the job perfectly.

In addition to these subscriptions, I love watching random screencasts from Peepcode, CleanCoders and Railscasts.

Finally, I think that technical books should form the basis of my education. Reading books requires more time and effort than reading articles or watching screencasts, so freeing up time for them was very important to me.

Not surprisingly, with the exception of the weekly newsletters, the content I consume is not free. To me this is logical. Producing high-quality content is incredibly hard. When a publisher depends on ads, it can easily end up favoring volume and frequency, two things I really want to avoid.

Too much noise

I call noise the pieces of information I spend time consuming that are irrelevant to me. It took me a long time to realize I had this problem, and even more time to take measures against it. The source of my noise was basically Twitter.

Despite all my efforts, I have failed to make Twitter a valuable information-consumption tool for me. In the end, I realized that the problem was that I was trying to use Twitter for something it was never intended to be.

One of my favorite parts of the book is when the author synthesizes his research on neuroscience applied to the way we consume information. At some point it talks about a neurotransmitter called dopamine, which is at the heart of the brain’s stimulus reinforcement. In the words of the author:

Dopamine makes us seek, which causes us to receive more dopamine, which causes us to seek more.

For our species, dopamine is good: it helped us acquire knowledge and innovate. But with the abundance of information and the multiple notifications in place (emails, mobile phones, social networks, blogs…), dopamine works against us, putting us in a loop where we can’t focus on the task at hand.

The book doesn’t mention Twitter specifically, but when I learned about dopamine I realized Twitter was a perfect dopamine generator. So I wasn’t surprised to find articles like Twitter as the ultimate dopamine dispensary. I wasn’t quite happy with Twitter before learning about dopamine, but learning about it was enlightening.

I found myself reading about political opinions, hobbies, great and not-so-great technical articles, quotes, thoughts, discussions, travels, news, jokes, rants, food, cool apps… Reading tweets that made me curious and click the associated link or google for more. Things that most of the time were entertaining and, sometimes, highly educational. But they didn’t pass the filter of ‘high-quality and relevant to my goals only’ that I wanted to use from now on. The dose of narcissism present in Twitter (as in any other social network) doesn’t help either.

So, one month ago, I basically removed Twitter from my daily life. The only measure I took was removing my personal account from the desktop app (I will use the web app if I need to). I still have the app on my phone, as it’s a good time killer in some circumstances, but I am really trying to keep those circumstances rare. For example, they don’t include reading before going to sleep or pauses when working, which were two typical scenarios for me. So far, I am really happy with the new silence gained by turning Twitter off.

Conclusions

The idea of selecting good content to consume is pretty obvious. But in my experience, more important than selecting the best content is the act of selecting a small group (of great content, of course). I want to manage a volume of publications I can handle effectively with ease; the system won’t work for me otherwise, even when that implies leaving a lot of great content out.

I have never been a fan of social networks of any kind. Twitter was different in my mind. In fact, my first tweet, in March 2010, was a question about whether Twitter would be useful as a technical information source. It took me more than 3 years to answer it. To me it isn’t, because the noise of the channel is too high for my taste.


Why I love RubyMine

2012-12-29

I love RubyMine. I have been using it for a year and a half as my main Rails/Javascript editor.

Before RubyMine I used TextMate. I never felt completely satisfied with it, but I liked its minimalistic approach, the rich suite of available plugins and how it looked and felt in general. At some point I decided to learn Vim, motivated by the good press it had among Rails developers. It never really clicked for me. Then I tried RubyMine and felt that I had finally found something that matched my taste and needs.

Things I like about RubyMine

Refactorings

There are 2 things I keep doing when I code:

  • Renaming things
  • Extracting blocks of code as new methods or variables

When I code I am continuously renaming stuff and extracting methods and variables until the code looks clean to me. And I feel much more productive when I can have these refactorings performed automatically. RubyMine offers excellent support for them in both Ruby and Javascript.

Everything is navigable

RubyMine lets you navigate to any symbol by pressing CMD-B (or CMD click with the mouse). This includes Ruby gems in your Gemfile or JavaScript functions included in any file of your project.

I use this feature all the time. I missed it badly when using TextMate, and never felt totally comfortable using it in Vim via CTAGS support.

Keyboard friendly

RubyMine offers excellent keyboard support:

  • Most commands have predefined shortcuts and they appear in the menus so learning them is easy.

  • It has a nice editor with a search box for defining shortcuts for whatever action the IDE offers.

  • The whole environment is designed to be used with the keyboard. For example, in modal dialogs, CTRL-ENTER will perform the default action and close the dialog (such as commit in Git), and ALT-SHIFT-ENTER will open the alternative options so you can choose another one and close the dialog (such as commit and push in Git).

Selection in scope

When you press CTRL-W in RubyMine you get the word under the cursor selected (as in TextMate). Press it again and the selection will expand to include that word’s scope, such as the enclosing string or the method invocation it is part of. Consecutive keystrokes will expand the selection intelligently (enclosing block, method, class, etc.). I use this feature all the time.

Standard, discoverable environment

I appreciate having standard menus with actions and toolbars, having shortcuts displayed in menus and tooltips, having tabs, browsable trees and sheets of properties, having contextual menus properly displayed when I right click on something, and so on…

It is true that once you master the environment you rarely move your hands off the keyboard. But I think a nice environment benefits everyone: both power users and newcomers.

Search

RubyMine offers many search options and these are beautifully executed. Coming from TextMate this was another major relief for me:

  • Search in the current file is incremental and highlights matches as you type the search expression, even when you use regular expressions.

  • Global search lets you easily filter by file patterns and folders. It also lets you move through the matches using your keyboard (CMD-ALT-UP/DOWN), invalidating the results in real time if you modify the code.

Split views

Sometimes I like to have both the code and its test displayed in parallel. Splitting views is a feature TextMate users have been demanding for a long time (I think it was finally included in TextMate 2).

RubyMine lets you split the current editor in independent views and keep a set of tabs opened in each view.

Consistency out of the box

This feature is probably the most difficult to describe, but is also one of the most appealing to me. In RubyMine things work in a consistent way. For example:

  • If you have a list of search matches, you can browse them by pressing CMD-ALT+UP/DOWN. If you have a list of failing tests, you can browse them the same way.

  • If you place the cursor inside a hash entry and press ALT-ENTER you are offered the possibility of converting between => and : notations. Do the same inside a block, and you can convert automatically between the braces or do/end syntax. Do it inside a string to convert it to a symbol.

  • Error highlighting, code formatting, refactorings… they all work the same way whether you are editing Ruby, JavaScript, CSS or Cucumber. I remember, in my TextMate days, not being able to pretty-format Ruby while I could do it with JavaScript, or having error highlighting in JavaScript and CSS but not in Ruby.

I appreciate the possibility of configuring and extending your editor but I like things working out of the box. And I think that RubyMine is exactly that: an extensible environment that works out of the box. RubyMine itself is a set of plugins running on a JetBrains runtime. And all of these plugins present a high level of quality in terms of usability and IDE integration.

Integrated tools

My initial approach with RubyMine was to use it as an editor only, relying on the terminal for the rest of the work (rails server, git, tests, rake tasks…). Eventually I started to use RubyMine’s built-in tools more and more, and found some of them very useful:

  • Running your server or rake tasks within RubyMine lets you click on exception traces and jump to the corresponding line in the source.
  • RubyMine built-in test runner lets you browse failing tests, listen for changes to re-run tests automatically and filter the output for each test individually.
  • Git support is excellent: it lets you commit, pull and merge changes without leaving RubyMine and without moving your hands off the keyboard.

When I am coding, I keep my RubyMine minimal. I only have the editor visible, without toolbars or other windows. But I use many of the built-in tools when I need them, and I have ended up liking them a lot.

Many other nice things to have

There are many other nice features of RubyMine I use everyday:

  • Clipboard history
  • Automatic completion
  • Live templates (a.k.a snippets)
  • Underline syntax errors in real time in both Javascript and Ruby
  • Possibility of configuring automatic spell checking for comments but not for code
  • Search for usages and references
  • Excellent Cucumber support, including generating steps from missing invocations or navigating to steps from their usages in features
  • See the list of last opened files (CMD-E)
  • Navigate editing history back and forward (CMD-ALT-RIGHT/LEFT)

Major flaw of plain editors

Beyond the ‘work out of the box’ factor, I find another source of pain with plain editors: as long as they don’t manage a proper model of the edited code, the kind of assistance they can provide falls short of my expectations.

I think most of the pain I felt when using TextMate, Vim or SublimeText 2 (my current plain Editor) came from this problem. You can use plugins and external tools to circumvent this limitation (such as CTAGS), but the experience is just not the same.

If the editor doesn’t know that the word under your cursor is a local variable defined as a parameter of a method, it won’t be able to rename it automatically, browse to its definition or include it as a parameter in a new method when you extract the block of code where it appears. I value syntactical tricks and powerful combos, but for my editing happiness I need to get this basic stuff right first.

Conclusion

I don’t think there is such a thing as the best editor for everyone. RubyMine is the best Ruby/Javascript editor for me, and I wanted to share my reasons, since I feel that this IDE doesn’t receive as much praise as it deserves.

I don’t think RubyMine is perfect either. My major complaint is the lack of better editor themes: RubyMine’s built-in themes and community themes don’t feel as good and polished as those in TextMate or SublimeText. And of course, RubyMine is heavy in terms of both CPU and memory consumption. They have improved performance a lot in recent releases, but it will always feel heavier than any simple editor.


Continuous delivery with Jenkins and Heroku

2012-07-11

Some time ago I read Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, by Jez Humble. It is a great book I totally recommend. In this post, I would like to talk about our approach for implementing its principles using Jenkins and Heroku in zendone.

Continuous delivery extends the ideas of continuous integration by introducing the pattern of a deployment pipeline, which according to the book is:

An automated implementation of your application’s build, deploy, test and release process.

Automated builds are a key aspect of continuous integration, and deployment pipelines elaborate on the concept of staged builds: chained, automated build phases, with the application deployed in the last step if all the previous stages complete successfully.

The deployment pipeline

Each time a change is detected in the source code repository the deployment pipeline starts:

  • The commit stage runs the unit tests and may run other processes, like analysis of coding standards or code coverage. It asserts that the system works at the technical level.

  • The acceptance stage runs the tests that exercise the system as a whole, asserting that it works at the functional and nonfunctional level.

  • The manual testing stage tries to detect defects not caught by the automated tests and checks that the system fulfills its requirements. It typically includes exploratory testing and user acceptance testing.

  • The release stage delivers the system to its users. This may include distributing the packaged software or, in the case of web applications, deploying the application to the production servers.

As the build advances, the stages become slower but the confidence in the readiness of the build increases. For example, unit tests are fast but they won’t detect integration bugs, while acceptance tests are slow but they check all the components assembled and working together.

The book explains in detail the deployment pipeline pattern and it also reviews many technical solutions for implementing it.

So we wanted to apply these ideas to the way we develop zendone. Our ingredients are:

  • A Rails backend and a Javascript-heavy web client.
  • Heroku as our hosting platform.
  • Jenkins as our CI server.

Pipeline design

We designed a pipeline based on the general structure proposed in the book and implemented it with the following build steps:

  1. Commit stage
    • Start
    • Server unit tests: RSpec specs
    • Javascript client unit tests: Jasmine specs
  2. Acceptance stage (Cucumber tests)
    • Functional Core
    • Functional Evernote
    • Functional Calendar
  3. Manual testing stage
    • Release to staging environment
  4. Release stage
    • Release to production environment

In order to advance between stages, all the corresponding tasks must be properly executed. For example, the build won’t be deployed to our staging environment until all our acceptance tests have passed.

The deployment pipeline in zendone

Commit stage

When a change is detected in our GitHub repository the build starts. It first prepares the environment to run the build: it installs all the gems needed, resets the database, cleans temporary directories, etc.

After everything is set up, it automatically labels the code in GitHub with the build version, which includes the build number extracted from Jenkins. That allows us to trace each build back to the specific code it executed.

The build then runs the suite of RSpec tests for the Rails server and also the suite of Jasmine tests for the Javascript client. Although we don’t pay too much attention to them, we also collect RCov code-coverage metrics for our RSpec tests.

Acceptance stage

Once the unit tests pass, the build enters into the acceptance stage. We use Cucumber with Capybara and Selenium web driver for building our suite of functional tests.

A serious problem when you run your tests in a real browser with WebDriver is that they are very slow. We mitigated this problem by:

  • Parallelizing. We parallelized the execution of our suite using parallel_tests. We are currently running our Cucumber features with 5 browsers in parallel.
  • Slicing. We divided our big suite of tests into 3 big categories: Core, Calendar and Evernote, so we have smaller builds that fail earlier and let us test specific parts of our app more easily.

But even with these measures, running functional tests in a browser is very slow: our full suite takes about 2 hours to run. Still, we believe it is a very valuable asset for us. When it passes, we are pretty confident that everything will work right in the build. That level of confidence is priceless, especially when you are a small team like we are.

There are different opinions in the community about the value of real integration tests. Some people, like Uncle Bob, argue that you should focus on testing the boundaries the UI uses for speaking with the system, which lets you keep your acceptance tests fast. I think that, in the case of web apps with heavy Javascript clients, real tests add huge value: being able to specify what the user expects and how the system should respond and, with those specifications, testing all the layers of your application (web UI, Javascript, persistence, external systems…). In our experience with zendone, these tests have prevented us from introducing bugs on countless occasions (despite having a good unit suite for both the client and the server components).

Another discussion is whether to use Cucumber or not. You can perfectly well use Capybara for writing integration tests without Cucumber (for example, combined with RSpec). Cucumber certainly adds another abstraction layer that some people consider unneeded. My opinion is that I like how Cucumber makes you focus on describing how your system should behave. But the important thing is having a good acceptance suite executed in a real browser, whatever technology you use to implement it.

Release to staging

If all the acceptance tests pass, the application is automatically deployed to our staging environment. This environment is identical to our production environment.

We typically run some manual tests in the staging environment just to be certain that the latest additions are working right, although in many minor builds we skip this phase.

We decided that we didn’t want the application automatically deployed to production when all the automated tests pass. We just considered it too risky. We prefer to keep a manual trigger for deploying to production.

Release to production

When we want to release to production we launch the ‘Release to production’ task.

While we back up our data every hour, we also do a backup just before releasing to production, in case we have to revert to the previous build if something goes wrong. Heroku makes it very easy to revert your code to a previous version thanks to its releases system. But if you have changed the database schema in a way that can’t be reverted easily (e.g., transforming data), quickly restoring a previous backup can be very useful.

Fully automated configuration management

One idea that really changed my mind is how the book insists on fully automating every single configuration management step of your project. At some point it mentions that any person who is given a new workstation should be able to run a command and have the development environment ready to work. While we don’t have that level of sophistication, we have tried hard to apply this principle in zendone.

This is the kind of practice where the most challenging part is having the discipline to implement it. It certainly requires a lot of work, but it is not too difficult from a technical point of view.

We have tried to follow this philosophy since we started with zendone:

  • The deployment process, from the moment a code change is committed, is fully automated. The only manual step is the release to production, which only requires pressing a button in our Jenkins server.

  • We have created tasks for automating most of the other configuration tasks we perform.

Heroku makes it very easy to automate things, since all the operations can be executed via shell commands, without human intervention. We manage a number of Heroku instances for zendone. In addition to our staging and production instances, we use development instances for early testing of development branches, for Android and iPhone development, etc. We use the heroku_san gem, which lets you easily manage many Heroku instances for the same application.

For example, zendone requires a number of config vars defined in the server to run. Since the process of defining these vars is tedious and repetitive, we created a task for preparing each Heroku instance. For example, if we needed to recreate an instance for Android, we would run rake heroku:prepare:zendone-android.
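
A hypothetical sketch of such a task (the app name and vars are made up); it simply pushes a known set of config vars to the given Heroku app:

# lib/tasks/heroku.rake -- sketch only, names and values are illustrative
namespace :heroku do
  namespace :prepare do
    desc 'Define the config vars the zendone-android instance needs'
    task :'zendone-android' do
      config_vars = {
        'S3_BUCKET'     => 'zendone-android',
        'MAIL_PROVIDER' => 'mailgun'
      }
      args = config_vars.map { |name, value| "#{name}=#{value}" }.join(' ')
      sh "heroku config:add #{args} --app zendone-android"
    end
  end
end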

Implementing the pipeline with Jenkins

We use Jenkins (formerly known as Hudson) as our Continuous Integration server. We are using the following plugins:

  • Rake for launching rake tasks.
  • Git for monitoring our private GitHub repository and triggering the build when a change is detected.
  • Chuck Norris, which will remind you of facts like “Chuck Norris solved the Travelling Salesman problem in O(1) time” and will show an angry or happy Chuck depending on how the build finished.

We have also experimented with plugins like Join, for implementing join conditions in your pipeline, and Build Pipeline, which offers a graphical representation of your pipeline, but in the end we realized that we could implement a simple, sequential pipeline with what Jenkins already offers:

  • For specifying the pipeline flow we use the built-in system for configuring downstream projects: projects whose execution is scheduled when an upstream build finishes.
  • Although by default each project has its own workspace directory in Jenkins, we configured the same directory for all the projects in our pipeline. This way all the stages are executed on the same code that is pulled by the initial Start project.

The deployment pipeline

Each Jenkins step invokes a script we keep under version control with our code. With Jenkins, it is easy to end up with complex scripts hard-coded in the configuration of each project. This is dangerous and will prevent you from invoking your pipeline steps in other environments. For example, when our Start project is executed, this is the shell script it runs.

There is a commercial CI server called ‘Go’ from Thoughtworks. It looks fantastic, although I haven’t tried it. Jez Humble is behind it so I am sure it offers complete support for implementing build pipelines.

Some caveats

While we tried to apply the principles described in the book, there are a number of things we haven’t solved (yet).

Maybe the most important one is that we are not running our acceptance tests on a production-like instance. While Heroku makes it easy to create identical instances, it doesn’t offer a mechanism for running Cucumber tests that drive a real browser. So we just run our acceptance tests on our integration server, a Mac Pro running Mac OS. We haven’t had any issues because of this, but it would be much more suitable to use an identical instance for running our acceptance tests. Our staging environment, where we run our manual tests, is identical to our production one.

Another problem is that we don’t maintain each build as a physical asset with its own lifetime. This means we can’t have build version A running the acceptance tests while build version B is deployed to staging, ready to be tested manually. All the builds share the same workspace, so there is only one pipeline instance in execution at a given time. We mitigate this by labeling each build in our GitHub repo when it starts, so the code executed in each build is traceable. Jenkins doesn’t offer direct support for this. We could probably manage to make it work this way, but we haven’t had the need yet.

Finally, the book recommends including a stage for testing nonfunctional requirements, such as security and capacity. We haven’t implemented automated capacity tests (for example, testing that the response time of the system stays below some threshold under a given load), but we have included security tests as part of our acceptance suite.

Conclusions

I think deployment pipelines are a great idea. Of course, they are not free: implementing their principles and automating things takes a lot of time and discipline. That is why I consider it important to do it early in the development. Your pipeline will evolve as your project does, so the earlier you start with it the better.

The culture of automating all your configuration management steps is also a great principle, as is keeping all the configuration assets under version control. As I said, I think this is a matter of discipline more than a technical challenge. Automating configuration management is not the most fun thing to do, but it starts paying off as soon as you do it.

Finally, I think the issue of writing good acceptance tests with Cucumber and Capybara deserves a post of its own. I see a couple of problems with them: they are slow and they can be fragile (although experience will mitigate the latter). Still, I consider them one of the best assets we have. zendone is already quite a big piece of software, and we can add features and change things aggressively, confident that it will work as long as our acceptance suite is green.


solid_assert: A simple Ruby assertion utility

2011-09-19

I have published a tiny Ruby gem implementing an assertion utility: solid_assert.

When I started with Ruby I searched for an assert utility. My brother taught me about assertions a long time ago and I have used them ever since. For some reason, assertions are not widely used in Ruby: most rubyists and most famous Ruby libraries just don’t use them. I found the same thing with dependency injection frameworks. But while, after some time working with Ruby, I was convinced that I didn’t need a dependency injection framework, I still missed assertions when programming in Ruby.

Motivation

The motivation for assertions is very well synthesized in tip 33 of The Pragmatic Programmer:

If it can’t happen, use assertions to ensure that it won’t

The premise of an assertion utility is very simple: being able to include tests for your assumptions inside your code. This way, it is the program itself that verifies its own integrity. In the same way that it is a good practice to use properly named and composed methods so you don’t have to document with comments what some code is doing, it is a good practice to formally code the assumptions you make about your code, and have those assumptions tested automatically.

You may think that regular tests already cover code integrity. To me, good test suites and assertions are totally complementary. I use tests, but I still want my code to raise a nice NoMethodError for NilClass when I try to send a message to a nil reference. In the same way, you can write tests and also build integrity checking into your code using assertions.

An example

Let me show you an example. Imagine you don’t know that Rails already includes an OrderedHash class and you decide to implement your own.

A simple approach (in fact, the one used by Rails’ OrderedHash) would be extending the Ruby Hash class, using an array of keys for storing the keys in order and delegating hash-related behaviour to its parent class.

For example, we could implement the []= method for setting values in the hash in the following way:

class OrderedHash < ::Hash
    ...
    def []=(key, value)
        @keys << key if !has_key?(key)
        super
    end
    ...
end

The method stores the key in the array and then invokes the parent’s behaviour. At any point, the keys array and the hash should contain the same number of elements. That is a class invariant, and assert lets you express it:

def []=(key, value)
    @keys << key if !has_key?(key)
    super
    assert @keys.size == self.size, "#{@keys.size} elements in the list and #{self.size} entries in the hash?"
end

solid_assert also includes an invariant method that lets you express more complex checks using a block:

invariant do
    one_variable = calculate_some_value
    other_variable = calculate_some_other_value
    one_variable > other_variable
end

Assertions can be disabled. In fact, both assert and invariant are empty methods when you use the lib. You can enable them with SolidAssert.enable_assertions. This lets you deactivate them in production if you are concerned about their performance impact.
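
For example, in a Rails app you could enable them everywhere except production with something like this in an initializer (a sketch, not part of the gem):

# config/initializers/solid_assert.rb
SolidAssert.enable_assertions unless Rails.env.production?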

References

All the references to assertions you find in books refer directly or indirectly to Bertrand Meyer’s design by contract proposal:

  • The Pragmatic Programmer: From Journeyman to Master. It contains a section dedicated to design by contract and a short one dedicated to assertive programming. It says that you can use assertions to partially emulate design by contract. These sections are in the chapter Pragmatic Paranoia, where the authors argue that:

    But Pragmatic Programmers take this a step further. They don’t trust themselves, either.

    By the way, they recommend leaving assertions on in production, eloquently saying that turning them off is “like crossing a high wire without a net because you once made it across in practice”.

  • Code Complete. It dedicates a section of the chapter Defensive Programming to assertions. It recommends using assertions to document and verify preconditions and postconditions in your software.

  • Writing Solid Code. This was my first reading about assertions and the one that pays the most attention to them. In the chapter Assert Yourself it defines assertions, explains their motivation and shows when to use them with many C samples. A great book on good coding practices in general.

  • Programming with Assertions. Although it was published to explain the assert keyword introduced in Java 1.4, it explains the underlying concepts very well and shows many examples of their use.

Conclusion

In my experience, assertions used reasonably are a good thing: they make your code more solid. You don’t have to check every invariant, or try to validate all the preconditions and postconditions of every method you write. Just use them when you find yourself saying “at this point this should be verified…” and you will enjoy their benefits in the form of more robust code and fewer surprises.

I also think that asserts are suitable for any programming language. Of course, with lower-level languages like C their use is even more advisable, because errors are more obscure and their consequences are harder to debug. But nothing prevents higher-level languages from benefiting from them.


Evernicious: A tool for importing del.icio.us bookmarks into Evernote

2011-01-2

Some months ago, following my brother’s advice, I replaced del.icio.us with Evernote as my web bookmarking system. At the time I had over 2000 del.icio.us bookmarks. Since Evernote didn’t offer any kind of facility for importing bookmarks from del.icio.us, and coding a converter seemed quite easy, I started writing one, but never took the time to finish it.

It was the recent rumors regarding del.icio.us’ future that made me decide to finish it. The result is a tool amazingly named Evernicious. The source code, installation and usage instructions can be found at the Evernicious GitHub page.

There are many other solutions already available for importing del.icio.us bookmarks into Evernote. Just check the comments at this Evernote post (and I mean the comments, the proposed way in the post is simply wrong in my opinion).

In my current configuration, I have a public Evernote Notebook with my bookmarks. I use the Chrome Clip to Evernote extension for capturing web pages into this Notebook.


Using delayed_job with Cucumber

2010-09-1

Delayed_job is a great Ruby solution for executing jobs asynchronously. It is intended to be run in the background, dispatching jobs that are persisted in a table. If you are using Cucumber, you have to consider how the dispatching process is launched when your features are executed.

My first attempt after googling for this question was to create a custom Cucumber step that launched the execution of the jobs.

Given /^Jobs are being dispatched$/ do
  Delayed::Worker.new.work_off
end

In this approach we have a specific Cucumber step for indicating when we want to dispatch jobs. This step will be executed synchronously in the same Cucumber thread, so you have to invoke it after some step has introduced a new job in the queue and before the verification steps:

When I perform some action (that makes the server to create a new job)
  And Jobs are being dispatched
Then I should see the expected results

I think this approach is not very convenient:

  • Cucumber is intended to be used for writing integration tests. Tests that describe your application from the point of view of its users. Ideally, they should only manipulate the application inputs and verify its outputs through the UI. A user of your application will never need to know you are using a job dispatcher in your server.

  • While controlling the exact (and synchronous) execution of jobs makes writing tests easier, it doesn’t represent the temporal randomness which is in the very nature of an asynchronous job dispatcher. In my opinion, it is good that Cucumber features verify that this randomness is correctly handled (within some controlled limits).

I think a better approach is launching the jobs in the background, simulating the normal execution environment of your application. The idea is very simple: the job worker is started before each Cucumber scenario and is stopped after it. Cucumber tags represent a good choice for implementing these hooks. In this way, you can easily activate delayed_job only for the scenarios that need it.

When implementing this approach, I ran into a lot of problems providing a proper RAILS_ENV=cucumber to the delayed_job command. In fact, I wasn’t able to make it work by launching the command script/delayed_job start from a Cucumber step: RAILS_ENV was simply ignored. What I finally did was execute the rake task directly.

Before('@background-jobs') do
  system "/usr/bin/env RAILS_ENV=cucumber rake jobs:work &"
end

For stopping the jobs I had the same RAILS_ENV issue using script/delayed_job stop. I ended up killing the job processes with a grep-and-kill command.

After('@background-jobs') do
  system "ps -ef | grep 'rake jobs:work' | grep -v grep | awk '{print $2}' | xargs kill -9"
end

Using this approach you can get rid of specific steps for delayed_job. Instead, you just have to tag the features/scenarios that need it with @background-jobs.

In conclusion, I think that running the jobs in the background is the better approach in general terms. I would only use the synchronous work_off approach for special cases.


Dependency injection and other Java necessary evils

2010-05-30

Lately I often find myself thinking about how much I have changed my mind about Java. For a long time I was interested in the Java platform and I tried to stay educated on its good practices, patterns and trends. Today, I can say I am not interested in the Java platform anymore. In the process, I have learned the wonders of Ruby, Rails, true Behavior-Driven Development and a community full of brilliant developers who consider that beautiful code is a primary goal and that software development has to be fun.

In this post I would like to talk about three recurring Java features I have changed my mind about in recent times: I have gone from loving them to considering them necessary evils you have to live with.

  • Dependency injection
  • Strong type system
  • Object disorientation

Dependency injection

The dependency injection (DI) pattern is about separation of concerns. The concern of wiring object dependencies is extracted from the objects and centralized in some kind of factory facility that, using configuration information on how objects are wired together, instantiates and configures these objects for you. This pattern is at the core of Spring and is implemented by Google Guice.

So why are DI solutions so important in Java? I think the main reason is that they are the only way to enable good unit testing. Sure, there are other benefits, like minimizing coupling between objects or avoiding writing the same wiring boilerplate code again and again, but these are not as important as testing. While you will hardly ever find yourself needing to swap a JDBC DAO for a JPA one, you will always need to mock dependencies when writing unit tests. And to do so, you need to isolate those dependencies first.

And the Java new operator makes this task very difficult. It is just so rigid. You don’t have any mechanism for faking it once it is coded within your objects. The alternative usually implies implementing a full ecosystem of interfaces, their implementations enabling the dependency injection (in the form of constructors or setter methods), and factories for instantiating the configured objects. And since the boilerplate code this approach implies is considerable, you would rather use an external DI library that does the work for you. So external DI solutions are good but, from a testing perspective, they solve a problem created by the Java new operator.

I realized this when I tried to build something serious with Rails. The first thing I did was to look at which dependency-injection solutions were available for Rails. If you look for DI in Rails, there is a good chance you end up reading this article by Jamis Buck. That post summarizes his clarifying experience with DI libraries in Ruby:

  • He first created Copland: a port of Java’s HiveMind.
  • He then created Needle as a more Ruby-like approach to a DI solution.
  • He finally concluded that DI frameworks are unnecessary in Ruby (he talks about DI tools, not about the pattern itself).

In my opinion, the Ruby examples used by Jamis are not the best. I think a key aspect of Ruby is that new is just a class method. This means that classes in Ruby are, by definition, factories, and you can fake them directly when testing. Let me show you an example with a Person class that depends on Mouth, since to say something a person has to open his mouth.

class Mouth
  def open
    "Very complex, slow and sophisticated processing here"
  end
end

class Person
  def say_hi
    Mouth.new.open
  end
end

If you want to test this behavior with RSpec, you would do something like:

describe "Person" do
  it "should open the mouth to say hi" do
    @person = Person.new
    mouth = mock("mouth")
    mouth.should_receive(:open)
    Mouth.should_receive(:new).and_return(mouth)
    @person.say_hi
  end
end

Since the Ruby new method is just a class method, it can be stubbed however you need in order to inject the dependencies. This is a key aspect when testing with Ruby. You get used to it quickly because it is a natural thing from the programmer’s point of view. What is difficult is going back to Java after knowing it, because of the big mess you have to create to achieve the same thing.

Another nice feature of Java DI solutions is that they manage the scope of objects for you. The usual default approach is to create an object each time it is requested, but sometimes you need a singleton scope for some objects. Spring and Guice let you configure the scope in a declarative way and manage it for you. Scope management is a production concern; for testing, you just create the objects you want to test directly.

If you hardcode a singleton object in Java, you have to take care of cleaning its state between test cases, which is laborious and not elegant. DI frameworks offer a very nice solution for this problem. In Ruby, you can still test singleton classes because you can modify the Singleton module to expose a reset method that regenerates the single instance each time. This solution is possible thanks to the dynamic capabilities of Ruby. Again, I think it is nice just to be able to use the singleton object directly.
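
A sketch of that trick using the Singleton module from the standard library (Configuration and the reset helper are made up for the example):

require 'singleton'

class Configuration
  include Singleton
  attr_accessor :api_key
end

# Test-only helper: Singleton.__init__ reinstalls the machinery the module
# added to the class, wiping the memoized instance between test cases.
def reset_singleton(klass)
  Singleton.__init__(klass)
end

reset_singleton(Configuration)
Configuration.instance.api_key # => nil again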

Another approach to the singleton problem is to have a global configuration object as a simple hash whose contents you can override at testing time. Again, Ruby syntax and symbols make something that seems scary if you come from the Java world look nice.
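
A minimal sketch of that approach (the names are made up):

APP_CONFIG = {
  mail_provider: :mailgun,
  uploads_bucket: 'some-production-bucket'
}

# In a test you can simply override an entry for the duration of the example.
APP_CONFIG[:mail_provider] = :test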

Regarding the scope problem, the solution offered by DI frameworks is probably purer. They are also more complete. For example, Guice lets you specify how the injected objects are created through the concept of providers. The thing is that with power comes complexity: having many options is never free. Even if you say “I will only use 10% of Guice”, the rest of the options are still there in the form of documentation and code. For example, they force you to choose which approach to take. And my point is that with a pure Ruby approach, without external tools, you can solve the most common cases in a much simpler way. I suppose it is a matter of taste, but I enjoy that approach much more.

Strong type system

I really don’t intend to talk about the advantages and disadvantages of dynamic and static typing. But I would like to show an example of something that, in my opinion, is wrong. It is related to GWT’s concept of overlay types: a solution for managing JSON data as Java objects with GWT. This approach proposes creating a Java replication of the JSON data with getters/setters for the properties. The Java code has to be mixed with the JavaScript code to be executed via JSNI. An example taken from the GWT page:

class Customer extends JavaScriptObject {

  // Overlay types always have protected, zero-arg constructors
  protected Customer() { }

  // Typically, methods on overlay types are JSNI
  public final native String getFirstName() /*-{ return this.FirstName; }-*/;
  public final native String getLastName()  /*-{ return this.LastName;  }-*/;
}

To me, it is like an implementation of the RY (Repeat Yourself) principle. I mean, do we need strong types that much? If this is the best abstraction of JSON data you can build with Java (and probably it is), then there must be something wrong with the language, because JSON data is only about structures of key/value pairs that can be nested.

Parsing JSON data with Ruby or JavaScript is trivial. The next example shows how one line of JavaScript code can convert a JSON string into a plain JavaScript object (using the JSON parsing utility of jQuery).

var personJSON = '{"name": "Jorge"}';
var person = $.parseJSON(personJSON);
person.name; //Jorge
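
The Ruby version is just as short, using the standard library:

require 'json'

person = JSON.parse('{"name": "Jorge"}')
person['name'] # => "Jorge"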

Object disorientation

The JSON example is a concrete case where dynamic typing just makes more sense. But to me, the essential problem with strong types is that they make it more difficult to represent mental abstractions with code, because every single piece must fit into a very strict puzzle of types. And this means that you are forced to code a lot of things that have nothing to do with your domain, but with the type system of the language.

Take, for example, the following GWT design. In GWT, widgets that trigger click events implement an interface, HasClickHandlers. This interface defines the contract for registering ClickHandler objects, which are responsible for processing click events. This is a materialization of the Passive View pattern: a view without any state, with controllers performing all the work. The idea is to have a very thin view that can be faked in controller tests.

Discussions about the pattern apart, there is no possible human explanation that justifies the existence of HasClickHandlers. And by “human” I am not being sarcastic: from a human point of view, it makes no sense. It makes sense under the constraints imposed by testing tools and, especially, by the Java language itself, where the only way to specify behavior in a modular way is through the addition of interfaces defining the contract.

Object disorientation is what happens when you go from “My view has a button I have to test” to “My view is stupid, I am going to create a HasClickHandlers getter in its interface so I can fake it when testing”. As of today, no technical choice is free of this problem, but in the case of Java I think its effects are just overwhelming.

When considering using GWT for a project I am working on, I was impressed by this presentation by Ray Ryan. I loved the principles and patterns discussed there for architecting the client part of a web application. The good thing about the presentation is that it focused on the concepts. The bad thing is that he showed very little example code.

It was when I saw how an example implementing those principles looked that I decided I was going to study JavaScript hard. And it wasn’t because the authors didn’t do an excellent job with the example, but because I understood I wasn’t going to have fun programming that way. The bad thing is not that implementing the basic GWT “hello world” the proposed way took 600 lines of code (without tests). The bad thing was how ugly everything looked, from top to bottom. And since having fun and beautiful code are two very important things to me, I discarded GWT, studied JavaScript and jQuery hard, and worked a lot on learning how to set up a good testing environment for these technologies and a Rails backend. And I don’t regret at all how things are going. Of course, there are other problems, but those will be for another post.

It is curious. Seven years ago, Eric Evans warned the world about how important it was to focus on the domain in his seminal Domain-Driven Design book. Both design and code should reflect it as much as possible. In all this time, despite the big impact the book had on the Java community and all the discussions surrounding it, there is still no mainstream implementation of many of the concepts he proposed. I wouldn’t say it is only because Java is a poor language for representing the domain in code, but I think it has something to do with it.

Conclusion

I always find it very difficult to explain to other people what I think is wrong with Java. After writing this post, I still think the best way to understand it is to study other platforms and paradigms. In fact, the Java platform nowadays offers a variety of languages, and some of them, like Groovy or Scala, are starting to attract a lot of attention. I still don’t know anything about Scala, but I have used Groovy quite extensively and, although it has the same new operator problem as Java, it is a very nice alternative.


Testing PURE javascript templates from RSpec

2010-05-2

The traditional Rails approach to AJAX has been RJS templates. Following this approach, controllers respond to requests by rendering javascript code that is executed in the page on the fly, the main benefit being that you can use a powerful set of Ruby helpers for specifying the javascript to be generated. Although this technique has shown to be a productive way of adding AJAX to web interfaces, it also has serious drawbacks and has been explicitly discouraged by first-class experts on Rails and Javascript like Yehuda Katz: check these slides or these. It is worth noticing that Rails 3 has redefined its AJAX approach, encouraging an unobtrusive use of javascript. This way you can still use the remote Rails helpers without being tied to a concrete Javascript library.

In my opinion, it just makes sense to have a server sending and receiving data in the form of JSON, and rich clients taking all the responsibility of rendering the UI and handling user interactions using HTML, CSS and Javascript. The key is that this approach enables you to architect the client part of your application. If some part of the client behavior is left to the server, it becomes more difficult to have a consistent architecture in the client because concerns are mixed. And without a consistent architecture in the client, it is difficult to provide the user experience that is more and more demanded in today’s web applications.

So, if you want to render everything in the client using Javascript, you need a good system for creating the HTML, and this is where javascript templating systems come in. In this post I would like to explain how to test PURE Javascript templates using RSpec. I chose PURE because it is a production-ready system for rendering JSON data. The other solid approach seems to be JTemplates, but it doesn’t appear to be as actively maintained as PURE. There is also a proposal, initially submitted by Microsoft, for including a templating system in jQuery. There is already a demonstration implementation of the proposal, but it is still in the incubation phase.

Tools

I am using RSpec and an amazing set of tools for running javascript from Ruby code:

  • Harmony, which wraps Johnson and env.js, letting you run javascript code in a DOM environment from Ruby.
  • Holy Grail: the Harmony plugin for Rails. It allows you to run javascript code in the context of your view tests.

Just follow the instructions on the Harmony and Holy Grail sites to install them. For installing Holy Grail with RSpec, read this article by Ken Mayer.

The example

For the sake of simplicity I will try to keep the PURE part as minimal as possible. I want to render the following person object.

{"name":"Jorge"}

And I will use the following PURE template:

<div id="person-template" class="name"/>

Although in practice you will be using PURE directives for sure, for the example I will use the PURE auto-rendering mechanism.

What I want is to write an RSpec view spec to test that the template renders the JSON as expected:

describe "persons/_person_template.html.erb" do
  it "should render a proper container for the person" do
    person = {:name=>"Jorge"}
    render_javascript_template(person) do |rendered_text|
      rendered_text.should have_selector("div.name", :content=>"Jorge")
    end
  end
end

The helper method render_javascript_template receives a model object to render and yields the result of rendering the template to a block. The rendered text can then be checked using, in this case, Webrat matchers. The code of this helper method is shown below.

module ViewHelpers
 
  def render_javascript_template(data)
    render
    include_javascript_files
    yield do_render_javascript_template(data)    
  end
 
  def include_javascript_files
    %w{jquery.js pure.js}.each {|file| js("load('public/javascripts/#{file}');")}
  end
 
  def as_javascript_string(text)
    text.split("\n").collect{|line| "'#{line}'"}.join('+')
  end
 
  def template_id_from_path(template_path)
    template_path.gsub(/^(.+\/_)/, "").gsub(/_/, "-").gsub(/.html.erb$/, "")
  end
 
  def do_render_javascript_template(data)
    json_data = as_javascript_string(data.to_json)
    template_id = template_id_from_path(self.class.description_parts.first)
    js("var data=$.parseJSON(#{json_data});")
    js("var $template = $('##{template_id}').clone().appendTo($('<div></div>'));")
    js("var renderedElement = $template.autoRender(data);")
   
    return js("$('<div></div>').append(renderedElement).html()")
  end
 
end

The render_javascript_template method has to do basically two things before rendering the template using PURE:

  • Rendering the template that contains the HTML of the template (in this example it is the ERB file itself).
  • Including the required javascript files of PURE and JQuery.

The order of these steps is important: if you don’t invoke render before requiring the javascript files, jQuery won’t work properly.

The method that does the work is do_render_javascript_template. It basically does three things:

  • It converts the JSON of the data to be rendered into a form suitable for being injected into the javascript runtime as a string literal (javascript doesn’t allow multi-line string literals).
  • It then obtains the CSS id used to locate the template node we want to use, deriving it from the file path of the template, which is specified in the outer describe of the spec.
  • Finally it invokes the PURE auto-rendering process with the template. The only trick is that it appends an artificial container to the template, because PURE fails if the template node doesn’t have a parent. In the same way, before calling the jQuery function html(), an artificial root node is added because jQuery ignores root nodes of elements detached from the DOM.

Conclusions

While I test all my javascript code using JSpec, it didn’t fit well for testing my PURE templates. I had read about Harmony and I wondered if it would be possible to test PURE templates using RSpec and the powerful Webrat matchers. It turned out to be not only possible, but a surprisingly fast and comfortable way of testing the templates. I am looking forward to seeing a templating system finally included with jQuery, but PURE is a powerful and fast choice that works today. I really think Javascript templates are here to stay.

The code for this post is available as a gist on GitHub.
