Archive for the ‘rails’ Category

The Closet JRubyists

with 15 comments

For too long we’ve let the JRuby core contributors be the only voice for JRuby. I for one am guilty of taking and taking and taking from the tireless and thankless work the JRuby team has done. Charles, Ola, Tom, Nick, Vladimir and many others need to be thanked.

Almost all of the JRuby projects I’ve been aware of or a part of are nowhere to be found on blogs, twitter or any other techno-coder communication flavor of the month. These projects aren’t going to become popular or have the codista spotlight on them. Most JRuby work is done in the deep inner workings of the corporate bureaucratic sinkhole that is enterprise IT. JRuby work is hidden behind non-disclosure agreements and kept secret because of the technological edge secrecy provides. The great stories haven’t been told and Charles is only able to hint at them because they really aren’t his to tell.

This is one such story and I hope that this post encourages other JRubyists to speak up and at least share parts of their JRuby experience. You owe it to the JRuby team and the Ruby community in general.

I’ll start out by being blunt and if you want to dismiss the rest of the post due to the next sentence then go ahead and move along because this post is not for you. JRuby is fantastic. The rest of this post will hopefully explain why that statement is true.

I joined a project that started out using MRI to wrap a C library, which built into a gem. The C library is a financial analytics package used to price instruments and extract contract specifications. Working in C with MRI was easy. The Ruby C API methods are simple and you almost get the sense you’re working in Ruby. Everything was lollipops and gumdrops just as working with Ruby should be. A rails application was built to display data provided by the gem. As the rails community moved through new deployment strategies so did we moving from webrick to mongrel with lighttpd, etc.

Then some of the business specifications required pricing to be done on hundreds of thousands of instruments at a time. An order of magnitude change in usage made speed and memory usage suddenly very important. Taking three months to price that many instruments wasn't an option; the work needed to be done in parallel.

With this many instruments needing to be priced, my team and I created a simple system for distributing the data and processing it in parallel using DRb, similar in many ways to Hadoop. Had we been using JRuby at the time, Hadoop would have been perfect to wrap, but MRI didn't give us that option.
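The shape of that DRb setup is easy to sketch. This is a minimal stand-in, not our actual system: PricingWorker and its price method are hypothetical placeholders for the real C-backed pricer, and a real deployment runs the worker and the coordinator in separate processes on separate machines.

```ruby
require 'drb/drb'

# Hypothetical stand-in for the C-backed pricing service each grid
# node exposed over DRb.
class PricingWorker
  def price(instruments)
    # Placeholder pricing: a real worker calls into the analytics library.
    instruments.each_with_object({}) { |inst, prices| prices[inst] = 0.0 }
  end
end

# Export the worker; port 0 lets DRb pick a free port for this sketch.
DRb.start_service('druby://localhost:0', PricingWorker.new)

# The coordinator talks to each worker through its URI.
worker = DRbObject.new_with_uri(DRb.uri)
prices = worker.price(%w[bond_a bond_b])
```

The coordinator's job then reduces to slicing the instrument list across worker URIs and merging the returned hashes.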

Right around this point in time MRI became a huge bottleneck. MRI wasn’t going to handle the over 6 million objects we needed to push through DRb and even if it could get data to the workers on a grid of machines it couldn’t fully utilize all the cores on each machine. A combination of running out of memory and MRI failing to fully utilize all the cores of 64-core servers ground the project to a halt.

JRuby 1.0 had just been released and was starting to gain some traction. A 1.0 project? Certainly that can't handle these problems. With nowhere else to go we took part of the C Ruby API and moved to a C, JNI, Java and JRuby stack. The new stack of tools wasn't lollipops and gumdrops, but if it worked then who cares? Not me. I enjoyed the polyglot work, passing Ruby objects into C callbacks and unit testing C from JRuby. Mind stretching stuff.

Turns out JRuby had no problems managing 8GB of memory and 6 million plus objects being passed around over DRb. Having the JVM do memory management for your Ruby objects isn’t that bad. I didn’t have to care about it anymore and not caring about the JVM is light years ahead of caring about MRI memory management. Yes, there is real value to using the JVM.

Additionally, JRuby fit into the rest of our MRI system because we weren’t having any problems with MRI talking to JRuby over DRb. I ran into a few problems with IO and Socket, but Charles and Ola were available via IRC and the problems were fixed in a matter of days. The availability of the JRuby team is something I haven’t found in any other community. Charles always put my questions before his other tasks and if you know anything about the man, he is busy. I don’t know how many talks he’s done recently, but his twitter messages list so many cities I’m not sure where he lives anymore.

The initial pricing times came in at around an hour and fifteen minutes. Not bad considering the client was OK with two days. JRuby FTW!

Now the story could end here and I’d consider the transition to JRuby a success, but the story goes on.

Tweaking the JVM options allowed us to move the time to about 45 minutes once we upgraded to JRuby 1.1.1. I added some of my findings to the performance tuning wiki page, which you can find here. When was the last time you heard of someone passing options to MRI's garbage collector and seeing performance increases?
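To give a flavor of what that tuning looks like: JRuby forwards anything prefixed with -J straight to the JVM. The heap sizes and collector below are illustrative examples, not the exact settings from the wiki page, and price_instruments.rb is a made-up script name.

```shell
# Pass JVM options through JRuby's -J switch; values are illustrative.
jruby -J-Xms2g -J-Xmx8g -J-XX:+UseParallelGC price_instruments.rb
```

Being able to trade heap size and GC strategy against throughput is exactly the knob MRI never gave us.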

Exporting this much data turned out to be a problem as well. Excel's 256 column limit wasn't too happy about my 9,000+ column files, and the standard Ruby spreadsheet gem had trouble handling anything more than 7MB. Fortunately, the Apache POI (Java) project could handle these problems as well as other features like auto-sizing columns and freeze panes, which no other MRI compatible gem could provide (yes, there are Ruby POI bindings). I never thought I'd enjoy working with POI/Excel's API, but JRuby plus the POI libraries had me smiling. Excel with a Ruby feel rocks. Using JRuby to wrap pre-existing Java solutions is a great way to sleep at night.
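A rough sketch of what driving POI from JRuby looks like — this assumes JRuby with a POI jar on the load path, and the file, sheet and column names here are invented for illustration:

```ruby
require 'java'
require 'poi.jar' # hypothetical path; point this at your POI jar

# JRuby lets you alias Java classes and call them with snake_case methods.
HSSFWorkbook = org.apache.poi.hssf.usermodel.HSSFWorkbook

book  = HSSFWorkbook.new
sheet = book.create_sheet('prices')

header = sheet.create_row(0)
header.create_cell(0).set_cell_value('instrument')
header.create_cell(1).set_cell_value('price')

sheet.auto_size_column(0)      # the auto-sizing no MRI gem offered
sheet.create_freeze_pane(0, 1) # keep the header row visible

out = java.io.FileOutputStream.new('prices.xls')
book.write(out)
out.close
```

It reads like Ruby, but the heavy lifting is all battle-tested Java underneath.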

Next we moved our Rails apps over to JRuby by deploying them as wars in JBoss. The Mongrel management problem was gone and JBoss turned out to be much faster anyway, provided you give it enough memory. Nick Sieger has done some great work with Warbler and the process was a breeze. Unfortunately, with the number of apps we moved over, the DBAs were starting to get upset about the 60+ database connections we used. Rails 2.2 wasn't around yet so connection pooling inside of Rails wasn't an option, but using JNDI inside of JBoss worked perfectly. Using pre-existing Java tools with an adapter written by the JRuby team made my job a lot easier again.
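The war-and-deploy cycle itself is short. Assuming a stock 'default' JBoss context and an app named myapp (both placeholders), it boils down to something like:

```shell
# Package the Rails app as a war with Warbler and drop it into JBoss.
gem install warbler
cd myapp
warble war
cp myapp.war $JBOSS_HOME/server/default/deploy/
```

JBoss hot-deploys anything that lands in the deploy directory, so that copy is the whole release step.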

Meanwhile, JRuby was still releasing new versions. Using 1.1.3 moved our pricing time to about 15 minutes. Yes, from 1 hour and 15 minutes to 15. There were some other tweaks we made along the way, but the most significant improvements came from JRuby itself. In its current state the C/Java/JRuby API is now exposed through Merb (http/json), DRb (druby), Rails (http/xml, http/html) and an Excel plugin, and more opportunities are ahead.

We’re able to upgrade to a new version of JRuby within a day of release. Yes, it is that stable and easy to switch. Yes, 1.1.5 is currently in our production environment. Upgrading to new versions of MRI was usually a nightmare for me so I welcome the stability. JRuby being a jar has some wonderful benefits.

I won’t go into detail about the other libraries we’ve wrapped with JRuby including JFreeChart and QuickFIX/J. I won’t go into detail about using JRuby with CORBA or RMI and the many other tools that become available to you with the use of JRuby.

Currently, MRI isn’t even installed on our production servers and I don’t see it being installed in the future. Most if not all the ways that the data is available or usable is due to JRuby. JRuby made my job much easier and many of the features I’ve implemented possible. Give it a try.


Written by syntatic

November 25, 2008 at 11:11 am

Posted in programming, rails


Connection Pooling for Rails on JRuby using JNDI and JDBC


JRuby is getting a lot of attention for its ability to use multiple cores, where MRI's green threads are stuck on a single kernel thread. But Java's toolkit hands you other performance advantages as a Rails developer too. Not only does using JDBC allow connections to be used concurrently, managing the database connections becomes an easier process as well.

I’ve been using JNDI (Java Naming and Directory Interface) to manage Oracle database connections for multiple warred up Rails projects deployed inside of a JBoss application server.

Setting up connection pooling is an easy process when you can find good documentation. Hopefully this will aid the setup process.

If you aren’t using Oracle for your database (MySQL, PostgreSQL, etc) the instructions don’t change much.

First you’ll need to know where you are deploying wars inside your JBoss directory (your JBoss context). This is usually <path to jboss>/server/default/deploy, where ‘default’ is the context. In the context directory we’ll need to copy the JDBC driver you’ll use into the ‘lib’ directory. I’m using Oracle so the jar will be ojdbc14.jar, ojdbc5.jar, or ojdbc6.jar. Copying the database driver into ‘lib’ means that JBoss will automatically add the jar to the classpath.

Next we’ll move into the ‘deploy’ directory off of our JBoss context. You’ll want to create an XML datasource file. Using Oracle my file is named oracle-ds.xml and it looks something like this (the connection URL, credentials and pool sizes are placeholders for your own):

<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>DevelopmentOracleDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@dbhost:1521:SID</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>user</user-name>
    <password>pass</password>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml -->
    <metadata>
      <type-mapping>Oracle9i</type-mapping>
    </metadata>
  </local-tx-datasource>
</datasources>

JBoss should automatically pick up the JNDI name and make it available.

Next we need to edit our Rails database.yml file. Each environment’s entry ends up looking something like this (the connection URL and credentials are placeholders):

development:
<% if defined?($servlet_context) %>
  adapter: jdbc
  jndi: jdbc:DevelopmentOracleDS
  driver: oracle.jdbc.driver.OracleDriver
<% else %>
  adapter: jdbc
  driver: oracle.jdbc.driver.OracleDriver
  url: jdbc:oracle:thin:@dbhost:1521:SID
  username: user
  password: pass
<% end %>

Because ActiveRecord will no longer be holding a persistent database connection, you’ll need to return connections to the JNDI pool after each request using the following code provided by Nick Sieger.

# config/initializers/close_connections.rb
if defined?($servlet_context)
  require 'action_controller/dispatcher'

  ActionController::Dispatcher.after_dispatch do
    ActiveRecord::Base.clear_active_connections!
  end
end

Redeploy your rails application and now your database connections are pooled.

If you’re using GlassFish the team over at LinkedIn has setup instructions on their blog.

Written by syntatic

August 20, 2008 at 1:10 am

Posted in jruby, rails


Tools in the Studio

with one comment


Obtiva’s Studio is busy churning out projects and I thought it would be good to let the rest of the world know what we are up to. Most of our Rails projects are now using CruiseControl.rb, ZenTest, restful_authentication, gems, query_trace, attachment_fu, Rcov, redgreen, exception_notification and mocha. While the list may seem long, we are always looking for new tools. If you have any suggestions please comment.

As a team we’ve set up Growl integrations for cruise, autotest and redgreen, which I strongly suggest. Make TDD easier for yourself. We really need to put out a tutorial on how to set all of this up properly. In my opinion you are doing yourself a disservice without them.

A quick side note, grep -r is mostly dead around here due to ack. Try ack, you’ll love it.


We’ve also pushed a lot of our interest into JRuby and Erlang. We are all extremely excited for the opportunities those two tools will provide. JRuby’s memory usage has our mouths drooling. If you are not paying attention to Charles Nutter’s blog you are missing out! The pace of everything surrounding JRuby is astounding. Merb and Sinatra are on our radar.


Joseph Leddy is deep in the bowels of ActiveWarehouse and FasterCSV, where he is making millions of SQL rows consumable for our clients. Joseph is also exploring ETL Tool. He’s also been aggressively implementing state machines alongside access control. Tools unique to Joseph are query_analyzer and tail_logs, which I’m eager to take a look at. Joseph recently implemented some multi-server file uploading using BackgrounDRb — with tests!

Nate Jackson’s work involves sphinx via acts_as_sphinx mashed into will_paginate and aspell. He’s created an intelligent word suggester for misspelled words and phrases using raspell. Nate spent a day or two scraping the web with hpricot, WWW::Mechanize, csspool and sass. Nate’s also pushing the studio into NetBeans for Ruby, RSpec, Dvorak and Leopard. Nate likes to include svn_tools and dot.rake in his projects.

Dave Hoover‘s working on innovative interfaces with Ajax and BackgrounDRb. Dave has picked up AR::Extensions as a hammer for memory and speed intensive ActiveRecord imports. He’s also weaving together fleximage and attachment_fu in a few projects. I don’t know much about it yet, but Dave seemed pleased with ZIYA and Flash chart delivery. Dave’s spent some of his time plunging into Sinatra too. Dave’s editor of choice is TextMate. Other things in his camp include: liquid, RedCloth, and chronic.

Dave also released a gem called TamTam, built on hpricot, that inlines CSS. You can find the gem here. Dave and Nate paired up to create Obtiva’s first OS X widget here. I paired up with Dave to create a Rails plugin for TamTam too, which is named inline_css.

Ryan Platte has put together some sweet mashups with GWT, AIM Presence API, BackgrounDRb and ActiveScaffold. While Ryan wasn’t a huge fan of ActiveScaffold I was impressed. His editor of choice is rails.vim. For testing he is using UnitRecord to speed up his test suite among its other benefits. Ryan is now promoting the use of factories over fixtures. Ryan and Gareth demoed Ruby Prof to the rest of the studio. Ryan introduced me to the wonderful world of GNU Screen. The little exposure I’ve had to his projects has me very impressed.

Gareth Reeves is doing work in Event Driven Architecture and Event Driven Programming. He introduced me to testing Java with jMock.

I’ve been working with Amazon ECS, Streamlined and a session bridge between rails and Perl’s CGI Session. I strapped subdomain/SEO love onto a project using request_routing, url_for_domain, and acts_as_sluggable. Nate and I pulled ActiveMerchant into a project which was much less painful than expected. If you’re doing complex condition building my tool of choice is condition_builder. I also extended restful_authentication so that it can support authentication for multiple types of users.

Written by syntatic

November 30, 2007 at 9:42 am

ruby on rails: merge! ‘params’ with a hash indifferently

with 3 comments

When calling merge! on a hash from params, key/value pairs are not clobbered if the hash being merged in uses symbols as keys.

If params[:colors] contains:

params[:colors] = { "blue" => false, "green" => true }

and sym_hsh contains:

sym_hsh = { :blue => true, :red => false }

a merge! of the two will result in this:

=> { "blue"=> false, :blue => true, :red => false, "green" => true }

Notice the duplicate blue key and mixed key types. Some are symbols and some are strings.
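The same behavior is easy to reproduce in plain Ruby, with a string-keyed hash standing in for params[:colors]:

```ruby
# String-keyed hash, as params delivers it, merged with a symbol-keyed
# hash -- "blue" and :blue are different keys, so nothing is clobbered.
colors  = { "blue" => false, "green" => true }
sym_hsh = { :blue => true, :red => false }
colors.merge!(sym_hsh)

colors.size    # => 4 keys: "blue", "green", :blue, :red
colors["blue"] # => false, untouched
colors[:blue]  # => true, added alongside it
```

Which of the two "blue" values your code sees then depends entirely on whether it looks up a string or a symbol.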

Using Rails you probably haven’t run into this problem much. Behind the scenes, most Rails hashes are made indifferent to key type. Indifference protects developers from running into problems with strings and symbols as hash keys.

The Rails code base uses the class HashWithIndifferentAccess, allowing hashes to use strings and symbols interchangeably.

When creating new hashes in a Rails project it is best to sidestep the problem the same way Rails does. You can do this by creating hashes in two different ways:

hsh = HashWithIndifferentAccess.new(:blue => true, :red => false)

or this which does the above for you:

{ :blue => true, :red => false }.with_indifferent_access

Now the merge! will clobber :blue instead of appending another :blue key.

Written by syntatic

November 28, 2007 at 10:01 pm

Posted in programming, rails

Get In Over Your Head


Today was spent weaving a sabotaged rails project back together. Usually, I would tell you all the gory details but I’m a little concerned. I enjoyed the experience. Actually, I did more than enjoy the experience. I was in ecstasy all day. Today was the most enjoyable work day I’ve had in a month or two, but why?

Being unchallenged, or on the same project every day, causes me to lose hope. I would venture to guess that your job is the same way. If you are a developer then I know your job is that way. Any self-respecting developer hates to be bored. We hate any work day in which we do not grow by learning something new. As a developer, few things are worse than being stagnant.

Everything that I did today was fresh. I advanced my abilities as a developer and rescued a poor abused rails app from being replaced by static html. The horror! I was in an environment I’d never been in before and was probably in over my head. I loved it. It felt like I kicked the crap out of whoever left rails and the server in that state of affairs.

From now on, I’ll be fighting for another opportunity to jump into some totally screwed up project where I get to throw a few punches again.

Written by syntatic

August 23, 2007 at 6:01 pm

Posted in programming, rails

Dynamic finders: modifying find_by_id for legacy tables


One of my projects at Obtiva integrates heavily with a legacy database that isn’t “rails friendly”. None of the primary keys are named id, so I use set_primary_key “model_id” all over the place along with some other workarounds.
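For illustration, those workaround models look something like this — the table and column names here are invented, not the project's real schema:

```ruby
# Hypothetical model over a legacy table whose key isn't named id.
# (Rails 2-era class-level API.)
class Instrument < ActiveRecord::Base
  set_table_name  "legacy_instruments"
  set_primary_key "instrument_id"
end
```

ActiveRecord then uses instrument_id for find, save and associations, but as the rest of this post shows, the dynamic finders don't get the memo.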

I ran into an interesting problem when integrating technoweenie’s restful authentication into an existing user model. The plug-in raised an error when trying to run User.find_by_id and logged a no method error. I quickly found that, in their current state, dynamic finders aren’t aware of a primary key being set.

Since Rails advertises itself as opinionated software the behavior makes sense; however, if you’re writing highly portable code, say for a plug-in, you might want to put a warning in the instructions. Some of your users can’t rely on a standard Model.find_by_id(1).

For the sake of time, I went ahead and modified the authentication plugin, but it would be nice if find_by_id checked for the set primary key and fell back to find_by_<primary_key> before raising an error. On that note, I think I have an idea for my first simplistic Rails plug-in.

UPDATE: Instead of modifying the authentication plugin it is probably a better idea to override the find_by_id method in the User class, which would look something like this:

def self.find_by_id(id)
  find_by_user_id(id) # assuming the legacy primary key is user_id
end

Written by syntatic

May 23, 2007 at 3:25 am