After upgrading to Snow Leopard, running sudo port caused errors:
no suitable image found
no matching architecture in universal wrapper
I was able to fix the problem by installing the Snow Leopard compatible version of MacPorts found on the download page.
Once you’ve installed the new MacPorts you’ll want to make sure the ports tree and base sources are the latest available:
sudo port selfupdate
Then I ran a command to recompile:
sudo port upgrade --force installed
For reference, here’s the full error I was seeing before the fix:
dlopen(/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib, 10): no suitable image found. Did find:
/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib: no matching architecture in universal wrapper
("package ifneeded Pextlib 1.0" script)
invoked from within
"package require Pextlib 1.0"
(procedure "mportinit" line 382)
invoked from within
"mportinit ui_options global_options global_variations"
Error: /opt/local/bin/port: Failed to initialize MacPorts, dlopen(/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib, 10): no suitable image found. Did find:
/opt/local/share/macports/Tcl/pextlib1.0/Pextlib.dylib: no matching architecture in universal wrapper
I had some trouble getting JRuby 1.3.0 and ZenTest 4.1.3 autotest to work together. Part of the problem is that the subprocess that launches autotest ignores any attempt to turn ObjectSpace on. The other part of the problem is how jruby parses -u after the unit_diff command. Here’s a way to hack in fixes for each problem:
Open zentest.rb in the ZenTest gem lib directory and add the following to the top.
if RUBY_PLATFORM == 'java'
  require 'jruby'
  JRuby.objectspace = true
end
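To sanity-check that the hack took effect, a quick standalone sketch like this (illustrative, not part of ZenTest) confirms ObjectSpace is usable; on MRI the guard is skipped because ObjectSpace is always on:

```ruby
# Enable ObjectSpace when running on JRuby; MRI has it on by default.
if RUBY_PLATFORM == 'java'
  require 'jruby'
  JRuby.objectspace = true
end

# If ObjectSpace were still disabled, each_object would raise.
# With a block, each_object returns the number of objects yielded.
string_count = ObjectSpace.each_object(String) { }
puts string_count > 0
```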
Open autotest.rb and, in the Autotest class’s initialize, change
self.unit_diff = "unit_diff -u"
to:
if RUBY_PLATFORM == 'java'
  self.unit_diff = "unit_diff"
else
  self.unit_diff = "unit_diff -u"
end
For too long we’ve let the JRuby core contributors be the only voice for JRuby. I for one am guilty of taking and taking and taking from the tireless and thankless work the JRuby team has done. Charles, Ola, Tom, Nick, Vladimir and many others need to be thanked.
Almost all of the JRuby projects I’ve been aware of or a part of are nowhere to be found on blogs, twitter or any other techno-coder communication flavor of the month. These projects aren’t going to become popular or have the codista spotlight on them. Most JRuby work is done in the deep inner workings of the corporate bureaucratic sinkhole that is enterprise IT. JRuby work is hidden behind non-disclosure agreements and kept secret because of the technological edge secrecy provides. The great stories haven’t been told and Charles is only able to hint at them because they really aren’t his to tell.
This is one such story and I hope that this post encourages other JRubyists to speak up and at least share parts of their JRuby experience. You owe it to the JRuby team and the Ruby community in general.
I’ll start out by being blunt and if you want to dismiss the rest of the post due to the next sentence then go ahead and move along because this post is not for you. JRuby is fantastic. The rest of this post will hopefully explain why that statement is true.
I joined a project that started out using MRI to wrap a C library, which built into a gem. The C library is a financial analytics package used to price instruments and extract contract specifications. Working in C with MRI was easy. The Ruby C API methods are simple and you almost get the sense you’re working in Ruby. Everything was lollipops and gumdrops just as working with Ruby should be. A rails application was built to display data provided by the gem. As the rails community moved through new deployment strategies so did we moving from webrick to mongrel with lighttpd, etc.
Then some of the business specifications required pricing to be done on hundreds of thousands of instruments at a time. An order of magnitude change in usage made speed and memory consumption very important. Pricing that many instruments in 3 months is not helpful. It needed to be done in parallel.
With this many instruments needing to be priced, my team and I created a simple system for distributing the data and processing it in parallel using DRb, similar in many ways to Hadoop. Had we been using JRuby at the time, Hadoop would have been perfect to wrap, but MRI didn’t give us that option.
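Our actual system is behind an NDA, but the DRb pattern can be sketched roughly like this; the class, names, and batch logic here are illustrative, not our real code:

```ruby
require 'drb/drb'

# A shared work queue that workers (normally on other machines) pull
# batches of instruments from. Purely illustrative.
class PricingQueue
  def initialize(instruments)
    @instruments = instruments
    @mutex = Mutex.new
  end

  # Hand the next batch to whichever worker asks first.
  def next_batch(size = 100)
    @mutex.synchronize { @instruments.shift(size) }
  end
end

queue = PricingQueue.new((1..1_000).to_a)
DRb.start_service('druby://localhost:0', queue)  # port 0 picks a free port

# A worker process would connect with the server's URI and pull work:
worker = DRbObject.new_with_uri(DRb.uri)
batch = worker.next_batch(10)
puts batch.first  # the first instrument id in the batch
DRb.stop_service
```

In the real system the workers ran on a grid of machines and the "instruments" were far heavier objects, but the pull-a-batch-over-DRb shape was the same.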
Right around this point in time MRI became a huge bottleneck. MRI wasn’t going to handle the over 6 million objects we needed to push through DRb and even if it could get data to the workers on a grid of machines it couldn’t fully utilize all the cores on each machine. A combination of running out of memory and MRI failing to fully utilize all the cores of 64-core servers ground the project to a halt.
JRuby 1.0 had just been released and was starting to gain some traction. A 1.0 project? Certainly that couldn’t handle these problems. With nowhere else to go we took the parts built on the C Ruby API and moved to a C, JNI, Java and JRuby stack. The new stack of tools wasn’t lollipops and gumdrops, but if it worked then who cares? Not me. I enjoyed the polyglot work, passing ruby objects into C callbacks and unit testing C from JRuby. Mind stretching stuff.
Turns out JRuby had no problems managing 8GB of memory and 6 million plus objects being passed around over DRb. Having the JVM do memory management for your Ruby objects isn’t that bad. I didn’t have to care about it anymore and not caring about the JVM is light years ahead of caring about MRI memory management. Yes, there is real value to using the JVM.
Additionally, JRuby fit into the rest of our MRI system; MRI had no problems talking to JRuby over DRb. I ran into a few problems with IO and Socket, but Charles and Ola were available via IRC and the problems were fixed in a matter of days. The availability of the JRuby team is something I haven’t found in any other community. Charles always put my questions before his other tasks, and if you know anything about the man, he is busy. I don’t know how many talks he’s done recently, but his twitter messages list so many cities I’m not sure where he lives anymore.
The initial pricing times came in at around an hour and fifteen minutes. Not bad considering the client was ok with 2 days. JRuby FTW!
Now the story could end here and I’d consider the transition to JRuby a success, but the story goes on.
Tweaking the JVM options allowed us to move the time to about 45 minutes once we upgraded to JRuby 1.1.1. I added some of my findings to the performance tuning wiki page, which you can find here. When was the last time you heard of someone passing options to MRI’s garbage collector and seeing performance increases?
Exporting this much data turned out to be a problem as well. Excel’s 256 column limit wasn’t too happy about my 9,000+ column files, and the standard ruby spreadsheet gem had trouble handling anything more than 7MB. Fortunately, the Apache POI (Java) project could handle these problems and offered other features like auto-sizing columns and freeze panes, which no MRI compatible gem could provide (yes, there are Ruby POI bindings). I never thought I’d enjoy working with POI/Excel’s API, but JRuby plus the POI libraries had me smiling. Excel with a ruby feel rocks. Using JRuby to wrap pre-existing Java solutions is a great way to sleep at night.
Next we moved our Rails apps over to JRuby by deploying them as wars in JBoss. The mongrel management problem was gone, and JBoss turned out to be much faster anyway, provided you give it enough memory. Nick Sieger has done some great work with warbler and the process was a breeze. Unfortunately, with the number of apps we moved over, the DBAs were starting to get upset about the 60+ database connections we used. Rails 2.2 wasn’t around yet, so connection pooling inside of Rails wasn’t an option, but using JNDI inside of JBoss worked perfectly. Using preexisting Java tools with an adapter written by the JRuby team made my job a lot easier again.
Meanwhile, JRuby was still releasing new versions. Using 1.1.3 moved our pricing time to about 15 minutes. Yes, from 1 hour and 15 to 15. There were some other tweaks we made along the way, but the most significant improvements came from JRuby itself. In its current state the C/Java/JRuby API is now exposed through Merb (http/json), DRb (druby), Rails (http/xml, http/html) and an Excel plugin, and more opportunities are ahead.
We’re able to upgrade to a new version of JRuby within a day of release. Yes, it is that stable and easy to switch. Yes, 1.1.5 is currently in our production environment. Upgrading to new versions of MRI was usually a nightmare for me so I welcome the stability. JRuby being a jar has some wonderful benefits.
I won’t go into detail about the other libraries we’ve wrapped with JRuby, including JFreeChart and QuickFIX/J. I won’t go into detail about using JRuby with CORBA or RMI and the many other tools that become available to you with the use of JRuby.
Currently, MRI isn’t even installed on our production servers and I don’t see it being installed in the future. Most if not all the ways that the data is available or usable is due to JRuby. JRuby made my job much easier and many of the features I’ve implemented possible. Give it a try.
While JRuby is getting a lot of attention for its ability to run Ruby threads across multiple cores, there are some other performance advantages that Java’s toolkit hands you as a Rails developer. Not only does using JDBC allow connections to be used concurrently, but managing the database connections becomes an easier process as well.
I’ve been using JNDI (Java Naming and Directory Interface) to manage Oracle database connections for multiple warred up Rails projects deployed inside of a JBoss application server.
Setting up connection pooling is an easy process when you can find good documentation. Hopefully this will aid the setup process.
If you aren’t using Oracle for your database (MySQL, PostgreSQL, etc) the instructions don’t change much.
First you’ll need to know where you are deploying wars inside your jboss directory (your jboss context). This is usually at <path to jboss>/server/default/deploy where ‘default’ is the context. In the context directory we’ll need to copy the JDBC driver you’ll use into the ‘lib’ directory. I’m using Oracle so the jar will be ojdbc14.jar, ojdbc5.jar, or ojdbc6.jar. Copying the database drivers into lib means that JBoss will automatically add the jar to the classpath.
Next we’ll move into the ‘deploy’ directory off of our JBoss context. You’ll want to create an XML datasource file. Since I’m using Oracle, my file is named oracle-ds.xml and looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>DevelopmentOracleDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@odev.domain.com:1523:dbdev</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>user</user-name>
    <password>pass</password>
    <min-pool-size>0</min-pool-size>
    <max-pool-size>5</max-pool-size>
    <blocking-timeout-millis>10000</blocking-timeout-millis>
    <idle-timeout-minutes>5</idle-timeout-minutes>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml -->
    <metadata>
      <type-mapping>Oracle9i</type-mapping>
    </metadata>
  </local-tx-datasource>
</datasources>
JBoss should automatically pick up the JNDI name and make it available.
Next we need to edit our rails database.yml file to look like this:
<% if defined?($servlet_context) %>
development:
  adapter: jdbc
  jndi: jdbc:DevelopmentOracleDS
  driver: oracle.jdbc.driver.OracleDriver
<% else %>
development:
  adapter: jdbc
  driver: oracle.jdbc.driver.OracleDriver
  url: jdbc:oracle:thin:@odev.domain.com:1523:dbde
  username: user
  password: pass
<% end %>
Because ActiveRecord will no longer be holding a persistent database connection, you’ll need to return connections to the JNDI pool after each request using the following code provided by Nick Sieger.
# config/initializers/close_connections.rb
if defined?($servlet_context)
  require 'action_controller/dispatcher'
  ActionController::Dispatcher.after_dispatch do
    ActiveRecord::Base.clear_active_connections!
  end
end
Redeploy your rails application and now your database connections are pooled.
In July I’ll be speaking at OSCON 2008 with Dave Hoover. We’ll talk about our experiences with Apprenticeships on Open Source. I’ll provide the apprentice side and Dave will be providing the mentor side.
Additionally, I’m sure you’ll receive some insights from Dave’s upcoming book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman. I encourage you to read some of the initial chapters as they are a great source of wisdom and inspiration for anyone who is looking to grow in the field of software development.
The past few weeks, I’ve used my train ride home to dig deeper and deeper into Erlang. Then, after an OS X update released on May 28, 2008, my MacPorts erlang installation stopped working! Running erl caused a “bus error” to occur, and trying to recompile erlang caused the same bus error. The directions below work whether you have an existing erlang installation or you’re trying to compile erlang without a prior installation.
Here’s how I was able to get it working again:
Open the erlang Portfile with your favorite text editor; I prefer vi.
sudo vi Portfile
Delete this line (the flag that turns on HiPE) from the configure.args attribute:
--enable-hipe
The MacPorts configuration doesn’t depend on the enabling of HiPE and erlang will work fine without it. By default HiPE (Hi-Performance Erlang) isn’t enabled or supported on Mac OS X so I’m not sure why the Portfile enables it. HiPE is a project aimed at creating a faster Erlang by compiling to native code. You can find out more about HiPE here.
sudo port uninstall erlang
sudo port install erlang
You should see a message that says: Portfile changed since last build; discarding previous state.
Use erlang again!
If you want to know when the MacPorts issue is solved you can follow the Trac ticket here.
To those of you who plan to use WinISD for predicting subwoofer maximum SPL:
Be wary of using WinISD to predict maximum output. WinISD can be a great tool for predicting speaker behavior at small amplitudes, but it is a little naive concerning the non-linearities that occur at high amplitudes.
WinISD makes all of its predictions from T/S parameters, which are small-signal specifications and are not always scalable to larger power levels. You hope they don’t change in order to use them as an indication of performance, but they are better used for deciding on box size and box type.
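As a concrete example of the kind of decision T/S parameters are good for, here’s the classic small-signal sealed-box alignment sketched in code; the driver numbers below are made up for illustration, not from a real datasheet:

```ruby
# Small-signal sealed-box alignment: putting a driver in a smaller box
# raises both the system resonance (Fc) and the total Q (Qtc) by the
# same factor, sqrt(1 + Vas/Vb).
def sealed_box(fs:, qts:, vas:, vb:)
  factor = Math.sqrt(1.0 + vas / vb)
  { fc: fs * factor, qtc: qts * factor }
end

# Hypothetical driver: Fs = 25 Hz, Qts = 0.4, Vas = 100 L, in a 60 L box.
box = sealed_box(fs: 25.0, qts: 0.4, vas: 100.0, vb: 60.0)
puts box[:fc].round(1)   # system resonance in Hz
puts box[:qtc].round(2)  # total system Q
```

This is exactly the regime where T/S math is trustworthy: small signals, linear behavior, no thermal or excursion effects.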
It’s also important to remember that two different speakers with the exact same T/S parameters can behave extremely differently at a given X-max. With small drivers it isn’t that big of a deal since X-max isn’t a big factor, but high excursion drivers are a different story. It’s safe to say that frequency response changes with power output, especially at lower frequencies, and WinISD doesn’t model those changes since it models an ‘ideal’ world.
By ideal I mean WinISD assumes that a driver won’t suffer from power compression and more generally assumes that a driver’s response won’t change as additional power is applied.
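That linear, no-compression assumption boils down to very simple math, roughly this sketch (the sensitivity and power figures are made up):

```ruby
# The idealized model: every doubling of power adds 3 dB, forever.
# Real drivers compress, so treat the result as an upper bound,
# not a prediction.
def ideal_spl(sensitivity_db, watts)
  sensitivity_db + 10 * Math.log10(watts)
end

# A hypothetical 88 dB (1 W / 1 m) driver fed 500 W:
puts ideal_spl(88.0, 500).round(1)  # ~115 dB in the ideal world
```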
Take a look at AV Talks Tests and Ilkka’s Tests. You’ll notice that every one of the tested drivers suffers from power compression at lower frequencies, and the lower you go the greater the problem becomes. As Keith Yates puts it, “Power compression is the audio equivalent of getting shortchanged.” WinISD does not give you any indication of when your design is going to get shortchanged, which makes it very hard to compare against subwoofers that have already been tested.
Another important piece missing from WinISD is any indication of how THD relates to SPL and frequency. For example, WinISD may show a maximum output of 114dB at 40Hz, but it won’t tell you how much distortion exists at that SPL.
Further reading on high amplitude scenarios: here