Bio: Eric Hodel is a long-standing member of the Ruby community. He was one of the founding members of Seattle.rb and is part of the Ruby Hit Squad, creators of the deployment automation tool Vlad the Deployer. Eric is currently the maintainer of RubyGems, the de facto packaging system for Ruby libraries and applications.
I've been working with Ruby for about six years now. I think I first heard about Ruby shortly after the first Ruby conference. For all of those six years I've been living in Seattle, and I am one of the founding members of the Seattle Ruby brigade. I've released about 40 different projects from various disciplines and for various purposes, from serious stuff to joke stuff that nobody would ever use.
RubyGems is the de facto packaging system for Ruby libraries and Ruby applications, and it works pretty much like FreeBSD's ports system, MacPorts, Debian's packaging tools, or whatever. It bundles up your Ruby code and provides tools for creating packages, listing them, downloading them from online repositories, installing them, managing them, and all that good stuff.
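At the center of each package is a gem specification describing the library. A minimal sketch (the name, version, author, and file list here are hypothetical placeholders, not a real gem):

```ruby
require 'rubygems'

# A minimal gem specification; name, version, author, and files are
# hypothetical placeholders, not a real published gem.
spec = Gem::Specification.new do |s|
  s.name    = 'hello'
  s.version = '1.0.0'
  s.summary = 'A tiny example library'
  s.authors = ['A. Hacker']
  s.files   = ['lib/hello.rb']
end

puts spec.full_name  # => "hello-1.0.0"
```

The same specification drives building (`gem build`), listing, and installing the package.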
Just about everybody is using the package repository on RubyForge; that one is built into RubyGems, but anybody can run their own gem repository. For example, Rails runs their own, where they put snapshots of Rails that they consider stable so people can test them out. I know Jim Weirich, also one of the RubyGems developers and the developer of Rake, has his own, and Why the Lucky Stiff has his own, where they both host snapshot gems.
The current released version is 0.9.4, and I'm going to be releasing 0.9.5 in the next week or so hopefully, if there are no more issues to work out.
The biggest two are that platform support is now built right in, so it looks at how Ruby was built, figures out what the platform is, and uses that for automatic installs; and the second one is that it doesn't ask you questions any more when you go to install a new gem. It used to ask for each dependency whether you wanted to install it, and we decided that being more like FreeBSD's ports or MacPorts, where it just goes in and installs everything and does what you want, is a much better way to go. So now installing stuff is much more streamlined, and people who are deploying Rails apps will get a lot of benefit from that too, because they don't need any tools to type "Yes" all the time, and they don't have to specify the flag to make gems install automatically.
Yes. The primary use of the platform support is precompiling for Win32 users, because most people on Windows don't have a compiler on their system. There are only a handful of gems for various other platforms, only two or three, but I've worked with the JRuby people to make sure the platform support will work for JRuby too, so if they have a JRuby-specific gem with a jar file or Java class files, it will work for them as well.
RubyInline was created by Ryan Davis after he looked at Perl's Inline, and what it allows you to do is write some C code right in the middle of your Ruby code; it eliminates the extconf.rb setup and the Makefile. So if you want to write some Ruby code and C code together test-first, it makes it easy, because it's all in one file, and every time the file gets updated it'll go ahead and recompile and reload itself. You can just have autotest sitting there and work test-first on your C code and your Ruby code at the same time; you don't have to stop and rebuild every time you change the C code a little bit.
Yes, you will need a compiler. There is a seldom-used feature in RubyInline: if you want to distribute a RubyInline package, you can generate the shared object that gets loaded, and the Inline packaging tool will build you a gem with the prebuilt version in there for you. So you could easily do a platform gem that way.
A lot of the Win32 gems do the same thing, but most of them use extconf or an equivalent to build and package them, while RubyInline can do it automatically.
The ParseTree stack was the largest piece of software, in terms of difficulty, that we've worked on. We also spent some time implementing Ruby and some of the Ruby core classes in Ruby itself on the MetaRuby project, and we had BFTS, which is a test suite based on an older test suite that was written largely by Dave Thomas. Some of those tests have been used in JRuby and Rubinius.
The idea we had was that it's easier to work on Ruby if it's written all in Ruby, because more people understand Ruby than understand C, so it reduces the barrier to entry for people to make changes. That was our primary goal, and to support it we decided to start with the core libraries first. So to make sure that Array and String and so forth worked, we had to write tests for those, because currently they're not exhaustively tested, and that's what we needed to make sure our implementations were compatible.
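A sketch of the kind of core-class behavior tests this involves, using bare assertions rather than the actual BFTS framework (the assertion helper and the chosen edge cases are mine, for illustration):

```ruby
# Toy stand-in for a test framework assertion.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Array#flatten edge cases: nested empties, depth limits.
assert_equal [1, 2, 3], [[1], [2, [3]]].flatten
assert_equal [1, [2]],  [1, [[2]]].flatten(1)
assert_equal [],        [[], [[]]].flatten

# String#squeeze edge cases: with and without a character argument.
assert_equal 'abc',   'aabbcc'.squeeze
assert_equal 'abbcc', 'aabbcc'.squeeze('a')

puts 'all assertions passed'
```

It is exactly these boundary behaviors (depth arguments, empty nesting, selective squeezing) that an alternative implementation must reproduce to be compatible.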
One of the things I'd like to finish up pretty soon is a new release of Heckle, because there are still some outstanding issues with it that haven't been fixed, and there have been some changes to ParseTree, which it uses.
Heckle, Ryan describes it as Test::Unit sadism. What it will do is run your tests once to make sure they all pass, then change pieces of your implementation one at a time, rerunning the tests each time. It will go through and change an if to an unless, and when that change is made, some test in your test suite should fail. Then it goes on to the next change, and at the end it reports back with a diff output of all the changes it made that didn't cause a test failure. Each of those is an edge case you missed, or maybe even a whole set of tests you missed, so with this you can tell how good your test suite is. Unfortunately, sometimes, if your tests or your code are poorly factored or really tightly coupled, it won't actually give you any useful information, because changing one thing anywhere will cause a test failure no matter what, and in that case it's hard to extract useful information other than "something's wrong". Unfortunately.
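The mutation-testing idea can be sketched in a few lines of Ruby. Note this toy rewrites source strings, while the real Heckle mutates the AST via ParseTree, and every name and mutation below is made up for illustration:

```ruby
# Toy sketch of Heckle's idea. The real Heckle mutates the AST via
# ParseTree; this version just swaps substrings in the method source.
SOURCE = 'def adult?(age) age >= 18 end'

# Each mutation should make some test fail; if none does, the test
# suite has a hole.
MUTATIONS = { '>=' => '>', '18' => '17' }

def run_tests
  adult?(18) && adult?(99)   # note: no test for a non-adult age!
end

survivors = []
MUTATIONS.each do |from, to|
  eval SOURCE.sub(from, to)                     # install the mutant
  survivors << "#{from} -> #{to}" if run_tests  # tests still pass: missed case
end
eval SOURCE                                     # restore the original

p survivors  # => ["18 -> 17"]: the missing boundary test let it survive
```

Changing `>=` to `>` breaks `adult?(18)` and is caught, but lowering the threshold to 17 slips through because nothing tests a non-adult age; that surviving mutant is exactly what Heckle would report as a diff.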
The way that Heckle is implemented, it doesn't work on the source; it works with the AST via ParseTree, so it can work at a higher level. Another interesting project is SuperCaller. What is that?
I first wrote SuperCaller when I was trying to write an inliner for Ruby. What my inliner would do is figure out that one method called a second method, count how many times the first method called the second one, and if it reached a certain threshold it would take the AST of the called method, do a couple of transformations on it, and inject it into the call site, making the calling method bigger. I decided I would go ahead and punish myself and start doing this on Rails code, and some of the tricks they do for reloading didn't behave very well; I couldn't tell what a method looked like after 3 or 4 methods may have been inlined into it. So I wrote SuperCaller to attach extra information to Ruby's exception backtraces, so I could go through and say: "OK, this method actually looked like this when I called it", because by the time the exception was raised it would be entirely different from how it reads when I pull up the source file. There are a few other things I threw in that were easy to do: you can ask for the local variables, or pull out what object you are on. It ends up looking a lot closer to what Rubinius's stack frames, or Smalltalk's stack frames and backtraces, would look like.
Yes, it's mostly a debugging tool, but it could also be used for evil, because it overwrites the caller method that gives you the backtrace. You can actually call SuperCaller from anywhere, then walk up the stack, find all the selves, and do things with them if you wanted.
Yes, you can get to things that Ruby doesn't allow you to get to.
The Ruby Hit Squad's primary goal is to find complex software that we can't otherwise refactor or break down, or whose authors are not open to the changes and simplifications we want to make, and make those changes. So it's not about being against any particular person; it's about being against particular code.
One of the things was that Capistrano was highly focused on Rails deployment, and there are people who want to use the tool for general system administration outside of a Rails context; Wilson was having some difficulties with that. The other thing is that it uses Net::SSH, which isn't exactly like what you get with the SSH command line, so we decided we would rather build on top of regular SSH calls. Also, Capistrano has its own entire dependency mechanism for specifying the ordering of tasks, by just listing what happens after and before what. Instead we wanted to go with Rake, because it already has all of those dependency mechanisms built in, in a way that is familiar to a lot of Rubyists already.
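The Rake prerequisite mechanism referred to here looks like this; the deployment task names are hypothetical, not Vlad's actual recipes:

```ruby
require 'rake'
include Rake::DSL

# Hypothetical deployment tasks. Vlad leans on Rake's built-in
# prerequisite chain instead of a custom before/after mechanism.
run_order = []

task(:update)              { run_order << :update }
task(:migrate => :update)  { run_order << :migrate }
task(:restart => :migrate) { run_order << :restart }

# Invoking the last task runs its prerequisites first, in order.
Rake::Task[:restart].invoke
p run_order  # => [:update, :migrate, :restart]
```

Declaring `:restart => :migrate` is the whole ordering story: any Rubyist who has written a Rakefile already knows how to reorder or extend the deployment.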
To make things as simple as possible for us, so that it's hopefully also easy for other people to understand and either patch or find bugs if they have problems.
I believe we were sitting around on IRC and we were just kind of joking back and forth and the name Vlad just popped out because it had a very nice ring to it.
Once we got down to just using Rake and SSH, there really wasn't much to Vlad. We got permission from Rails Machine to include a bunch of their tasks and recipes, which allowed us to have a bunch of things built in that you can use right out of the box. We also went with a simpler way of handling source code repositories than Capistrano did: if you want to check something out, we really just have a string that we substitute the right directory and repository into, while Capistrano has a larger class-based mechanism that was slightly harder for us to figure out how to use when we were looking at it.
We found it rather odd that Capistrano didn't include a lot of recipes by default and instead was very barebones, leaving other people to write the recipes. I think it would be really nice if Capistrano had a much better out-of-the-box experience. What we'd like Vlad to have is coverage of the 80% use case: things like, when you get to a machine, you want to set up the database, run the initial migrations, and create the databases. We haven't gotten to the creating-the-databases part yet, but we have all the standard tasks for Mongrel, and we're looking in the next release to support lighttpd and maybe even FastCGI on Apache as well.
We are also working on Win32 support, so that people who are running Vlad from Windows can use it.
It's a problem that we don't have Windows boxes, so we decided not to try it in the first release, and we figured that some brave Windows user would step up and say: "Hey, if you just change these couple of things it will work". I think we've gotten a couple of those suggestions or patches to do that.
The Seattle Ruby brigade was started about 5 to 5½ years ago. Myself, Ryan Davis and Pat Eyler were the primary people; we would just hang out and talk Ruby. Since then Pat has moved to Salt Lake City and the users' group has grown. Originally we would do a meeting every month with a presentation format, and we found that a lot of the time the presenters were the same two people, namely myself and Ryan. A lot of other people either were not confident enough to give a presentation, or not interested enough, or didn't want to come to a meeting because they didn't like the presentation topic, so we gradually switched over to a hacking format where people just come to hack. They bring their laptop and say: "Hey, I've got this problem: I am trying to do this thing with Rails, how do I do that?", or "I want to help somebody else with it", or "Hey, I am new to Ruby, what should I do to get started?". So it's primarily a self-guiding thing now.
It depends a lot on how nice the weather is. During the summer there are about 15-20 people showing up, during the winter when it gets darker and not as nice outside then we'll get 20-30 people showing up.
Since we meet every week, we get a lot of people who come in every third, fourth or fifth week. I think that has actually been helpful in growing the group, because they know it's going to be there every week and there are going to be people to hang out with. We get one or two new people showing up every other week, I'd say, and since we tell them it's every week, they may not show up again for another month or so, but we probably grow by a couple of people every three months.
I think what's been really helpful is that we now meet in a coffee shop, so they have drinks there; actually they serve beer, so if somebody wants a beer they can get that. There is good food nearby.
It's an open space, so we write down that we are going to use the room, and we don't actually pay anything for it, but we do encourage anybody who comes to tip well, because the staff have to deal with all these people. The space being free of charge certainly helps. The hacking nights, I think, are definitely key, because it's easier for someone who is new to Ruby to ask somebody sitting next to them: "Hey, I am trying this out and I am not figuring it out", than it is at the end of a presentation, when things are kind of breaking up. It seems to be more intimidating to say in front of a big group of people that you are having a problem than it is when somebody is right next to you.
Really, I've been devoting a lot of my time to RubyGems lately, and I haven't had much time to work on other things. The last thing I started before RubyGems was a Sphinx search engine tool for ActiveRecord called "Sphincter"; it's a nice, lightweight, as easy as I could make it, simple-to-use plug-in for Rails.
October was the target month for inclusion, and as we were discussing this on the Ruby Core mailing list, there were several questions from the Japanese Rubyists about how exactly we were going to integrate it: the memory-consumption aspects of loading up RubyGems all the time, how we'd specify the load path, and many other questions. During that month I was also doing a contract, so I didn't have the time to focus on it and work through those issues; it was difficult to figure out what exactly the plan was. So last night I sat down with Koichi and Matz, and also Rich Kilmer, and we worked out a plan for how we are going to include it and what's going to be different in 1.9 from how it works in 1.8, such that RubyGems is active all the time.
The main difference is that you won't have to require 'rubygems' for require to work. It's going to go through the gems, figure out what the most recent versions are, and add those paths to the load path by default; then, if you want a specific version, you can require RubyGems and use the gem method to load up that specific version.
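For example, under 1.8 the version-pinning flow looks something like this. The rake gem is used here only because it ships with Ruby, and the version constraint is illustrative:

```ruby
require 'rubygems'

# Under 1.8 you require 'rubygems' first; the `gem` method then pins
# an acceptable version before the require. Under the 1.9 plan the
# first line becomes unnecessary for plain requires.
gem 'rake', '>= 0.8'
require 'rake'

puts Rake::VERSION
```

The `gem` call raises if no installed version satisfies the constraint, so version conflicts surface at activation time rather than deep inside a require.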
It shouldn't. As far as the management of the packages themselves goes, that won't be any different, and I don't think the load path will be any different than it is now. The only difference is that users who have RubyGems won't have to require 'rubygems' first in 1.9. The other thing we agreed on, which Debian had been asking for, is a vendor directory for Ruby libraries. My personal opinion was that I didn't want to add that to RubyGems if it wasn't in Ruby, because it should be something that Ruby decides, and since 1.9 has added it, I'll add it to RubyGems for 1.9.
There has been some discussion on the Ruby Core mailing list about which libraries to unbundle from 1.9, and consensus hasn't been reached on where those gems will be placed. I feel that there should be a separate repository for those, on ruby-lang.org or somewhere like that, and RubyGems could easily support that with a mechanism that automatically updates which sources it searches. But the details of that haven't been decided.
Memcached was written by Brad Fitzpatrick for LiveJournal, and it's a memory cache with no replication that is just shared across multiple machines. To access it, you have some key that describes your data; you hash that against a list of servers, and then you figure out which server to go and fetch the data from. Everything is in memory, and it's very fast compared to going and hitting a database. LiveJournal originally used this to speed up pretty much everything on their sites, so that they have as few database hits as possible, limited to when they're necessary. There are a number of clients for this, and the library I am maintaining was originally written by my co-worker Robert Cottrell; he wrote it up, I released it, and I continue to maintain it and add features to it. The original goal wasn't to make a fully protocol-compliant library; it was just to support what we needed, and we've gradually accepted features for the various pieces we'd been missing, expanding it and fixing whatever bugs people find.
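The key-to-server step described here can be sketched like this. The hash function choice and the server addresses are made up for illustration; real clients use fancier hashing, but the principle is the same:

```ruby
require 'zlib'

# Sketch of client-side server selection: hash the key, take it
# modulo the server list. Addresses are hypothetical.
SERVERS = %w[10.0.0.1:11211 10.0.0.2:11211 10.0.0.3:11211]

def server_for(key)
  SERVERS[Zlib.crc32(key) % SERVERS.size]
end

# Every frontend computes the same mapping, so a given key always
# lives on one agreed-upon server with no replication needed.
puts server_for('user:42')
```

Because the mapping is deterministic, the servers never need to talk to each other: clients agree on where each key lives purely by computing the same hash.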
It communicates directly with the memcached server. It's a very simple text-based protocol currently. The memcached developers are working on a new binary protocol, and when that becomes stable I'll look at implementing a binary version of the memcache client, but that's still a long way out as far as I can tell.
The company I originally worked for, the Robot Co-op, uses it for 43 Things and their various websites. I've talked to Blaine from Twitter, and he says they use it both for accessing memcached and with a special queuing library he wrote that speaks the memcached protocol. I am sure there are various other sites I just haven't heard about that also use memcached; it seems to be fairly popular.
Significantly different from all the compared projects.
1. All the listed projects primarily package software developed by other people. This means that each maintainer packages a much higher number of packages, and that the community is focused on packaging to a much larger degree than the people who produce Gems. It also means there is much more quality control around the packaging process itself (plus often some cross-testing to make sure that various packages work together, and registration of conflicts when they turn out not to).
2. All the listed projects are strongly tied in with the underlying operating system, and do work to make the packages actually work correctly there. RubyGems does the opposite: it provides guarantees that are in direct contradiction with those many operating systems give, thus more or less making sure that what is distributed as Gems does not fit easily with the rest of the operating system. (Actually, the "guaranteed brokenness" is only present on traditional Unix platforms; the platform policies of Mac OS X, Windows, Gentoo and some others are reasonably close to Gems.)
I find the differences between Gems and Ports/Debian/etc. packaging to be so large that it is misleading to use the OS packaging systems as an analogy to describe Gems. Instead, say what Gems is: a system for distributing Ruby programs and libraries, where the author releases it and is responsible for making it work everywhere - albeit in a somewhat foreign way - and where the author is responsible for avoiding conflicts.