Key Takeaway Points and Lessons Learned from QCon San Francisco 2008
In this article, we present the views and perspectives of many of the attendees who blogged about QCon, so that you can get a feel for the impressions and experiences of QCon San Francisco (November 2008). From the first tutorials to the last sessions, people discussed many aspects of QCon in their blogs. You can also see numerous attendee-taken photos of QCon on Flickr, as well as hundreds of tweets that went out about QCon over Twitter.
This QCon was InfoQ's fourth conference and the second annual event in San Francisco. The event was produced in partnership with Trifork, the company that produces the JAOO conference in Denmark. There were over 460 registered attendees, with 60% of them being team leads, architects, or above. 80% were attending from North America and 20% from Europe, Asia, and South America. Over 100 speakers presented at QCon San Francisco, including Kent Beck, Martin Fowler, and Tim Bray. QCon will continue to run in the US around November of every year, and QCon London will run March 11-13, 2009.
Table of Contents
* Practical Erlang Programming
* Certified Scrum Master
* Getting started with JRuby and JRuby on Rails
* Domain Specific Languages
* The Zen of Agile Management
* Martin Fowler and Rebecca Parsons: Agilists and Architects -- Allies not Adversaries
* Kent Beck: Just You Wait
* Tim Bray: Application Design in the context of the shifting storage spectrum
RESTful Web Integration in Practice
* HTTP Status Report
* RESTful Enterprise Development
* Building RESTful Web Services with Erlang and Yaws
* Introducing Real-World REST
* Designing Enterprise IT Systems with REST
* Golden Rules for Managing your Architecture
* Open Standards Development: Opportunity or Constraint?
Performance and Scalability
* Architecting for Performance and Scalability: Panel Discussion
* Teamwork Is An Individual Skill: How to Build Any Team Any Time
* Agility: Possibilities at a Personal Level
* Responsive Design
* Transcendence and Passing Through the Gate
Ruby in the Enterprise
* MERB: When Flexibility and Performance Matter
* Scaling Rails for the Enterprise
Cloud Computing: The Web as a platform
* Web as Platform: State of the Market
* Google App Engine and the Google Data APIs
* "Hooking Stuff Together" - Coupling, Messaging, and Conversations
Functional and Concurrent Programming Languages Applied
* Haskell and the Arts
* Strongly Typed Domain Specific Embedded Languages
* Buy a Feature: An Adventure in Immutability and Actors
* The Fundamentalist Functional Programmer
Effective design and Clean code
* Behaviour-Driven Development - a road to effective design and clean code
* 10 Ways to Improve Your Code
* Radical Simplification Through Polyglot and Poly-paradigm Programming
RIA in the real world: The Evolution of the Client
* Hard Rock: Behind the Music with Silverlight 2
* Flex and Air in the Trenches: 6 Months of Real World Enterprise Experiences
* Real world GWT: Why Lombardi built Blueprint with GWT and why you should use it for your next RIA
Domain Driven Design
* .NET Domain-Driven Design with C#: How to keep your domain model clean while working inside of frameworks
* Rebuilding guardian.co.uk with DDD
* Unshackle Your Domain
* Strategic Design
DSLs in Practice
* Domain Specific Languages in Erlang
Architectures you've always wondered about
* Digg.com Software Architecture
* Facebook: Science and the Social Graph
* Teaching Machines to Fish: How eBay Improves Itself
Alternatives in the .NET Space: Open Source, Frameworks and Languages
* The Joys and Pains of Long-lived Codebases
* Introduction to F#
* TDD in a DbC World
* Volta: Distributed .NET anywhere
Data Storage Rethinking: Document Oriented Distributed Databases
* CouchDB from 10,000ft
Java Emerging Technologies
* From Concurrent to Parallel
* Panel: How does the Open Source trend in Java affect your design and development process?
* Scaling Hibernate
* Unleashing the Fossa: Scaling Agile in an Ambitious Culture
Opinions about QCon itself
Inspired by QCon
Ola Bini enjoyed this tutorial:
The morning before our tutorial I spent in the Erlang tutorial, which was fun. Francesco is a very good teacher, and we got through lots of material.
Srini Penchikala enjoyed this class:
I attended the Certified Scrum Master (CSM) class hosted by Martine Devos from the Object Mentor Project Management team. Martine is an excellent CSM trainer. The tutorial was very educational and informative for people in technical management who want to become Agile project managers. The class was interactive, letting students discuss several real-world project management scenarios with each other and the rest of the group.
Bjorn Rochel attended this tutorial, and said:
What was interesting in this talk in particular is the fact that most of the things that apply to JRuby and Java also apply (or will apply) to IronRuby and .NET:
- The reuse of existing IT infrastructure. In the not-so-distant future, Rails will be able to run on top of .NET and IIS. No existing Ruby infrastructure will be needed to get it running, and no new server or infrastructure know-how will be needed to run Rails in a .NET environment.
- The integration between a statically typed platform and Ruby in both directions. C# will be able to integrate dynamically typed code (via the C# dynamic keyword), and pure Ruby applications can run on top of .NET. Besides that, IronRuby will have excellent integration with the .NET framework libraries and will be able to extend its view of the .NET world with Ruby concepts (I think it's called monkey patching). Opening up and extending .NET types from IronRuby, and support for snake casing for members on .NET Framework built-in types, are some of the nice features.
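The "monkey patching" mentioned above is the ability to reopen an existing type at runtime and attach new behavior to it. Ruby (and IronRuby, on .NET types) supports this directly; as a rough sketch of the same idiom, here it is in Python, which also allows classes to be extended after the fact:

```python
# Monkey patching: reopening an existing type at runtime to add behavior.
# Order and with_tax are illustrative names, not from any real library.

class Order:
    def __init__(self, total):
        self.total = total

# Later -- possibly in another module -- we "open up" Order and
# attach a method it was not originally defined with.
def with_tax(self, rate=0.08):
    return round(self.total * (1 + rate), 2)

Order.with_tax = with_tax  # the patch: bind a new method onto the class

order = Order(100.00)
print(order.with_tax())  # existing instances see the new method too -> 108.0
```

The power (and danger) is the same on any dynamic platform: the patch is global, so every user of the type sees the new member.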
Ola Bini attended this session:
I've been in this tutorial several times, but it just keeps getting better. Especially Rebecca's pieces on parsing turned out to be very well polished this time. And of course, they are all great presenters.
As did Chris Patterson:
My favorite takeaway was the styles shown for internal DSLs (also called fluent interfaces). An internal DSL is built using the language itself rather than creating a new grammar. There are plenty of great examples of fluent interfaces, including the FluentNHibernate project and StructureMap's registry DSL. The immediate value of an internal DSL is the understanding gained from reading code written using this style of interface.
Razvan Peteanu had a detailed summary of this tutorial, including:
The theme of the day-long class is straightforward: bad management breaks software development more than anything else. That it does is practically a truism: making decisions without enough discrimination and knowledge breaks anything, not only software development.
The emphasis is on 'more than anything else'. Albeit not perfect, the agile approach, practiced sensibly, performs well enough so that the hot potato is no longer with the developers themselves. It is time to fix the act of managing development and yes, given the human nature, dealing with the programming methodology was the easy part.
Many people wrote about this keynote, including Denis Bregeon:
I will retain the single strongest thought of the talk: enterprise architecture is a stakeholder in an application, among many other stakeholders. And the second strongest thought: enterprise architects are useful for bringing a cross-cutting view of a company's technology portfolio.
The Wednesday keynote with Martin and Rebecca was about architecture, and how agile can help architecture groups with their problems as well as help bridge the gap between developers and architects that often exists in larger organizations. Very well done.
The main conference for QCon started Wednesday, opening with a keynote by Martin Fowler and Rebecca Parsons from Thoughtworks. The topic was Architects and Agilists - Allies not Adversaries. As you can surmise from the title, this talk focused on building a collaborative relationship between ivory tower architects and the teams responsible for delivering software. With a scene from the Matrix where Neo meets The Architect as the backdrop, a number of solutions were presented after highlighting the key disconnects between the two roles. I've always been a firm believer that architects who do not code (NCA's) are not nearly as effective as those who understand the issues of those actually writing code.
I think they made a very good case for using incremental delivery mechanisms and the transparency that Agile can provide to aid architects. Buuuuuuttttttt, in order for their recommendations to work, I think that large companies will have to dramatically change their attitudes towards the people that build software. Hands-on skills and application architecture skills have to be held in much higher regard by many companies than they are today.
Part of the problem with architecture is that it is hard to define. It may be simply design when we want to make it sound more important. I remember a guy I used to work with for whom everything was architecture. "If you think about the architecture of these two classes..." Martin liked Ralph Johnson's definition of an architect as the person who thinks about the hard stuff -- whatever that happens to be for a particular organization.
Another problem is that architects have a hard time defining success for their role, and an even harder time attaining it. Sometimes that's because of organizational dysfunction, like companies that insist that architects not write any code. Other times it is through poor practices, like inventing frameworks in anticipation of a yet unproven need.
The QCon San Francisco 2008 conference was opened with an interesting keynote by Rebecca Parsons and Martin Fowler. In their talks they addressed the often strained relationship between traditional architects and agile development, and how to improve this relationship to the benefit of both the agile development team and the architects. These benefits include cross-project and cross-department knowledge exchange, sharing of the architects' many years of experience with the developers, and only working on the architecture that is actually needed.
It made me think about our policy here at 2paths: everyone codes to some extent. As an architect myself, I may not write large volumes of code, but I do develop code. This makes my opinion relevant when we make design decisions. Ironically, their suggestion to fix the situation was just this: architects must be part of the dev team, need to look at the code developed/checked in on a daily/weekly basis, and in the ideal case code.
And Razvan Peteanu:
Another more subtle problem noted by the speakers is the pushing out of experienced practitioners with an unstated presumption that their time has passed and that if you are still in development in your 40s (versus, say, management), you've pretty much failed. It is socially acceptable to grow in skills for a number of years, but then the expectations shift to moving up the ladder, leaving coding as a 'youth thing'.
Yet the picture is not that simplistic. In many environments, EAs do have a good grasp of the architecture their title involves. They do have a wider vision, both breadth-wise and time-wise. It would be a mistake to ignore such knowledge in favour of lower-scale goals. The problem often lies in how the EAs engage with development teams: occasional dips concluded with the dispensing of advice and diagrams are bound to create neither trust nor a working architecture. With no trust there is no agility.
The Register also covered this keynote:
"Sometimes when I give this talk people say that I'm too hard on architects," Parsons said. "But I understand that they have a very challenging job... They're put into very difficult positions, and organizations in general do them a disservice.
"The important thing to realize about this is that the problem is a systemic one," Fowler added. "It's about how the organization is set up, rather than a failure of individuals trapped within that system. No matter how good your people are, you can always mess them up with a bad process."
Eric Smith wrote about this keynote:
He gave his talk as an amateur futurist, looking at trends, imagining where they might go, and considering unintended consequences. For example, what happens if you extend the trend of releasing software more frequently? Maybe releasing software with every keystroke? Well, OK, editing a live web site would almost be that (if your editor saved continuously).
Some other trends he expects are the end of "free" stuff on the web, and a decrease in status for programmers as other people begin to understand technology more and programming seems less mystical. Even if that latter point proves true, he says that we can still make a difference in the world with what we build, and how we build it.
As did Razvan Peteanu:
Mr Beck spoke again at an evening keynote - an informal attempt to guess the trends of computing and whether some of the staples of today's landscape aren't coming down from a prolonged 'high tide' they enjoyed in the last decades. One example: relational databases, being increasingly disfavoured for flatter approaches. Note, however, the examples were mainly from the world of Google and Amazon, and not the more typical OLTP system where user-transactions, rather than data crunching with MapReduce, are the norm (plus a 'legacy' warning label on the database schema :-).
Several people discussed this keynote, including Martin Fowler:
At QCon last week, there was a strong thread of talks that questioned this assumption. Certainly one that struck me was Tim Bray's keynote, which took a journey through several aspects of data management. In doing so he highlighted a number of interesting projects.
- Drizzle is a form of relational database, but one that eschews much of the machinery of modern relational products. I think of it as a RISC RDBMS - supporting only the bare bones of the relational feature set.
- CouchDB is one of many forays into a distributed key-value pair model. Although it has a sharply simple data model (nothing more than a hashmap, really), this kind of approach has become quite popular in high-volume websites.
- Gemstone was one of the object database crowd, and I found the Gemstone-Smalltalk combination a very powerful development environment (superior to most of its successors). Gemstone is still around as a niche player, but may gain more traction through Maglev - a project to bring its approach (essentially a fusion of database and virtual machine) to the Ruby world.
But the thing that really got Tim excited was not just his impressive figures on how much faster solid state could be on the right filesystem (which was a bit of an ad for a new server released by his employer, Sun), but the fact that SSD has Moore's law on its side: as opposed to "spinning rust", SSD is all silicon, so it will only increase in price/performance over time.
As Tim says, "Ladies and gentlemen, you are looking at the future."
He mainly focused on the O/R mapping layer and the SQL engine, where technology, to quote Tim, "sucks". O/R mapping is really hard, and it's too bad that object databases never really got commercial success (Martin Fowler in the front row nods in agreement). SQL engines are too slow, driving people to distributed hash tables like memcached instead, or even skipping the database engine entirely to go to the file system. SQL engines are also complex and laden with features that never get used. This is inspiring projects like Drizzle, a database forked from MySQL in an attempt to slim things down.
Tim also had good things to say about CouchDB, a document-oriented database that is accessible only via HTTP. I don't know anything about this project, but Tim's exact words were, "I'm infatuated with this" and "frighteningly elegant".
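One distinctive part of CouchDB's document model is how it guards concurrent updates: every document carries a revision token, and a write must present the current revision or it is rejected as a conflict. The following is a minimal in-memory sketch of that rule only; real CouchDB speaks HTTP and uses hashed revision strings, and `DocStore` here is an invented name.

```python
# Sketch of CouchDB-style optimistic concurrency: documents addressed
# by id, updates rejected unless the caller holds the current revision.

class ConflictError(Exception):
    pass

class DocStore:
    def __init__(self):
        self.docs = {}  # id -> (rev, document dict)

    def put(self, doc_id, doc, rev=None):
        current = self.docs.get(doc_id)
        if current is not None and current[0] != rev:
            raise ConflictError(f"rev mismatch for {doc_id}")
        new_rev = 1 if current is None else current[0] + 1
        self.docs[doc_id] = (new_rev, dict(doc))
        return new_rev

    def get(self, doc_id):
        rev, doc = self.docs[doc_id]
        return {**doc, "_rev": rev}

db = DocStore()
rev = db.put("order-1", {"drink": "latte"})           # create: rev 1
rev = db.put("order-1", {"drink": "mocha"}, rev=rev)  # update with rev: ok
```

A writer holding a stale revision gets a conflict instead of silently clobbering someone else's update, which is what makes the model workable without locks.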
- between the application code and the disk/cache, these are the traditional layers - Object/Relational mapping, SQL engine, and OS & File System. Between layers what gets transported are these - objects from Application code to O/R mapping, then SQL to the SQL engine, then normalized tuples to OS & File systems, and finally bytes to the cache/disk.
- These layers add latency and overhead and become huge bottlenecks for web applications catering to millions of users. Hence, some application code is directly manipulating large hash tables for key/value pairs. They claim this approach makes it 100 times faster than RDBMS.
- Application code can directly map to the RDBMS, bypassing the O/R layer (which is hard). PHP does that, but it's hideous and ugly.
- How about application code going directly to the OS and file system, bypassing the entire database layer? They use XML, JSON, plain text and media files.
We kicked off the day with a talk by Tim Bray about databases and data access, the evolution of the database, and what one needs to keep in mind when designing a system that uses them. He introduced techniques such as the use of memcached, which all the big-name web apps use as a distributed cache of data retrieved from the database. He talked about Drizzle, a lightweight database that removes all the stuff that no one ever uses in major databases but that slows down the database server, and lastly column-oriented databases such as Google's BigTable. All in all he provided some great insight on tech in the database layer which warrants further investigation.
And Denis Bregeon:
For me it was a liberating talk. First because I had fallen into the habit of not thinking enough about storage, with the reflex of falling back to the familiar RDBMS type. And this is costing my current project a lot of time and effort. Tim Bray managed in fifteen minutes to break that idea by presenting the storage choices available at each point in the storage hierarchy.
For instance, when you have to persist objects, why go to object-relational mappings and an RDBMS at all? Why not use the file system directly? As a side note, in my current project we did, and had to revert to an RDBMS because our system administrators could not make file system replication work!
The Register also covered this keynote:
What is the future of the stressed stack? "It's all a moving target," Bray said. "But that means it's a good time to be in this business. Time was if you were building a serious, non-trivial, web-facing application for people paying real money, you had to use Java EE or .NET, and the OR that came out of the box, and Oracle, and this and that and the other. These days, you don't have to do any of that stuff, and people won't look at you like you're crazy. And that's a good thing."
Razvan Peteanu attended one of the interviews:
Mr Yoder was talking about how to architect for changing business rules by making a more generic model. At the end, the interviewer unexpectedly turned and asked if I had any questions myself; I had, so there went my two minutes of fame. Thirteen remaining...
Steve Vinoski enjoyed this track:
Jim Webber put together and ran the REST track, and it was one of the best tracks I've ever been a part of. Mark Nottingham is extremely knowledgeable on the REST and HTTP fronts, so he gave a very informative talk on HTTP and the work of the HTTPbis group. Ian Robinson and Stu Charlton both spoke on using REST in the enterprise (it's coming, like it or not). Leonard Richardson talked about how to judge the quality of RESTful services, but I missed most of his talk because I went out to stretch my legs and couldn't get back in because the room was so packed! I spoke about my work with REST, Erlang, and Yaws. You can get all the slide sets for this track from the QCon site.
As did Jim Webber:
The QCon REST track, as many of the speakers and attendees have blogged, was simply excellent, not a single dud talk all day.
Brendan Quinn had detailed notes about this presentation, including:
HTTP/1.1 was basically only written to "contain the damage" of 0.9 and 1.0 (vhosting, persistence, caching). Mark was involved with the WS-* stack, but he graciously apologised to the room for his sins ;-) An interesting comment regarding SOAP etc. was that "having that much extension available in a protocol is socially irresponsible - protocols are all about agreement" and you need to draw lines to make something useful. He was basically saying that WS-* allows you to do too much, giving you enough rope, and making the normal case hard just to make an extreme case possible. (Or something like that; if there's a blog post where he explains himself I'll gladly link to it instead of badly paraphrasing him.)
Mark had a neat way of saying that RESTful APIs "use HTTP as a protocol construction toolkit". They're not built on top of HTTP, they're built as part of HTTP (in a way).
Jim Webber did the same, with notes such as:
- URI length limits - some clients as short as 256 chars. Most clients allow really big URIs. Intermediaries vary - typically 4-8Kb.
- Results in people tunnelling queries through POST (no limit on entity body size), killing cacheability - may not be a problem
- Some frameworks embody this anti-pattern
- HTTPbis to recommend URIs at 8Kb
- Headers have some length limits (Squid: 20Kb in total)
- Header line continuations aren't honoured - HTTPbis cans them
Eric Smith also attended this talk:
I went over to the REST track for a little bit, chaired by Jim Webber, a noted expert on man boobs. The speaker for this session, though, was Mark Nottingham, the chair of the HTTPbis working group. They are essentially rewriting the HTTP 1.1 spec, but not trying to add any new features. Why bother? Mark explained that the existing spec is ambiguous, not well organized, not very approachable, and occasionally requires consultation with one of the authors to understand. As a result, everybody pretty much gets the main parts right, but there are a bunch of other things that are not very interoperable between implementations. HTTPbis, targeted for completion in about six months, will hopefully clarify the spec enough to eliminate incompatibilities.
Mark also spoke briefly about what might be expected for the future of HTTP, like a standard set of conformance tests, better authentication, transport over SCTP instead of TCP, and others.
Jim Webber had detailed notes from this talk, including:
"Restbucks"/"How to get a cup of coffee" article on InfoQ, reframed version of Gregor's article, framed into more of a REST modality.
whole bunch of back-office functionality involved: Order Management (supervising the fulfillment of an order), Inventory (maintaining inventory within a store), Product Management, Regional Distribution (distributing to the various locations), etc.
"Terrorist Cell Services" - self contained to do all the work it needs to, as opposed to a 3-layered architecture blown into a Service deployment.
for Services to do their job, who needs to know what?
Order Management needs to have knowledge of Product to do its job...
lots of stuff, even a distributed app, might constitute the internals of one of these Services, yet this discussion is about the Service boundary interactions.
Jim Webber had detailed notes from this talk, including:
- YAWS - Yet Another Web Server - good for serving dynamic content; embeddable, or stand alone
- Dynamic content - embed <erl> tags in HTML in a .yaws file
- Out function - ties your apps to the Web server - an arg record gives connection details to the application - a big tuple
- Ehtml avoids embedding <erl> tags, which get messy; instead you write the HTML with Erlang syntax - a structure of nested tuples
- Obligatory Emacs rant...
- Appmods - application module exports a function, is bound to one or more URIs via configuration file. Yaws calls Out function on any appmods registered for a URI.
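The ehtml idea in these notes is to represent a page as nested tuples rather than string templates, and render them to HTML recursively. As a loose analogue (not Yaws itself, and `render` is an invented name), the same shape in Python:

```python
# Ehtml-style markup: a page is data -- (tag, attrs, children) tuples --
# rendered to an HTML string by a small recursive function.

def render(node):
    if isinstance(node, str):
        return node  # leaf: plain text
    tag, attrs, children = node
    attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return f"<{tag}{attr_str}>{inner}</{tag}>"

page = ("html", {}, [
    ("body", {}, [
        ("h1", {"class": "title"}, ["Hello from tuples"]),
    ]),
])

print(render(page))
# <html><body><h1 class="title">Hello from tuples</h1></body></html>
```

Because the page is ordinary data, it can be built, inspected, and transformed with normal language constructs before ever becoming a string, which is the appeal over embedded template tags.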
Jim Webber had detailed notes from this talk, including:
- You can judge a service crudely, by seeing how far it goes up the stack: URI->HTTP->HTML (hypermedia)
- Level 0 - One URI one HTTP method - e.g. SOAP
- Level 1 - Many URIs, one HTTP method - e.g. most RESTful services that aren't (still loved, because people love URIs)
- Level 2 - Many URIs, each with multiple HTTP methods - e.g. Amazon S3
- Level 3 - Resources describe their own capabilities and interconnections - e.g. AtomPub
- The Web allows decomposition of complexity via many URIs, HTTP splits complexity by partitioning read operations (GET) and the write operations (all the others). Composing these yields sophisticated systems
- GET matters because it is constrained, and so you can optimise (level 1 services ignore these constraints)
- HTML only knows about GET and POST - tension between more specific constraints and opportunity to optimise, versus clients not understanding the enlarged vocabulary
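A "level 2" service from the scale above can be shown in miniature: many URIs, each responding to several HTTP methods with their standard semantics (GET reads, PUT writes, DELETE removes). This in-memory dispatcher is purely illustrative; `handle` and the status-code choices are a sketch, not a framework.

```python
# Level 2 in the talk's scale: many URIs, multiple HTTP methods each,
# with uniform semantics across all resources.

resources = {}

def handle(method, uri, body=None):
    if method == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if method == "PUT":
        created = uri not in resources
        resources[uri] = body
        return (201 if created else 200, body)
    if method == "DELETE":
        resources.pop(uri, None)
        return (204, None)
    return (405, None)  # method not allowed

handle("PUT", "/orders/1", {"drink": "latte"})  # create -> (201, ...)
print(handle("GET", "/orders/1"))               # (200, {'drink': 'latte'})
```

Level 3 would go further: the representations returned by GET would themselves carry links describing what the client can do next, as AtomPub does.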
Jim Webber had detailed notes from this talk, including:
- Challenge is applying REST to enterprises - enterprises haven't figured it out yet, particularly since they're in the MOM/Component/SOAP space - major headache is hypermedia
- Hypermedia is simply a different kind of abstraction between systems, and is a layer atop message passing
- REST is not file transfer - more than CRUD, is a solution for general purpose systems
- Enterprise architecture is tiered (data, business services, presentation)
- Web architecture blurs these silos, but doesn't solve every problem and has some trade offs
- Web architecture has visibility, transactions, syndication which map onto classic enterprise constraints
- Interfaces for classic services have a producer mentality, with interfaces resembling that mentality a la Conway's Law
- Move towards consumer driven contracts instead - Ian Robinson in audience agrees :-)
Kelvin Meeks wrote a summary of this presentation, including:
Erosion of architecture - a fundamental law?
Architecture erosion is quite a known problem
- System knowledge and skills are not evenly distributed
- Complexity grows faster than size
- Unwanted dependencies are created without being noticed
- Coupling and complexity are growing quickly
Typical symptoms of an eroded architecture are a high degree of coupling and a lot of cyclic dependencies
- changes become increasingly difficult
- testing and code comprehension become more difficult
- deployment problems of all kinds
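The cyclic-dependency symptom above is one of the few that can be checked mechanically: model modules as a directed graph and look for a back edge with a depth-first search. A minimal sketch (module names are invented for illustration):

```python
# Detect cyclic dependencies in a module graph with DFS three-color
# marking: GRAY nodes are on the current path, so reaching one again
# means a cycle.

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                       # on the current DFS path
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:    # back edge -> cycle
                return True
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK                      # fully explored
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

clean  = {"ui": ["domain"], "domain": ["persistence"], "persistence": []}
eroded = {"ui": ["domain"], "domain": ["persistence"], "persistence": ["domain"]}
print(has_cycle(clean), has_cycle(eroded))  # False True
```

Running a check like this in the build is one way to notice the "unwanted dependencies created without being noticed" before they accumulate.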
This panel was covered by several sites, including InfoQ:
After a couple of people from the audience mentioned that spec leads need to get out and promote their community, Floyd Marinescu, Chief Editor of InfoQ, asked if the JCP board has thought of hosting public surveys for a more "democratic approach" to taking decisions. Patrick Curran replied that they haven't thought of that, but he expressed the opinion that with the new collaboration tools being developed for the next version of the JCP site, people will have more ways to participate.
More than a few people in the audience made negative comments about the choice of Java Modularity over OSGi and how that causes confusion. Patrick Curran defended the choice of Java Modularity by saying that, although he is not an expert in the field, there are several technical reasons that led to this decision.
The hour-long discussion was dominated by concerns about the openness of the JCP and barriers to entry-level participation. Janssen complained that it was difficult for user groups to participate in the JCP because of the stiff fee. Van Riper proposed the creation of a JUG USA umbrella organization to spread the expense of joining. He also complained that there is no real community feeling in the JCP if you're not involved with a JSR. Ashley pointed to a lack of public openness in the JCP. Horstmann echoed that concern and emphasized the importance of transparency in standardization processes.
When the discussion turned to standards, Venners argued that standardizing a language adds value only when there are multiple implementations; in most cases, a language conformance kit is sufficient. Rosenthal argued that the key to making a standards organization work is basic teamwork principles. Reid talked about her organization's work to make documents accessible to disabled computer users.
Elsewhere in the JCP, Curran said all work on Java Platform, Standard Edition 7 will be done by the Java Development Kit community in an open source manner.
Also on tap from the JCP are collaboration tools for better communication in the Java standards development process. "We are going to roll out some stuff on jcp.org, which is in forums and so on, to make it a little easier for expert groups to communicate amongst themselves and between themselves and the general membership," Curran said.
There has been concern that expert groups have been operating behind closed doors, Curran said. The tools are currently in a beta form and are due in a couple months, he said.
Omar Khan attended this panel:
Next was a panel discussion on scalability that emphasized the need to understand the update requirements of the system you are designing. They proposed that there are two types of design:
- Design for scalability up front
- Design for simplicity and deal with bottlenecks as they pop up
I'm not sure if I'd ever go with option #2, but they made a good point about how many architects design for things to be bulletproof when one doesn't really have the requirement for it to be bulletproof. I fully agree with that principle. There was mention of several websites, such as Flickr, that have made some very good decisions on having that happy medium.
As did Razvan Peteanu:
Most participants were vendor CTOs (Terracotta's Ari Zilka is a good talker), with Brian Goetz brought in just minutes before. It started with Bruce Eckel posing questions to Brian, and there was good conversation afterwards about data partitioning to allow scaling. I wish the panel were longer; an hour is too short, and many interesting conversations don't get a chance to develop. There is something about really high scale and scalability that piques the interest of the audience, perhaps because pushing the technical limits brings computer science back into the less glamorous business software.
Martin van Vliet enjoyed this presentation:
One of the best sessions of the first day of QCon for me was the talk "Teamwork is an individual skill" by Christopher Avery. The talk focused on skills and habits that we can learn to become effective team members. This is becoming more and more important since most of us are in the position that people we have no direct influence over determine whether we are successful or not. A software development team is a good example of this.
Daniel Cukier was amused by one of the images in Linda Rising's presentation, which showed the effect of assorted drugs (such as caffeine) upon the webs of spiders.
Denis Bregeon wrote about this talk:
Building on previous work, he started with a definition of design and the influence of values (how do I evaluate the quality of a design) and principles (to some extent an operational effect of the values) on the design, as well as the use of patterns.
Then came the core of the talk: design strategies. Kent Beck identified four design strategies from his experience. They are really strategies to evolve a design.
- First is the leap, jumping from one design to another in a single go.
- Second is Parallel, where the old design and the new one will co-exist for a while.
- Third is the stepping stone, whereby successive intermediary steps are defined and implemented before arriving at the new design.
- Fourth is simplification, whereby the design is made for a simplified version of the problem and then enriched by reintroducing the complexity.
As did Razvan Peteanu:
Kent Beck's room-packing presentation shared the results of his active efforts to capture how he designs (without any attempt to dictate that this is how everyone ought to do it). Mr Beck's initial assertion that design was hard was expectedly received with approbatory murmurs (if both beginners and gurus say it is hard, it would be amusing to plot the perceived degree of difficulty against the skill level of practitioners. When does it reach the 'this-is-a-piece-of-cake' level?). Speaking of design, his recently-coined definition for it is 'beneficially relating elements'. There is a bit of a linguistic play here: you can read design as a noun (made of elements that are related beneficially) or as a verb (to design is to relate elements in a beneficial way).
As well as Jeremy Miller:
I had the pleasure of attending one of Kent Beck's talks here at QCon. It wasn't anything revolutionary or even informative to be honest, but what I saw was one of the masters of our craft simply reflecting over how he made design decisions. I think it's a good example to follow.
And Bruce Eckel:
Kent Beck talked about exploring the techniques he used during design, which he defined as "Beneficially Relating Elements."
- Leap (take small (safe) steps that you know will work)
- Parallel (operate two designs in parallel for awhile)
- Stepping Stone (if I had a ... I could do that)
- Simplification (Start with the simplest case, and expand on that once it's working). Art in knowing when to trim features for simplification, and when to re-add them.
Denis Bregeon attended this presentation:
This was such an interesting talk, at least for those of us who are interested in what drives us to develop software. As a side note, this came up a few times during questions after Kent Beck's keynote, but the answers were rather less interesting, being mostly about money and the good of humanity. This talk echoed a lot of ideas I have been mulling over for the past year about the philosophical makeup of Agile developers. I also have a rather keen interest in Japan.
Dave's talk was an impressionistic crash course in Zen philosophy and the arts, ranging from the degrees of enlightenment to the prowess of the fully enlightened, and what it is that the disciple seeks. I am not sure what the point of the talk was, though. I think it was a warning that agile in software development cannot be the recipe of the day. It is a way of life, a path to enlightenment (in the Zen sense), in other words a quest for perfection through constant improvement, although being a software developer is not as noble as being a monk, a warrior or even a sword craftsman.
Chris Patterson was impressed by MERB:
MERB is a high-performance, scalable MVC framework for Ruby. It is in the same space as Rails, but with a lot of optimizations to increase performance. It also supports slices, allowing features to be added as simple gems. This is one to keep an eye on as it grows.
Kelvin Meeks also noted several points from this presentation, including:
One interesting point made during this morning's Ruby presentations: Ruby's historical bad performance reputation may not be currently valid given certain performance improvements (e.g. Merb compared to raw PHP, leading PHP frameworks, Django, Rails, Code Igniter, etc.)
InfoWorld also covered this presentation:
Merb, he said, is "very suited for the enterprise world but not only [the enterprise]." It is "the fastest Ruby framework we have right now," Aimonetti said.
The technology offers the concept of Merb "slices," which serve as stand-alone miniature applications that can be mounted inside other applications, he said. Merb offers modularity and flexibility, said Aimonetti.
InfoWorld covered this presentation:
Rails applications can be scaled via techniques such as the memcached application, Pollack said in an interview after his presentation. "Really, the way you scale Rails is just like you scale any other Web app," he said.
Ruby reaches beyond the Web, Pollack said. It is being used to generate music and to maintain Linux boxes, as well as for graphics and desktop clients, he said.
Eric Smith attended this talk:
John is the founder of Programmableweb.com, a sort of catalog site for various APIs available over the Internet. In his presentation, he talked about trends in web APIs. Some trends are technical such as the rising use of REST vs. SOAP (though since REST is more a philosophy than a standard, it can be hard to classify something as REST as opposed to REST-ish). Other trends are around the ubiquity of APIs -- it is now expected that if you put useful functionality on the web that it will have an API, and not just a user interface. Even media producers like The New York Times and NPR have APIs.
What makes a good web API? First and foremost, the underlying service has to be valuable. Beyond that, the API should support the business model of the provider (eBay wants to optimize adding listings since that's how they make money) and should be easy to access both in terms of openness and the developer support provided.
Chris Patterson wrote about this presentation:
My next stop was to learn about Google's AppEngine and how it handles scalability and performance. It's currently free, and developers can build and deploy applications into the cloud using Python and Bigtable. It's an interesting engine with a completely transparent scaling infrastructure. You worry about your application, the system takes care of the rest. Large applications have been built and Google has allowed them to scale beyond the standard quota for free accounts. While final pricing has yet to be discussed, it's likely to be very competitive. There is also a planned introduction of new language support early next year, but the exact language runtime being added was kept secret.
Chris Patterson had some thoughts on this session:
Since I started working on MassTransit, I've used the Enterprise Integration Patterns book by Gregor Hohpe as a reference manual for building distributed message-based systems. In this talk, Gregor laid down some fundamentals and set the stage for a sequel to the book that will be titled Conversation Patterns. But first, the challenges of message-based systems were presented. The levels of failure are quite involved, and include things like lost request, lost response, slow response and retry duplication. A lot of these are covered in texts online, so much of this material was review for me. A new acronym for ACID was also declared: Associative, Commutative, Idempotent, and Distributed.
With large-scale distributed systems, where consistency is of a more eventual rather than immediate nature, it's important to recognize that the future is flexible and redundant rather than predictable and accurate. Building distributed transactions that are durable and that support compensation is crucial to having a successful, scalable application.
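Gregor's reworked ACID (Associative, Commutative, Idempotent, Distributed) can be illustrated with a small sketch of a message handler that tolerates redelivery and reordering; the names here are hypothetical, not code from the talk:

```python
class AccountProjection:
    """Applies 'credit' messages so that retries and reordering are harmless."""

    def __init__(self):
        self.balance = 0
        self.seen = set()  # message ids already applied (idempotence)

    def handle(self, msg_id, amount):
        if msg_id in self.seen:   # duplicate delivery: ignore it
            return
        self.seen.add(msg_id)
        self.balance += amount    # addition is associative and commutative

# Redelivered and reordered messages yield the same final state.
p = AccountProjection()
for msg in [("m2", 5), ("m1", 10), ("m2", 5)]:
    p.handle(*msg)
```

Because the operation commutes and duplicates are filtered, no distributed lock or ordering guarantee is needed to arrive at the right balance.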
Omar Khan also noted:
Gregor Hohpe gave a great discussion about principles underlying the creation of applications that use a "cloud" architecture. Cloud computing is basically the use of services from other organizations/companies within your application. Examples of this would be an application that uses Google Maps or Amazon's EC2, etc. The main take-aways from his talk were:
- Learn to live with uncertainty
The services you are using are not controlled by you so you have to design for the component being down/unavailable because your customers don't care what services you are using under the covers.
- Keep things simple and small
- Learn to properly design for asynchronous
- Embrace the new programming model
- Resist applying traditional patterns
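The "learn to live with uncertainty" point above, applied to an upstream service you don't control, might look like this minimal retry-then-degrade sketch (all names invented for illustration):

```python
def fetch_with_fallback(fetch, fallback, retries=2):
    """Call an external service, degrading gracefully when it is down."""
    for _ in range(retries + 1):
        try:
            return fetch()
        except ConnectionError:
            continue          # transient failure: retry
    return fallback           # service unavailable: serve stale/default data

def flaky():
    # Stand-in for a third-party API that is currently unreachable.
    raise ConnectionError("upstream down")

result = fetch_with_fallback(flaky, fallback={"map": "cached tiles"})
```

The customer never sees the dependency fail; they see slightly stale map tiles instead of an error page.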
Arjan Blokzijl enjoyed this track:
While attending QCon San Francisco, I had the particular pleasure of attending a whole track that was devoted to the area of functional programming, a topic that I have a profound interest in. After having followed the track, I'm even more convinced than before that functional programming is not confined to the academic world. I think that it will have a profound impact on our mental perspective and the way we think about programming and problem solving in the coming years.
Arjan Blokzijl attended this presentation:
He talked about the concept of Functional Reactive Programming (FRP), now a key area of research at Yale University. FRP is a programming style where each function can capture a time-varying quantity (for instance input sound, video, or user actions). This style of programming has applications in robotics, parallel programming and audio processing. Whenever a new input value is given to a function, it is re-evaluated and new output is returned. In his talk, he presented in particular Haskore and HasSound, domain-specific languages developed for music and sound synthesis. Paul said that functional languages are especially suited for computer music, as they are declarative (saying 'what' should be done, instead of 'how' it should be done). Haskell's abstraction mechanisms allow for musical programs that are elegant, precise and powerful, using the mechanisms of lazy evaluation, higher-order functions, algebraic data types and type classes. A basic example of music modeling in Haskell can be found here.
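The core FRP idea, re-evaluating dependents whenever a time-varying input changes, can be caricatured in a few lines of Python. This is a loose analogue for intuition only, nothing like Haskell's actual FRP libraries:

```python
class Signal:
    """A time-varying value whose derived signals recompute on every change."""

    def __init__(self, value):
        self._value = value
        self._listeners = []

    def map(self, fn):
        out = Signal(fn(self._value))
        # Re-evaluate the derived signal whenever a new input arrives.
        self._listeners.append(lambda v: out.set(fn(v)))
        return out

    def set(self, value):
        self._value = value
        for notify in self._listeners:
            notify(value)

    def get(self):
        return self._value

volume = Signal(3)                       # e.g. an input level varying over time
doubled = volume.map(lambda v: v * 2)    # a function of that input
volume.set(10)                           # new input: the dependent updates itself
```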
Arjan Blokzijl had some thoughts on this talk:
Lennart gave a couple of examples of DSLs he himself had implemented in Haskell. For me, the most striking was the case where he had implemented a DSL for generating Excel (i.e. real Excel sheets, not just a text-based CSV file). As he stated, Excel has a somewhat rudimentary abstraction mechanism, consisting only of copy-and-paste re-use. However, it is widely used by business people and is a familiar UI; therefore, the solution is to generate Excel sheets. Lennart is an advocate of strong type checking to prevent errors at compile time. He showed how he had used Haskell's type classes, and also a more obscure concept, phantom types, in his Excel DSL. The result was a DSL capable of generating Excel sheets without allowing type errors in operations, such as the addition of two cells where one contains an integer and the other a string, a thing that Excel itself does not prevent. An interesting and also amusing session indeed, showing the full power of Haskell's type system, and demonstrating a use of it in an area that probably does not spring directly to mind when thinking of Haskell.
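The phantom-type trick rejects the bad addition at compile time in Haskell; a runtime Python caricature of the same guarantee (the type tag travels with the cell, mixed additions are refused) might look like:

```python
class Cell:
    """A spreadsheet cell whose value type travels with it, like a type tag."""

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Refuse what Excel itself happily allows: mixing cell types.
        if type(self.value) is not type(other.value):
            raise TypeError("cannot add cells of different types")
        return Cell(self.value + other.value)

total = Cell(2) + Cell(3)          # well-typed: two integer cells
try:
    Cell(2) + Cell("two")          # an int cell plus a string cell: rejected
    mixed_ok = True
except TypeError:
    mixed_ok = False
```

The Haskell version is strictly stronger, of course: the error never survives past the compiler.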
Arjan Blokzijl wrote about this session:
A perhaps more familiar concept to most of us was demonstrated in a session given by David Pollak. He talked about a web application, Buy a Feature, that he had developed recently. A subject that will perhaps not raise much interest immediately; however, David had used Scala and Scala's Lift web framework, the latter being created by himself. He had been using functional programming paradigms, including Scala's Actor library to deal with concurrency, in order to implement a multi-user, web-based, real-time, serious game. As he stated, the team initially used Java's imperative programming style. However, after some time (and coaching) they gradually moved over to the declarative, functional programming style that Scala also offers. One more noteworthy statement was that none of the bugs found in the application were concurrency-related; apparently the event-driven, message-passing programming style using Scala's Actor library served him well.
As did Steve Vinoski:
I found it interesting that in Lift he uses some of the same request dispatching techniques I use in my work with Yaws, even though he's writing in Scala and I in Erlang. Functional languages rule.
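The actor style both speakers leaned on can be sketched with a queue-per-actor Python analogue: state is owned by one thread, and the only way to touch it is by sending a message.

```python
import queue
import threading

def counter_actor(inbox, results):
    """A tiny actor: private state, reacts only to messages from its inbox."""
    count = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            results.put(count)   # report final state and exit
            return
        count += msg             # no shared mutable state, so no locks

inbox, results = queue.Queue(), queue.Queue()
threading.Thread(target=counter_actor, args=(inbox, results)).start()

for n in [1, 2, 3]:
    inbox.put(n)                 # fire-and-forget message sends
inbox.put("stop")
total = results.get()            # join on the actor's reply
```

This is why "none of the bugs were concurrency-related" is plausible: there is simply no shared state to race on.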
Arjan Blokzijl enjoyed this discussion:
The day's ending could not have been much better for a functional programming adherent.
Erik Meijer himself concluded it, with a delightful talk titled 'Fundamentalist Functional Programming'. His thought-provoking argument is that we've been moving in the wrong direction for the last few decades in the way we program, and it's time to change direction: turning, as the title of this blog indicates, to pure, fundamental functional programming.
Why would this be a good thing? 'Pure' functional code has no side effects, and therefore does not alter state. With no state being altered, the order in which statements are executed does not matter, nor does the number of times a program gets executed. This makes programs easier to understand and far less error-prone. How nasty an implicit side effect of a function can be was shown by Erik with an example from the C# language. It would make this blog too lengthy to go into detail, so the interested reader should take a look at his blog.
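The purity argument is easy to demonstrate in a few lines: an impure function's result depends on when and how often it runs, while a pure one's never does.

```python
# Impure: the result depends on hidden state, so call order and count matter.
log = []
def record_and_total(x):
    log.append(x)        # side effect: mutates state outside the function
    return sum(log)

first = record_and_total(1)
second = record_and_total(1)   # same input, different answer

# Pure: same inputs always give the same output, however often it runs.
def total(xs, x):
    return sum(xs) + x

a = total([1, 2], 3)
b = total([1, 2], 3)           # re-running changes nothing
```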
SD Times covered this presentation:
"Let's just try all using the same words to mean the same thing," said North. "I was consulting in a place where they handled things called credit derivatives, where you have this concept of pricing. It requires big grids. There [was] a pair of developers struggling with this pricing algorithm. A business analyst passed by, and they had a conversation. I watched this. It was beautiful because at no point did he realize that they were talking about code. He was talking about pricing. They were having this fluid conversation. This guy didn't realize he was talking about objects.
"It's not just a thought exercise. Let's start having a shared language. A ubiquitous language is when you drive that into your software artifacts. Classes are named what they're named in the domain. What you'll find [is that] when you model stuff, you'll get work done."
Joey deVilla enjoyed one of the images he saw from this presentation, which showed a YAGNI development assistant:
YAGNI, short for "You Aren't Gonna Need It", is a development maxim that suggests to programmers that they shouldn't add features or functionality to applications that aren't necessary at the moment, but might be in the future. YAGNI has the DRY ("Don't Repeat Yourself") principle as a cousin, and among its ancestors are Occam's Razor and the KISS Principle (as in "Keep It Simple, Stupid" and not "I Wanna Rock and Roll All Night (and Party Every Day)").
Eric Smith also attended this presentation:
Neal's 10 points were mostly things I know (and even do successfully at times) such as TDD, using static analysis, YAGNI, etc. There was an interesting point on polyglot programming. He said that polyglot programming ideally doesn't mean multiple platforms, but multiple languages on a single platform. Java and .NET are both multilingual platforms, so take advantage of that to use the best language for the problem at hand while still keeping the benefits of a common runtime environment.
As an intermission, Neal talked about 10 bad smells, my favorite of which was "Our lawyers say we can't use any open source software", which led to Neal having to buy a "license" for CruiseControl (written by Neal's employer, ThoughtWorks) so he could use it with a client.
As did Kelvin Meeks, who took detailed notes including:
Top 10 Corporate Code Smells
10 - we invented our own web/persistence/messaging/caching framework because none of the existing ones were good enough
9 - we bought the entire tool suite (even though we only needed about 10% of it) because it was cheaper than buying the individual tools.
8 - We use WebSphere because...(I always stop listening at this point)
7 - We can't use any open source code because our lawyers say we can't
Ola Bini wrote about this talk:
At the end of the day, I saw Dean Wampler mix up all the free floating ideas about polyglot programming, and talk about it in something that approached a cohesive whole (which I've never been able to do). A well done presentation.
As did Kelvin Meeks, who covered many things including:
Disadvantages of PPP
- N tool chains, languages, libraries, "ecosystems"
- impedance mismatch between tools
Advantages of PPP
- use best tool for a particular job
- can minimize amount of code required
- can keep code closer to the domain
- encourages thinking about architectures
Eric Smith wrote about this presentation:
Scott is the CEO of Vertigo, a consulting firm that was named Microsoft partner of the year for 2008. It was nice to have some Microsoft technical representation since the conference so far seems heavier on the Java side. Scott spoke about building the Hard Rock Memorabilia site working with marketing firm Duncan/Channon. Duncan/Channon originally intended that the site be done using Flash, but Deep Zoom turned out to be the killer feature that made Silverlight the clear choice.
Scott said that the Silverlight story of independent graphic design and programming is completely playing out as advertised. He showed a little demo of an Etch-A-Sketch app that Vertigo programmer Michael Moser quickly coded up, but was then beautifully skinned by a graphic artist.
Eric Smith had some thoughts on this talk:
Scott Delap gave a presentation about using Adobe tools to build some online components of a video game, League of Legends. His perspective was comparing Flash/Flex to doing Java UI development with Swing. He seemed reasonably happy with the platform and development tools, but thought that things felt sort of like Java circa 2000 -- that is, not quite state-of-the-art. He is, however, a convert to declarative UI specification. Like the Java UI developers I've known, he never really trusted visual designers for Swing, but says the designer experience with Flash is great.
Scott talked about evaluating a lot of frameworks for various things (dependency injection, remoting, unit testing, functional testing, etc.) which is a typical exercise for open source development stacks.
Eric Smith attended this session:
If you work in a Java shop, it seems like a great development story: use all your familiar tools and write Java code and you get a great web app.
.NET Domain-Driven Design with C#: How to keep your domain model clean while working inside of frameworks
Eric Smith had some notes from this talk:
Tim raised an interesting question with his talk: How do you keep your domain code clean when working with frameworks? By frameworks, Tim meant broad frameworks like ASP.NET as well as more narrow, application frameworks like SharePoint. The challenge is that with frameworks, and especially the wizard/designer approaches to coding sometimes advocated by Microsoft, it can be really easy to become inappropriately coupled to persistence, specific APIs, etc.
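One common answer to Tim's question is to keep the domain object ignorant of persistence and put the framework-facing code behind a narrow interface; a minimal sketch with invented names, not code from the talk:

```python
class Order:
    """Domain object: knows business rules, nothing about frameworks or storage."""

    def __init__(self, lines):
        self.lines = lines   # list of (quantity, unit_price) tuples

    def total(self):
        return sum(qty * price for qty, price in self.lines)

class InMemoryOrderRepository:
    """Framework-facing adapter; could be swapped for an ORM- or SharePoint-backed one."""

    def __init__(self):
        self._store = {}

    def save(self, order_id, order):
        self._store[order_id] = order

    def load(self, order_id):
        return self._store[order_id]

repo = InMemoryOrderRepository()
repo.save("o1", Order([(2, 10), (1, 5)]))
loaded_total = repo.load("o1").total()
```

The coupling Tim warns about is avoided because `Order` compiles and tests without any persistence machinery at all.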
Eric Smith attended this presentation:
Phil shared a few things they learned along the way. One thing was the importance of using the language of the problem domain everywhere, which allowed even the non-technical people on the team to contribute in unexpected ways. An example of this was a big breakthrough in terms of object composition. Phil showed an original object hierarchy and a much cleaner one. He said that of course, big breakthroughs always seem obvious and trivial in hindsight, but it was a non-engineer that came up with it. Another example he cited was a time when one of the users came over to a developer's machine for a feature demo. While the developer was getting things set up, the user looked over his shoulder at the code and commented, "Does this do what I think it's doing? If so, that's not right." Because the code used the domain language for object names and methods, he was able to understand it well enough to spot a bug.
Another thing Phil and his team learned is that the database schema is not the model. The model is the code -- even though sometimes people get a false sense that if something isn't in the database, it isn't really the model.
Erik Rozendaal enjoyed this presentation:
The talk "Unshackle Your Domain" given by Greg Young was the highlight of QCon for me. An architectural approach that is relatively easy to understand, incredibly scalable, and supports a rich domain model.
In his presentation, Greg quickly pointed out some of the problems with traditional enterprise application architecture, such as JEE. This architecture usually consists of an "object-oriented" domain model encapsulated inside a procedural service layer, persisted in a relational database, flattened out on a data entry screen by the UI layer, and crunched daily into management reports by some reporting tool. With so many masters, it's no wonder that the domain model becomes anemic and the application is hard to scale.
Eric Smith wrote about this talk:
His premise for this talk was:
Not all of a large system will be well-designed.
We wish it weren't so, but it just seems to be a fact of life. He then talked about the oh-so-common situation that a development team finds itself in. We've got this legacy system, and it has all the problems that typically make us unhappy with legacy systems: poor design, old technologies, accidental complexity, etc. So what do we do?
Bruce Eckel also had some notes from this talk:
Programming "heroes" are people who provide business value by working in the core domain. Unfortunately, such "heroes" are often bad programmers who are smart enough to put themselves in the right place. Good programmers often have to clean up after them.
Instead of trying to rearchitect the whole system to begin providing business value in the third year (which never happens because you spend two years costing the company money while not providing any visible business value), you should situate yourself in the core domain. Create a facade to the underlying (bad) architecture and begin adding business value. Over time, you can change aspects of the underlying architecture when the benefits are clear.
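The facade-over-the-mess advice might look like this in miniature; every name below is hypothetical:

```python
class LegacyBilling:
    """Stand-in for the tangled legacy core nobody wants to touch."""

    def run_job_47(self, cust, amt, flags):
        if flags.get("mode") == "B":
            return amt * 2   # some inscrutable legacy behavior
        return amt

class BillingFacade:
    """New business code talks to this clean interface; the mess hides behind it."""

    def __init__(self, legacy):
        self._legacy = legacy

    def charge(self, customer, amount):
        # Translate a sane call into the legacy incantation.
        return self._legacy.run_job_47(customer, amount, {"mode": "A"})

charged = BillingFacade(LegacyBilling()).charge("alice", 40)
```

New features are written against `BillingFacade`, so pieces of `LegacyBilling` can be replaced later without touching the callers.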
Ola Bini was impressed by this talk:
Dennis Byrne gave a very cool talk on DSLs in Erlang. There is some stuff you can do that's totally unbelievable. Best talk of the day. Possibly of the week.
Omar Khan wrote about this presentation:
Joe Stump from Digg.com was up next and his session was awesome. He talked about the evolution of their architecture from the point he joined to what they are planning on rolling out early next year, along with some of the bumps they've encountered. MySQL's scalability had major issues, so they moved to IDDB and are considering using MemcacheDB, which is memcached mixed with Berkeley DB. Joe pointed out their use of MogileFS for the management of images on Digg Images, which I will be taking a look at for our current project.
Omar Khan attended this talk:
Aditya Agarwal from Facebook was great. He talked about all the different aspects of Facebook, the use of PHP/MySQL as the underlying architecture, and how it has worked out for them. They tend to use MySQL simply as a key-value store, and profile content is randomly scattered across their thousands of servers. They use LAMP for the most part but have created a framework that allows them to use services written in other languages within their application.
As did Eric Smith:
Aditya is the director of engineering at Facebook, and after hearing his presentation, I came to a couple of conclusions:
Facebook is built on the LAMP stack, but pretty much every component has been customized and optimized for their particular application. They also use memcached, which Aditya said was a huge factor in the performance of the site. This agreed nicely with Tim Bray's talk from the other day: databases are too slow for this kind of scalability. Aditya said their memcache distributed hash table consumes 25 terabytes of RAM.
- I haven't ever had to worry about scalability anywhere near this, and
- I'm kind of glad about that.
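The cache-aside pattern behind memcached's impact on Facebook can be sketched with a toy stand-in (this is the general technique, not Facebook's code):

```python
cache = {}
db_reads = {"count": 0}

def slow_db_lookup(key):
    """Stands in for an expensive database query."""
    db_reads["count"] += 1
    return f"profile-for-{key}"

def get(key):
    """Cache-aside read: try the cache first, fall back to the database."""
    if key not in cache:
        cache[key] = slow_db_lookup(key)   # populate on miss
    return cache[key]

first = get("u1")    # miss: goes to the database
second = get("u1")   # hit: served from the cache, database untouched
```

At Facebook's scale the `cache` dict becomes a 25-terabyte distributed hash table, but the read path is the same shape.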
Razvan Peteanu had some notes from this session:
Later, a presentation with impressive numbers by Randy Shoup, eBay Distinguished Architect: their automated search & recommendation engine processes 50PB daily, with 50TB added every day (no typos :-). 10% of their inventory is in flux every day, a rate of change that imposes challenges when users require accurate search results. As the presenter put it, 'our users do go to the very last page of search results and they do complain if their number differs by one from what was originally stated'. Whereas with Google, the total number of searches for "Java" is about 414 million, but you cannot see the results beyond the first thousand.
Eric Smith had some thoughts on this talk:
These are some of the lessons he learned:
I also appreciated a comment he made when answering a question: don't use an IoC container when writing tests. I've seen people do that in the past: they have a constructor injected set of dependencies and try to reconfigure the container to inject mocks. Just pass the mocks in the constructor!
- Good design at the class level is crucial.
- Bad tests make it harder to change the code rather than easier.
- TDD as refined to BDD is best (I need to learn more about this).
- Getting abstractions right not only improves the code, it opens up new possibilities.
- DRY - even a little duplication is bad.
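Smith's point about skipping the IoC container in tests comes down to constructor injection plus a hand-rolled fake; a small sketch with hypothetical classes:

```python
class Mailer:
    """Production implementation would talk to SMTP."""
    def send(self, to, body):
        raise NotImplementedError

class Welcomer:
    """The dependency arrives through the constructor, so tests need no container."""
    def __init__(self, mailer):
        self._mailer = mailer

    def greet(self, user):
        self._mailer.send(user, f"Welcome, {user}!")

class FakeMailer:
    """Records sends instead of performing them."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
Welcomer(fake).greet("ada")   # just pass the mock in, no container reconfiguration
```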
Eric Smith noted:
While I have heard a bit about F#, I've not really spent much time trying to understand it. What better opportunity could you have than to get the designer to provide an introduction? Of course learning a language is best done by writing code, not hearing a presentation. Still it was interesting to get a high-level overview and to see a few examples. One of Don's favorite things was to show the difference in lines of code between functionally equivalent bits of C# and F#. An asynchronous I/O example had three pages of C# code to a dozen lines of F#.
Don said that F# is well-suited to problems dealing with data transformation, but not so great for building user interfaces (even though all the .NET UI libraries are accessible from F#).
Eric Smith wrote about this presentation:
I've posted about design by contract in the past (part 1, part 2), and sort of intended to talk about unit testing and DbC at some point, but never got around to it. So I was interested to hear what Greg had to say. The first part of his talk was an overview of DbC, showing some spec# code to demonstrate the concepts and letting the code verifier find the problems. I noticed that Microsoft had announced a contracts library at PDC, so the exploratory work of spec# will be incorporated into the framework instead of as language extensions.
The main point of Greg's presentation is that tests and contracts are complementary. Tests should focus on the behavior (another reference to BDD) while contracts focus on constraints. Having contracts means fewer tests to write because you don't have to write all the dumb tests that check that an argument exception is thrown when parameter x is null. In fact, not only do you not need to write those tests, if you try anyway the code verifier will complain. So you almost can't write those tests, and therefore contracts push you toward testing the meaningful behavior of the code.
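Greg's division of labor between contracts and tests can be sketched with plain assertions standing in for spec#-style contracts; this is a loose Python analogue, not spec# itself:

```python
def reserve(stock, qty):
    """Contract guards the inputs and outputs; tests then cover only behavior."""
    # Preconditions: the 'contract' part, no dedicated unit tests needed.
    assert stock >= 0 and qty > 0, "precondition violated"
    assert qty <= stock, "cannot reserve more than stock"
    remaining = stock - qty
    # Postcondition: verified on every call.
    assert remaining >= 0, "postcondition violated"
    return remaining

left = reserve(10, 3)      # meaningful behavior, the part worth a real test
try:
    reserve(10, 20)        # contract violation caught automatically
    violated = False
except AssertionError:
    violated = True
```

In spec# the precondition checks would be verified statically rather than at runtime, which is what makes the "you almost can't write those tests" observation possible.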
Matthew Podwysocki expanded upon some of the ideas from this presentation in a blog post:
Lately, I've been talking about the new feature coming to .NET 4.0, Code Contracts, which is to bring Design by Contract (DbC) idioms to all .NET languages as part of the base class library. Last week, I attended QCon, where Greg Young, one of my CodeBetter cohorts, gave a talk titled "TDD in a DbC World" in which he talked about that they are not in conflict, but instead are complementary to a test first mentality. Both improve upon the usage of the other.
One question came up repeatedly, both in comments on my blog posts and at QCon itself. Many times, DbC gets pegged as being in conflict with TDD, with the claim that many people who espouse the use of DbC aren't doing TDD, and I don't think that's necessarily true. Let's look at where each fits into the general picture.
Eric Smith enjoyed this talk:
There is also a development workflow in which you write a single-tiered .NET application, then split it into client and server tiers by simply annotating the code -- saying which bits should execute on the server and which on the client. Greg Young, from the previous presentation, immediately started pushing back, citing the 8 fallacies of distributed computing. Erik simply skipped ahead a little to a slide showing the 8 fallacies, and assured the audience that programmers would still be in control enough to avoid the pain.
Omar Khan was impressed with CouchDB:
The discovery of the day was CouchDB. One of the lead developers/designers was sitting at the same table as me during the Digg session. Tim Bray was quite excited about the technology as well, having mentioned it in his keynote on day 2. I attended one of two sessions that covered what it's about at a high level. It has its roots in Lotus Notes database design, given that it was created by someone who worked on Lotus Notes in the past. It has some very cool features:
- RESTful so it doesn't require any connectors
- Stores all content in JSON
- Very efficient and performs versioning and synchronization when distributed
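CouchDB's revision-tracked documents can be approximated with a tiny in-memory sketch; this illustrates the optimistic-concurrency idea behind `_rev`, not CouchDB's actual API:

```python
class TinyDocStore:
    """In-memory caricature of revision-tracked documents."""

    def __init__(self):
        self._docs = {}

    def put(self, doc_id, doc, rev=None):
        current = self._docs.get(doc_id)
        if current and current["_rev"] != rev:
            # Writer held a stale revision: reject, like a CouchDB conflict.
            raise ValueError("conflict: stale revision")
        new_rev = (current["_rev"] + 1) if current else 1
        self._docs[doc_id] = dict(doc, _rev=new_rev)
        return new_rev

    def get(self, doc_id):
        return self._docs[doc_id]

store = TinyDocStore()
rev1 = store.put("d1", {"title": "hello"})
rev2 = store.put("d1", {"title": "hello v2"}, rev=rev1)   # update with current rev
try:
    store.put("d1", {"title": "stale"}, rev=rev1)          # stale rev: conflict
    conflict = False
except ValueError:
    conflict = True
```

Revision tracking of this kind is what makes the versioning and synchronization features in the list above workable across distributed replicas.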
Omar Khan wrote about this talk:
The next talk was about the Java 7 concurrency library and the need for making things concurrent, given that processors are not getting faster but are moving towards multiple cores. Therefore, we as developers need to ensure our applications make everything run in parallel (concurrently). One might feel that if you are running a website, where every request is handled by a different thread, your job is complete. However, how many of us receive enough requests 24x7 to keep all the cores busy? We need to make sure our applications are decomposed so that everything that can be done concurrently is done concurrently, to ensure maximum CPU utilization, decreased response time and thus an improved user experience.
As did Razvan Peteanu:
Straight after, Brian Goetz got hard-core with the fork-join concurrency framework planned for JDK7. The reason awaits us on the horizon: multi-core chips with lots of processors. Intel's CPU speed hasn't increased since 2003; what has kept Moore's Law valid is the proliferation of cores. It is still in a mild phase (we enjoy 4- and 8-way machines) and as long as their number is lower than the size of an app server's thread pool, spreading the load at that level is still fine. With the rumoured 256-core CPU to be made by Intel by 2010, the game changes.
Making efficient use of those processors requires parallelization of finer granularity, inside the processing of a single request. As Mr Goetz put it, "many programmers will become concurrent programmers, perhaps reluctantly". Machines with hundreds or thousands of processors have been out there for many years, it's just that they will become mainstream. The good thing is that techniques have already been developed.
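The fork-join decomposition Goetz described for JDK7 can be mimicked with a plain thread pool: split the work until chunks are small, fork one half, compute the other, then join. A Python sketch of the idea, not the java.util.concurrent API:

```python
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 4   # below this size, just compute sequentially

def fj_sum(data, pool):
    """Fork-join style summation: split, fork, compute, join."""
    if len(data) <= THRESHOLD:
        return sum(data)                         # sequential base case
    mid = len(data) // 2
    left = pool.submit(fj_sum, data[:mid], pool)  # fork the first half
    right = fj_sum(data[mid:], pool)              # compute the other half here
    return left.result() + right                  # join the forked result

with ThreadPoolExecutor(max_workers=8) as pool:
    result = fj_sum(list(range(1, 17)), pool)     # sum of 1..16
```

Real fork-join frameworks add work-stealing so idle workers pick up forked subtasks, which is what makes the fine granularity pay off on many cores.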
Omar Khan discussed this panel:
The last session of interest was a panel discussion on the effect that the open-source movement has had on Java. The consensus was that open source saved Java from mediocrity and J2EE from certain death. The focus of the discussion moved to open source during the current economic conditions. The panel felt that slashed budgets within most companies would allow open source to flourish, at least the projects that are well established. The last point of interest was about how they decide within their companies what to open-source and what to charge for. Bob Lee of Google said that they open-source low-level tools and frameworks and charge for things they build upon them. Others, such as a representative from MuleSource and Rod Johnson of SpringSource, said that they charge when people are already likely to be paying other vendors for their product/services. For example, the quote of the night was from Rod Johnson:
If you use MySQL, Tomcat and Apache as your application stack then you can use Spring for free, and that's great. However, if you are using Oracle RAC for a database and BEA WebLogic for an app server, then you have no right to complain when we charge you for a Spring-to-Oracle-RAC connector.
The point being, if you're already spending money, then why shouldn't we ask to be paid for what we are offering?
Razvan Peteanu wrote about this presentation:
The beginning covered strategies to partition data across databases (with or without a security separation). Max Ross is the main guy behind Hibernate Shards so part of the talk was about it and what remains to be implemented before it's declared GA.
The other main ... shard was about Hibernate Search as a better way to implement free-text searches than the classic SQL queries with %LIKE%. Having worked with a similar solution in the past (i.e. persistence layer + Lucene), it's good to see it integrated with Hibernate. Main advantages: Lucene can handle word variations, has built-in relevancy ranking and, the reason it's included in a scalability presentation, using it moves the text-matching load from the database to the app server, where it's easier to scale. Several slides covered synchronous vs asynchronous updates to the Lucene index; Emmanuel also responded to a question from the audience that Lucene/Hibernate Search is more flexible and cheaper than the full-text search capabilities of some databases - and again, the point is to relieve the database server of this work.
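The difference between a %LIKE% scan and a Lucene-style inverted index shows up even on toy data: the first inspects every row per query, the second builds a term-to-documents map once and then looks terms up directly.

```python
import re

docs = {1: "red leather jacket", 2: "vintage Red guitar", 3: "blue jeans"}

# The %LIKE% approach: scan every row on every query.
def like_search(term):
    return sorted(i for i, text in docs.items() if term.lower() in text.lower())

# Lucene-style: tokenize once into an inverted index, then look terms up.
index = {}
for doc_id, text in docs.items():
    for word in re.findall(r"\w+", text.lower()):
        index.setdefault(word, set()).add(doc_id)

def indexed_search(term):
    return sorted(index.get(term.lower(), set()))

like_hits = like_search("red")
index_hits = indexed_search("Red")
```

Real Lucene adds stemming, relevancy ranking and incremental index updates on top of this core structure, but the scaling argument (index on the app tier, not the database) is already visible here.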
Brendan Quinn had detailed notes from this talk, including:
Some guys from salesforce.com spoke about how they migrated 50 teams, over 700 people, to Scrum -- all at once! Their rationale was that it was "like burning their boat after they had rowed to the other shore", and if some teams moved and others didn't, it would be a recipe for finger-pointing, missed dependency deadlines, and recriminations. Coupled with the fact that they all commit to the same codebase (!!) it sort of makes sense.
Some choice quotes:
- (When a prominent agile coach was asked how to sell agile to upper management:) "I ran a large agile project for (a very large organisation) for two and a half years... the whole time my boss thought it was a waterfall project"
- "agile can be gamed, just like anything else"
- "If you measure velocity by how many tasks you complete, people will complete all the easy but useless tasks. We shouldn't be measuring task velocity, we should be measuring business-value velocity... people do what they are measured by" [the mantra our VC professor Terry has drilled into us: structure drives behaviour]
- "we've gone from one dogma [waterfall] to another dogma [scrum and XP] -- I thought the whole point was that we were supposed to be agile. What happened to thinking?"
Chris Patterson enjoyed the Wednesday night party:
That evening, there was an attendee party at a nearby pool hall. After some snacks and a quick game of pool, Dru and I sat down with Gregor to discuss some messaging concepts. After a quick exchange in Japanese, we earned a seat at the table and started to talk about various conversation patterns that we had identified in our own work. I was surprised that we were thinking along the same lines and that several of the message patterns we had used were going to be covered in the book. We also talked a little bit about Google Protocol Buffers (a platform- and language-independent format for serializing data in an efficient binary stream). We started digging into GPB a few weeks ago as a way to build more system interoperability between Java and .NET when using MassTransit. Based on our progress so far, this is likely to end up in a future release of MT.
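The cross-platform interoperability Chris describes comes from Protocol Buffers' schema-first design: messages are declared once in a `.proto` file, and the `protoc` compiler generates matching Java and C# classes that read and write the same compact binary wire format. A rough sketch (the message and field names here are hypothetical, not taken from MassTransit):

```protobuf
// order.proto -- hypothetical schema; protoc generates the Java and
// C# classes from this single definition.
syntax = "proto2";

message OrderPlaced {
  required string order_id   = 1;  // field numbers, not names, identify
  optional string customer   = 2;  // fields on the wire, so schemas can
  repeated string line_items = 3;  // evolve without breaking old readers
}
```

Because both sides serialize to the same binary format, a Java producer and a .NET consumer can exchange messages without XML or hand-written parsing code.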
Razvan Peteanu enjoyed the conference:
The conference is overall quite good for its focus on architecture and high scalability. It's more than the sheer scale itself: it's the ingenuity seen in the presentations or in the hallways. Some good stuff should come out of it.
The agile track overlaps somewhat with other conferences, yet it's a sign that how software is developed cannot be separated from the software itself. QCon is a pretty young conference, but one with promise. If you want numbers, my estimate based on how many tables were in the lunch room is somewhere between 300 and 350 people. The main organizers from InfoQ were around every day and you could chat with them - the event does not have a 'mega' feel to it.
As did Omar Khan:
In conclusion, QCon was a great experience. The hosts tried their best to make sure things were as tech-focused as possible. There is a need to ensure that speakers tailor their presentations to the audience and keep to the subject of the presentation. The lagging economy did have an effect on the conference; there were 200+ people who had registered but didn't attend (you could see all of the uncollected badges at the registration desk), and if the economy improves I think that next year will be even better. It's truly a conference for developers by developers, which I truly enjoyed. It was great to meet several of the authors of my favourite tech books.
And Kelvin Meeks:
I have found Mecca...the Holy Land.
So often I've paid to attend a conference - and felt that there were spots of "goodness" in the sessions that typically stretch on throughout a long day/week - but rarely felt that I was really getting my money's worth for the not insignificant time and expense that I sacrificed to travel and to attend a conference.
Not so today. Every session I have attended today has been right on target with the particular interests I have as an enterprise architect.
- Matt Aimonetti - QCon was also an awesome event where I met a lot of very interesting people and could measure the Merb interest in the "Enterprise" community.
- David Kinney - Overall, I thought QCon was excellent. While the quality of the speakers was somewhat varied in the sessions I attended, I never felt my time might be better spent checking out a different session, which puts it ahead of most conferences. (I'm a big fan of voting with my feet.) QCon is certainly on my short list of conferences to attend next year.
- Age Mooy - All in all it was very much worth it coming to San Francisco this year and hopefully I'll be back for more inspiration next year.
- Alex Moffat - I enjoyed the conference quite a bit. It's smaller than Google I/O and JavaOne (obviously), so it didn't suffer from the crowding that can make JavaOne such a trial to navigate around. Even so, on several occasions there was more than one talk I wanted to see at the same time, so there was certainly interesting content.
- Ola Bini - As always, it turned out to be a great event, with fantastic people and very interesting presentations on the schedule.
- Nick Sieger - QCon is a fun and well-organized event overall, and I got the impression that the folks present were on the leading edge of "the enterprise", which is exactly the people we need to engage to bring about growth in adoption of Ruby. For that reason, I hope we can kick it up a notch and take another shot at pimping Ruby at the next one. Maybe I'll see you there!
- Steve Vinoski - Just getting ready to fly home from QCon San Francisco. Not surprisingly, it was another great conference, and the organizers told me that attendance was up about 30% over last year. Having a well organized and well executed conference with a large number of great speakers tends to have that effect.
- Chris Patterson - Since the day I arrived it has been great. The depth of knowledge is truly amazing and I'm enjoying some excellent conversations. The attendee mix is (based on my estimation) 50% Java developers, 30% .NET developers, and 20% other languages like Ruby or Python. The diverse nature of the conference is interesting in that many of the sessions are platform-neutral, focusing more on patterns and practices and less on tools.
- Jeremy Miller - I've been at QCon San Francisco most of the week, and I think I can say that content-wise, it's the best "eyes forward" conference I've ever attended. I'm finished with all my speaking and relatively satisfied with how it went.
- Brendan Quinn - The logistics were all great as usual, the wireless came thick and fast, and the food was pretty good -- ya gotta love that mid-afternoon ice cream run to keep the sugar levels high!
- Dan Diephouse - I just got back from QCon in San Francisco. QCon is one of the best conferences around (IMHO). The speakers are great, the content quality is excellent, and the hallway conversations are thought provoking. If only I could've attended more!
In addition to talking about the conference itself, several people wrote about things that happened as a result of discussions or presentations at QCon, including Matthew Podwysocki:
In the past, I've covered a bit about object-oriented programming in F#. I'd like to come back to that series, as there is much yet to cover on this topic. Last week, I spent some time with Erik Meijer at QCon, and he and I both agreed that in some ways F# is a better object-oriented language than C#, given some of the language's flexibility.
Ola Bini also did an interview while at QCon:
At QCon SF last week, I had the pleasure of being interviewed by Fabio Akita. It turned out to become an almost 2 hour long interview that touched on many geek subjects, and ended up with some fairly detailed content about Ioke. If you're interested, and have the energy to hear my opinions about loads of programming languages, you can get it here: http://www.akitaonrails.com/2008/11/22/rails-podcast-brasil-qcon-special-ola-bini-jruby-ioke.
Nick Sieger wondered about how to include Ruby more in the enterprise:
This year's QCon San Francisco conference was my first time attending, and it was an eye-opener for me for several reasons.
The last item relates to my perception that Ruby is not yet seen as a worthwhile tool for enterprise software development. It leaves me with some cause for concern, though it reflects more on the state of the industry rather than on the way Ruby was presented at the conference itself.
What does it mean for Ruby to be "ready for the enterprise"? Does that imply JRuby? Running on the JVM or a Java application server, or even .NET? Reams of XML? Presence of buzzwords, such as JMS or Spring/Hibernate? Or the ability to adapt to or leverage legacy code? All of these?
Erik Rozendaal wondered whether interest in Java was waning:
After two days of QCon you get the feeling that no one is talking about Java anymore. C#, Erlang, F#, Groovy, Ruby, and Scala seem to have taken over. The only new Java stuff being talked about is libraries, application servers, or IDE improvements. No one is talking about the Java language.
Looking back, the last major change to the Java language came with the release of Java 5 in 2004. Java 7 will bring changes, but it is late. The advantage is stability, but the price is that the brightest minds in the industry are starting to leave Java behind.
Omar Khan identified a list of software to keep an eye on:
Part of the purpose of attending QCon was to get in synch with the tech community in regards to technology they have experimented with and have found use for. The following is a list of technology/frameworks/etc of interest that I took away from the conference:
Bjorn Rochel discovered the solution to a problem while chatting with Greg Young:
Does this sound familiar to you? From time to time someone delivers you answers to questions that have been on your mind for quite a while. You didn't find an answer on your own, and most of the people you talked with either didn't care or didn't have satisfying answers for you. When you finally hear the answers you were searching for, you're . . .
- stunned how simple the actual solution is,
- wondering why you weren't able to solve that on your own,
- nevertheless happy and thankful to finally see the missing piece in your puzzle.
Today I had the luck to be able to talk with Gregory Young about (Distributed) Domain Driven Design in general, and in particular about some questions regarding bidirectional mapping from domain objects to DataTransferObjects, and how messaging or eventing integrates with DDD. This is what I took from that discussion . . .
Ari Zilka was inspired to put together a table describing when to use different sorts of clustering:
In yesterday's panel on designing for scale, I polled the audience:
1. How many know what "eventually correct [or consistent]" is - 3 people
2. How many know when to use EHCache vs. Sleepycat - same 3 people
3. How many know the advantages of EHCache async replication vs. JMS - 2 of 3 people
4. How many know how to make memcache transactional - none
5. How many leverage async, event-driven designs in their apps on a daily basis - 1 person
There were about 60 people in the room.
This tells me that there is a lot of danger in putting a solution on the market and leaving the developer to figure out where to use your engine.
QCon San Francisco was a great success and we are very proud to have been able to offer such a conference. It will continue to be an annual event in both London and San Francisco, with the next QCon being around the same time next year in each location. We also look forward to bringing QCon into other regions which InfoQ serves, such as China and Japan. Thanks everyone for coming and we'll see you next year!
We'd also like to invite you to our third annual QCon London conference. The event is taking place March 11-13, 2009. Registration for the 3 day conference is £1,105 until January 15! Register now and save £195. The conference will be held at The Queen Elizabeth II Conference Centre, like last year.