Key Takeaway Points and Lessons Learned from QCon London 2008

In this article, we present the views and perspectives of many of the attendees who blogged about QCon, so that you can get a feeling for what the impressions and experiences of QCon London (March 2008) were. From the first tutorials to the last sessions, people discussed many aspects of QCon in their blogs. You can also see numerous attendee-taken photos of QCon on Flickr.

This QCon was InfoQ's third conference and the second annual one in London. The event was produced in partnership with Trifork, the company that produces the JAOO conference in Denmark. Over 600 badges were printed, with 70% of the attendees being team leads, architects or above. 60% attended from the UK and 40% mostly from around Europe. Over 100 speakers presented at QCon London, including Kent Beck, Martin Fowler, and Erich Gamma. Going forward, QCon will continue to run in the UK around March of every year, and QCon San Francisco (see previous blogger comments) will run November 17-21, 2008.

Table of Contents

Tutorials
   * Patterns for Introducing Agile Practices
   * Domain-Specific Languages
   * Coding the Architecture: From Developer To Architect
   * The Zen of Agile Management
   * Build Scalable, Maintainable, Distributed Enterprise .NET Solutions with nServiceBus

Keynotes
   * Erich Gamma: How Eclipse Changed my Views on Software Development
   * Martin Fowler and Jim Webber: Does my Bus Look Big in This?
   * Kent Beck: Trends in Agile Development

BOFs
   * JCP
   * InfoQ

SOA, REST and the Web
   * REST: A Pragmatic Introduction to the Web's Architecture
   * Using REST to aid WS-* - Building a RESTful SOA Registry
   * REST, Reuse, and Serendipity
   * Diary of a Fence Sitting SOA Geek
   * A Couple of Ways to Skin an Internet-Scale Cat

The Cloud as the New Middleware Platform
   * Amazon Services: Building Blocks for True Internet Applications
   * Application Services on the Web: SalesForce.com
   * Google GData: reading and writing data on the web
   * Yahoo Pipes: Middleware in the Cloud
   * Panel: Programming the Cloud

Banking: Complex High Volume/Low Latency Architectures
   * Technology in the Investment Banking Space
   * Keeping 99.95% Uptime on 400+ Key Systems at Merrill
   * Real-time Java for Latency Critical Banking Applications
   * From Betting to Gaming to Tradefair
   * LiquidityHub

Programming Languages of Tomorrow
   * The Busy .NET Developer's Guide to F#
   * Haskell: Functional Programming on Steroids
   * Functions + Messages + Concurrency = Erlang
   * The Busy Java Developer's Guide to Scala
   * Open Space session

Evolving Java
   * Concurrency, Past and Present
   * Blending Java with Dynamic Languages
   * Evolving the JVM
   * The Cathedral, the Bazaar and the Commissar: The Evolution of Innovation in Enterprise Java
   * Evolving the Java Language

Solution Track
   * Architectural Implications of RESTful design
   * Introducing Spring Batch
   * Panel: Open Source and Open Standards
   * Panel Discussions: Architecting for Performance and Scalability
   * Clustered Architecture Patterns: Delivering Scalability and Availability
   * Testing by Example with Spring 2.5

Browser & Emerging Rich Client Technologies
   * GWT + Gears: The Browser is the Platform
   * The DOM Scripting Toolkit: jQuery
   * Tackling Code Debt

Agile in Practice
   * Managers in Scrum
   * Agile Mashups
   * Beyond Agile
   * A Kanban System for Software Engineering

XpDay Sampler
   * Measure for Measure

.NET: Client, Server, Cloud
   * Building Smart Windows Applications
   * Building Rich Internet Applications
   * Windows as a Web Platform

The Rise of Ruby
   * Panel: When is Rails an Appropriate Choice?

Effective Design
   * Intentions & Interfaces - Making Patterns Concrete
   * A Tale of Two Systems
   * User Interfaces: Meeting the Challenge of Simplicity
   * Effective Design

Architectures You've Always Wondered About
   * eBay's Architectural Principles
   * Architecture in the Media Production Workflow
   * Behind the Scenes at MySpace.com
   * Market Risk System @ BNP Paribas

Domain Specific Languages in Practice
   * External Textual DSLs Made Simple

Interviews

Social Events

Opinions about QCon itself

Takeaway Points

Video

Podcast

Conclusion

Tutorials

Patterns for Introducing Agile Practices

Michael Hunger related his thoughts on this tutorial:

On Monday I was delighted to attend Linda's tutorial. It had a very personal touch and was very interactive. She presented the patterns for introducing new ideas into organizations using a play, with us participants as actors. It was actually the story of her own experience introducing Design Patterns back in 1996. During the discussion many of the subtleties of the patterns (i.e. context and forces) were addressed. Having us contribute by playing and asking lots of questions helped a lot to really absorb the essence of the patterns. It was a lot of fun.
[...]
Linda is a great speaker with deep understanding and lots of wisdom to share. Thanks a lot for the tutorial.

Domain-Specific Languages

Leonardo Borges gave a detailed summary of this tutorial, including:

Martin Fowler started by discussing what DSLs are and giving some examples that many of us use in our day-to-day job, like the XML configuration files in the Java world. They are a kind of DSL: they have their own keywords and syntax in order to express information that will be used, for instance, to configure an underlying framework.

The problem with XML is that it becomes hard to see the overall behaviour behind it. It's not easy to understand the purpose of an XML file just by looking at it for the first time. There is too much "noise" - things that get in the way of readability. YAML files are a much more readable alternative to XML.

Ola Bini also attended this tutorial and shared his thoughts:

I've seen bits and parts of this tutorial before, but seeing as the three speakers are working on evolving it into a full and coherent "pedagogical framework" for teaching DSLs, the current presentation had changed quite a bit since the last time. I really liked it and I recommend it to anyone interested in getting a firm grasp of what DSLs are. Having Rebecca talk about external DSLs in the context of parsers and grammars makes total sense, and the only real problem I would point to was the time allotted to it. Her part of the subject was large enough that 75 minutes felt a bit rushed. Of course, I don't see how Martin's or Neal's parts could be compressed much more either, so maybe the subject actually is too large for a one-day tutorial? Anyway, great stuff.

Mark Edgington wrote about this tutorial as well:

My takeaway was that the use of DSLs needs to get more prominence, in a similar way to Agile development, which has taken years to gain mainstream prominence. Through the use of DSLs developers will gain skills and solve problems in a more elegant fashion, so it's a definite win, and with a Martin Fowler book on the subject it should get wider prominence.

Cleve Gibbon also had several take-away points, including:

  • There are a lot of unknowns surrounding the whole area of DSLs, and what Fowler and co. are doing is exploring the problem space and proposing options.
  • Some languages (Ruby, Groovy, Scala) lend themselves more easily to writing readable internal DSLs
  • There were a LOT of Ruby guys in the room, including the leads on RSpec, an internal DSL for Ruby testing
  • The definition of DSLs is wide open to interpretation and a lot more will happen in this space over the coming months
  • I need to do some more reading around the concept of language workbenches. Very interesting stuff.

Jay Fields was inspired to write an article about DSL Simplexity:

At QCon London I caught the Domain Specific Language (DSL) tutorial by Martin Fowler, Neal Ford, and Rebecca Parsons. While covering how and why you would want to create a DSL, Martin discussed hiding complexity and designing solutions for specific problems. As Martin went into further detail all I could think was: Simplexity.

Anders prefers simple all the way down. Of course, hopefully we all prefer simplicity; however, the devil generally lives in the details.

Coding the Architecture: From Developer To Architect

Simon Brown published a document used during this tutorial on his blog:

During our tutorial last week at QCon, we asked attendees to define the software architecture for a small software system and provided a handout containing some guidelines. Since this may prove useful for other people, we're making Software Architecture Document Guidelines v0.1 available for download.

This document was also written about on InfoQ.

The Zen of Agile Management

Mark Edgington attended this tutorial:

David Anderson came at agile almost from the standpoint of the standard problem software project. He looked at how such projects could be edged towards the agile world through clear identification of the value stream (process) and the examination of metrics around it. His key takeaways were that quality should be the focus and that reducing work in progress (WIP) leads to efficiency: in effect, shorter cycles or sprints work far better than large batches of work. Having concentrated on agile methodologies, this viewpoint of how to get to agile inspired lots of thought; it is often the case that a covert agile approach must be followed, where a full and open method, such as Scrum, cannot be taken.

Build Scalable, Maintainable, Distributed Enterprise .NET Solutions with nServiceBus

Udi Dahan had some thoughts on his tutorial:

I gave a full day tutorial on nServiceBus and we had a full house! The tutorial was about 90% how to think about distributed systems, and 10% mapping those concepts onto nServiceBus. I made an effort to cram about 3 days of a 5 day training course I give clients into one day, but I think I was only about 85% successful. People didn't have the time needed to let things really sink in and ask questions, but the lively forums and skype conversations available will probably do the trick.

Keynotes

Erich Gamma: How Eclipse Changed my Views on Software Development

Several people commented on this keynote, including Jan Balje:

A good keynote by a famous name about the development of Eclipse, with points about architecture, open source, process etc. At the end he demonstrated Jazz, a really (really) big environment for large distributed software development. It looks interesting; alas, it is not open source.

Jevgeni Kabanov:

One interesting thing I didn't know was that Eclipse now includes API tools that allow you to annotate APIs with version information and fine-grained access control and then have the Eclipse IDE check these for you.

So WTF is Jazz? Explanation starts with great clouds of fog: "integration", "community" and even "Eclipse experience". Scary diagrams with a lot of arrows. Starts sounding like Twitter on steroids — you can follow events or channels, events are produced from both web and Eclipse plugins. You can also track basic statistics like defect progress. Doubt you can get updates to IM or SMS, though :)

Simon Brown:

The thing I will say is that it's always good to hear the real-world stories about iterative development - in this case, the Eclipse team work on 6 week iterations and don't necessarily have a fully shippable product at the end of them. Erich wrapped up the keynote with a demo of the new Jazz platform, which pulls together all of the tools in your standard development suite into a fully integrated workflow. This looks like a rehash and enhancement of Rational's Unified Change Management (UCM) platform using Eclipse as the front-end. That might not be totally accurate (I think I saw a Subversion bridge in the slides), but you get the point.

Steven Mileham:

Opening talk from Erich Gamma on how the Eclipse IDE has changed software development for him and his team in the seven years since he began writing it. Interesting stuff, showing the transition from a closed source project to an open source community project, and how they migrated from a waterfall methodology, which allowed a slow build-up of development only to end in panic near the deadline, to a more iterative agile methodology that forced the team to focus more on delivery and shipping. Finally he introduced Jazz as a team collaboration tool which integrates very tightly with Eclipse.

Libor Soucek:

The biggest takeaway was that a program release should stick to a firm release date. The best date from Eclipse's point of view is June, as this creates the smallest disruption due to holidays and other events. They develop in six-week "sprints" (1 week planning, 4 weeks development and 1 week integration and code finalization/testing) throughout the year to add new features for the yearly release. One month before release they fully dedicate all development resources to final code polishing and testing.

Also quite interesting in Erich's presentation was his description of how quickly a strong Eclipse community formed. The community fully took over support of newly joined developers/users, writing manuals, etc., which freed the original IBM development resources to again do what they know best - full-speed development. This is definitely a very interesting model to consider for similar projects.

and Maarten Manders:

Erich Gamma's keynote was basically a promotional talk for IBM Rational's newest product, called Jazz. It's project management software, like Jira (the one we use), but it goes far beyond it in terms of integration, because it works so well with Eclipse. Furthermore, it seems to have a good system for managing development teams. All in all it seems like a nice product - I especially like the fact that it was made with agile development techniques in mind. The big question that remains is: how well does it work with PHP?

Tim Anderson also gave a detailed summary of this keynote, including:

Gamma spelt out the Eclipse philosophy, the starting point being that everything is a plug-in; that APIs matter a lot and it's better to get a small API right rather than get it wrong and have to support it forever.

He then talked about iteration, a key tenet of agile development. He showed a great slide which charted the progress of some projects, from "all the time in the world" at the beginning, to "say goodbye to your loved ones" at the end, followed by total exhaustion after the thing is shipped. Iterative development with continuous builds and sign-offs every 6 weeks is less stressful and more productive.

Martin Fowler and Jim Webber: Does my Bus Look Big in This?

Jim Webber had some comments on his keynote:

The meme of the conference for me came from the keynote talk I did with Martin Fowler. The middleware vendors aren't giving us big powerful software as they think, but are instead proffering big flabby software, complete with what came out during the talk as "enterprise man boobs."

The slides are available at the QCon Web site, but they mightn't make too much sense without the video (which the QCon guys will release in the coming months), apart from the picture of the flabby bloke whose man boobs will forever epitomise enterprise middleware.

Many people wrote about this keynote, including Andrew Whitehouse:

I have personally experienced what I believe is a lot of unnecessary complexity in Enterprise software, and it is refreshing to see Martin and Jim cut through this to come up with a set of principles for effective (lightweight) delivery. I'm also pleased that ThoughtWorks are actively promoting (J)Ruby on Rails in an Enterprise context as this seems a natural successor to "traditional" Java development, and in my opinion they seem to be one of the most enlightened consultancies on how to deliver Enterprise software effectively. (They're also agile.)

Libor Soucek:

The day finished with a great, entertaining keynote given by Martin Fowler and Jim Webber on the theme of ESB use in SOA-based applications. In many respects it reminded me of presentations from Don Box.

Even though the presentation had essentially the same information value as material I had already seen recorded on the internet, for example here, it was nevertheless great fun. They put on quite a show and made a great marketing case to convince all the attendees to prefer internet-based message integration via the standard HTTP protocol for SOA over ISV-specific solutions. I would say everyone must have got this message clearly and should seriously start to think about how to use it in their own solutions.

Patric Fornasier:

Jim and Martin's keynote, called 'Does my Bus look big in this', mercilessly dissected current middleware approaches. It was an extraordinarily sharp-witted and entertaining talk in which they used their entire linguistic repertoire to pick on middleware vendors and conceive beautifully eloquent metaphors such as "man boobs" (instead of bluntly calling them fat, ugly, bloated middleware products).

Stefan Norberg:

After a busy day at the office I managed to get to QCon just in time for the keynote by Martin Fowler and Jim Webber. Both of them are, as you probably know, great speakers and the keynote was entertaining. They had slides on why EAI, SOA and ESBs are a joke (as a concept, to maintain), but I can't say they provided any alternatives. They basically said:
  1. Focus on business value from the first iteration
  2. Keep it simple
In reality though, you often get the most business value by buying 3rd party software. And it needs to be integrated... Not that I enjoy it though. If you build it yourself, then obviously light HTTP services is the way to go.

Maarten Manders:

Imagine Martin Fowler (in his beloved leather pants) dancing on stage while Jim Webber is giving another rant on bloated enterprise middleware (check out his last take on ESBs). Of course, I'm talking about the evil Enterprise Service Bus, which has become so fat that it's grown enterprise manboobs! The slides speak for themselves, get them here! Unfortunately, there's no video yet.

And Tim Anderson:

TIBCO, BizTalk, webMethods, you name it, "they're a pain in the neck to use", said Webber.

Enterprise Service Bus? Should be called the "Erroneous Spaghetti Box". SOA? "A dog's breakfast."

According to Fowler and Webber, the Web is the answer. "The dumbness of the internet is a real win... it allows you to do things that you did not think of." The Web is ubiquitous middleware, incremental and low risk.

     Squid is your Enterprise Bus... We're not going to need all this crazy middleware that middleware vendors try to sell us. We don't like ESBs... The big up-front middleware approach just isn't very sensible.

Kent Beck: Trends in Agile Development

Kent Beck's keynote was another popular discussion topic, with several people writing about it such as Joachim Recht:

One interesting bit came up in regard to discipline. I've always said that XP and agile processes take discipline to implement and use. Kent Beck's take on this was that it was just the opposite - not doing XP was hard for him. Instead, it's more or less a question of habit, which is where the problem often lies: Changing part of yourself requires an investment, but it's not completely clear when the investment will yield a profit. Ironically, this economical argument is also used to promote XP: push the cost into the future and pull the profit closer - for example by releasing often, not gold plating, and so on.

Peter Gillard-Moss:

There were lots of interesting slides about the rise in tests, quick releases and lots of other agileness, but the most interesting aspect of the talk for me was the rise of the new generation of tech-savvy business professionals. The old "wizards", detectable by their strange, socially inappropriate behaviour, are out, as a generation of Nu-Geeks with social skills like listening, teamwork and emotional intelligence rises to the challenge of making businesses happy.

Patric Fornasier:

Kent Beck's keynote was also quite interesting. He thinks that in the future we will write (even) more tests, deploy applications more frequently (apparently Flickr does it every 30 minutes!), work in teams that are more distributed, and solve more complex problems. He also believes that with the rise of a new generation of tech-savvy business people, software developers will increasingly lose their 'wizard status' and need to invest more in their non-technical skills.

Steve Vinoski:

Kent Beck's keynote was excellent. It was about developer responsibility, developer integrity, and the relationships developers have with those around them (here's a good summary). Extremely insightful, not unexpectedly of course, and covering important topics that are unfortunately often taboo among technical folk.

Mark Edgington:

Highlighting that the focus should be on social rather than technical skills, Kent coaxed developers towards integration with the business people. He pointed out that honesty works, and that hiding behind complexity and changing requirements is not the best way to build business partners and get them to trust in the software you're developing. This is a simple and yet key problem for many.

Tim Anderson:

Kent Beck is really a relationship consultant, or should that be counsellor? This is not a bad thing. Beck gave a keynote this morning here at QCon and talked a bit about techie topics like frequent deployment (he claims that Flickr deploys every half an hour) and creating more tests more often, but the main focus of his talk was relationships within the development team and between the team and the business people (if they regard themselves as separate).

Beck says that the ubiquity of computing is changing the typical characteristics of a programmer. When only geeks had computers, programmers were inevitably geeky - and for whatever reason, that often meant something of a social misfit. Today everyone grows up with computers, which he says makes programming more accessible to non-geeks, who have better social skills.

Antonio Goncalves:

He didn't talk much about the technical skills that a team should have but more about social skills. There is no need to have a very clever technical guy if he can't work within a team. One person can ruin the productivity of the entire team. But that brings another recruitment issue: how do you evaluate social skills? This seems underestimated, but as Kent said, it can be learned. For him one of the most important skills is being able to listen.

And Steven Mileham:

The day began with an interesting talk on Agile development methodologies (yet again). The different aspect of this presentation, was that it was more focused on the social skills needed in a team in order to work in an iterative, collaborative manner. His main point seemed to be to focus on what you as a team are good at, and making sure that your energies go into that, rather than inventive spin and lies or excuses as to why performance wasn't what it should have been.

Phil Manchester of Reg Developer also wrote an article about this keynote:

Beck noted wryly that traditional approaches to software development ran contrary to economic realities. Yet, despite bold attempts at change - such as experimental work on URL-driven design (UDD), literally generating HTML code in real time in response to a web request, during the early days of XP - he has settled on a measured approach.

"Received wisdom is that if you spend time up front getting the design right you avoid costs later. But the longer you spend getting the design right the more your upfront costs are and the longer it takes for the software to start earning. So a rational model of software is to design it quickly - the economic pressure to improvise presents an interesting challenge," Beck told QCon.

BOFs

JCP

Patrick Curran described the JCP BOF which happened on Thursday night:

The discussion ranged over a variety of topics, but the primary focus was on how individuals and JUGs could get involved in the JCP. Several people expressed concern about what they saw as obstacles to entry (for example, the legal "participation agreement" that members must sign and which many people find intimidating), and we all recognized that it is more difficult for an individual to get involved than for someone whose activity is sponsored by their employer. However, since we call our organization the Java Community Process I am determined to do whatever I can to encourage and enable individuals to participate. My primary reason for attending QCon was to meet with a broad cross-section of Java community members and I'm glad to report that I was able to do so. I'd like to thank the QCon organizers for giving us this opportunity, and of course I also want to thank everyone who attended our sessions.

Antonio Goncalves also related his impression of the BOF:

It was a very informal BoF, as I like them. About twenty people turned up and, with Patrick Curran, Rod Johnson, Peter Pilgrim and others, we talked about the JCP. Rod was less harsh, admitting that the JCP has opened up a lot. I shared my experience of being an expert group member, and others theirs of being JSR leads. We talked a lot about transparency, open mailing lists, wikis… ideas that would bring more transparency to the JCP. I was surprised to see that the spec lead does more or less what he/she wants. There are even some JSRs that already have a public mailing list. I asked Patrick what percentage of JCP participants are individuals. I was expecting a figure between 10% and 20%, but no, it's three quarters. I'm not the only individual involved in it then ;o)

InfoQ

Dionysios G. Synodinos described the InfoQ BOF which also took place on Thursday night:

At this BOF I had the opportunity to meet the faces behind InfoQ and learn about how it evolved and its future directions. It was a friendly interaction, where several participants gave their opinions on how this information-rich community site can grow.

The BOF ran so long that the team from Software Engineering Radio, who were holding the next BOF in the same room, arrived, and we all had the chance to interact together until the place had to close.

SOA, REST and the Web

Erik Johnson had some thoughts on REST which arose after attending this track:

The public "API" for a RESTful application is its URI address space. You can invent a list of URIs mapped to resources and state sequences all you like. But the reuse potential is limited to whatever your callers can get out of that URI space. Like REST, SQL databases have a uniform interface. But look at the practically unlimited variety of resources you can access. Obviously, a REST URI shouldn't be a SQL statement and I'm not trying to shoehorn XQuery into a URI. All I'm saying is that a URI space can incorporate parent-child and relational characteristics from a data model – using relational database behavior as a guide. This has been a key aspect (for 8+ years, BTW) in developing URI strategies for our products.

The emerging specs and toolkits, like WADL and WCF, feature URI template constructs. But URI templates have no notion of resource linkages (parent, relational, or otherwise) and that limits their effectiveness. At QCon, there was little consensus that WADL was the right way to describe a REST application. But I think REST description languages and resource types are coming, and I'd like their creators to at least consider resource linkage features for URI templates. It's all been done before.
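To make the idea of parent-child linkage in a URI space concrete, here is a minimal sketch using JAX-RS annotations; the resource names and paths are hypothetical illustrations, not something Erik's post prescribes:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// A URI space with parent-child structure: /customers/{customerId}/orders/{orderId}.
// The hierarchy in the path mirrors the relationship in the data model.
@Path("/customers/{customerId}")
public class CustomerResource {

    @GET
    @Produces("application/xml")
    public String getCustomer(@PathParam("customerId") String customerId) {
        return "<customer id='" + customerId + "'/>";
    }

    // A child resource reached through its parent, making the linkage explicit in the URI.
    @GET
    @Path("orders/{orderId}")
    @Produces("application/xml")
    public String getOrder(@PathParam("customerId") String customerId,
                           @PathParam("orderId") String orderId) {
        return "<order id='" + orderId + "' customer='" + customerId + "'/>";
    }
}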

Mark Little also reiterated two points about REST and HTTP that he mentioned at the conference:

  • I've been developing applications on the Web since it was first released: being at University at the time, I had a lot of freedom to play. I even wrote a browser in InterViews! (Anyone else remember gopher?) Anyway, I remember being glad when the first URN proposal came out because it looked to address some of the issues we mentioned at the time, through the definition of a specific set of name servers: no longer would you have to use URLs directly, but you'd use URNs and the infrastructure would look them up via the name server(s) for you. Sound familiar? Well fast forward 10 years and that never happened. Or did it? Well if you consider what a naming service (or trading service) does for you, WTF is Google or Yahoo?
  • My friend and co-InfoQ colleague/editor Stefan has another nice article on REST. In it he addresses some of the common misconceptions around REST, and specifically the perceived lack of pub/sub. You what? As he and I mentioned separately, it seems pretty obvious that RSS and Atom are the right approach in RESTland. The feedback I got at QCon the other week put this approach high on my pet projects list for this vacation, so I've been working on that for our ESB as well as some other stealth projects of my own.

Tim Anderson also discussed the track as a whole:

Stefan Tilkov led a track on SOA, REST and the Web. Now, the general theme of this (following on from the Fowler/Webber session the day before on the shortcomings of the Enterprise Bus) is that SOAP and WSDL and WS-* have failed to deliver and that REST is fundamentally a better approach to designing distributed inter-application systems. What's wrong with WS-*, SOAP and WSDL? Too many standards; too complex; too brittle; too incompatible; too few free and open source implementations; leaky abstractions; hijacked by middleware vendors who have an interest in keeping technology arcane and expensive.

By contrast REST is being embraced for all sorts of reasons, ranging from purist arguments about the value of resource-based computing where everything has an URI, to pragmatic arguments along the lines of "it works, I can use it, I understand it." However, if you poke at some of the solutions which are described as REST, it turns out that some are more RESTful than others - using HTTP as a transport for POX (plain old XML) is not necessarily REST.

Steven Mileham enjoyed learning about REST:

The rest of the day I spent in the "SOA, REST and the Web" track. Having now finally grasped the concept of REST services, I want to go back and rewrite all the web services I've already built. Whereas traditional "Web Services" focus on defining specific interfaces and APIs which must be continually maintained, so that if the back end is changed the consumers of that service must be updated, REST utilises the standard operations of HTTP:
  • GET
  • PUT
  • POST
  • DELETE
By using these methods, any HTTP client can now consume your service. The service is identified by a URI which describes the resource, e.g. http://example.com/orders/2007 would return all orders from 2007.
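As a small illustration of that uniform interface (an editorial sketch in plain Java; the URL is the hypothetical one from the quote above), fetching such a resource needs nothing more than standard HTTP:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGetExample {
    public static void main(String[] args) throws Exception {
        // GET the "orders from 2007" resource; any HTTP client will do.
        URL url = new URL("http://example.com/orders/2007");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}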

Johan Eltes of Callista wrote a long post discussing the challenge of complexity versus simplicity in an API, and commented:

Former Web Services evangelists like Steve Vinoski (IONA, Verivue) and Mark Little (HP, WS-* spec lead, JBoss) are spending their time bashing SOAP and WS-*. HTTP Web Services and the architecture represented by REST are the new reaction to the over-complicated best practice. REST has been used for many years and is core interfacing technology at global players like Google. Amazon is also increasing its use of REST. Looking at the history, is there anything specific about REST that prevents it from starting its own journey up the complexity scale, repeating the history of its mainstream predecessors? Or will it fail due to an inability to accommodate new requirements? Are we heading towards REST-*?

REST: A Pragmatic Introduction to the Web's Architecture

Jan Balje had some comments on this presentation:

As this was a new concept to me, I decided to listen in. A good talk, although I didn't completely understand it in one go. It seems REST is a set of five principles which you can apply when developing web applications. This gives you a lot of technical possibilities. But as far as I can see it's an alternative to web services. An important new trend already, and we still haven't finished with the previous one.

Dionysios G. Synodinos also enjoyed this presentation:

A great presentation on the basic principles of REST and why you should care about it. It is interesting to see that after so much technological elaboration the last few years, it is all coming back to the basic nature of the WWW.

Using REST to aid WS-* - Building a RESTful SOA Registry

Stefan Tilkov wrote a detailed summary of this talk, including:

  • Talk is about applying REST to a real application problem - managing web services metadata
  • Hasn't seen Web UIs that are also APIs - e.g. people can't set Accept headers for debugging purposes in their browsers
  • Benefits of AtomPub: clear behavior of POST, GET, PUT, DELETE
  • Maybe there could be a generalized REST or Atom query language
  • Main problem in SOAs: trust as a root cause
  • WS-* metadata isn't enough for the real world - some real life annotations are needed
  • Lifecycle handling - still basic, more coming in version 1.1

REST, Reuse, and Serendipity

Stefan Tilkov wrote a detailed summary of this talk, including:

  • "The definition of a legacy application is 'one that works'" -- QoTD!
  • EAI products are typically centralized hubs, proprietary and costly
  • Code generation (whether from WSDL or from code) creates deceptively significant consumer-service coupling
  • Integration problem summary: proprietary approaches are too expensive; standard approaches focus on implementation languages, not distributed systems issues; new interfaces -> new application protocols (something he never notices); ad hoc data formats coupled to interfaces -- all of this inhibits reuse
  • A question to consider -- was the pipe invented on day 1? Or discovered later -- serendipity?
  • Most of the stuff Steve's done in the past -- building ORBs and such -- dealt with the effects of having a specific interface (generating code, creating the runtime infrastructure). Most of this stuff disappears in REST
  • Praise for the REST dissertation - "the clearest architecture document he's read"
  • Summary: RPC-oriented systems try to extend language paradigms over the wire -- encourages variation (in methods, datatypes), which can't scale
  • REST is purpose-built for distributed systems, properly separates concerns and allows constrained variability only where required

Antonio Goncalves also gave a summary of this talk:

He started with an architectural slide showing interoperability between different systems using DB, SMTP, HTTP, MOM (expensive), ESB and EAI (same thing, just relabelled), JCA, RPC (ignores partial failure), JAX-WS (marshalling/unmarshalling)... Before talking about REST, Steve talked a bit about Unix pipes. They have a very uniform interface and standard file descriptors. Any command can take something as input (stdin), produce something as output (stdout) and report errors (stderr). That's why we can combine them in any way. REST also has a uniform interface (GET, PUT, POST, DELETE), and you can pipe resources and encourage the combination of orthogonal applications.

Diary of a Fence Sitting SOA Geek

Stefan Tilkov also wrote detailed notes about this talk, including:

  • 2000-2005: spent 5 years going backwards to distribution transparency - it feels like 1970 all over again
  • Distribution must be explicit, otherwise you're not able to deal with failures
  • Problems with tight coupling introduced by trying to make a remote interaction look like a local interaction
  • uniform interface enables generic infrastructural support
  • specific interface allows for more limited generic support
  • Standards: The Web is a series of standards, universal adoption has to count for something, REST/HTTP is ubiquitous

A Couple of Ways to Skin an Internet-Scale Cat

Stefan Tilkov had detailed notes from this talk, such as:

  • Schizophrenic on whether or not to prefer messaging or Web
  • The XML fairy sprinkles pixy dust (which may in fact be crack cocaine) on your enterprise systems
  • Not everything needs to be an OASIS standard. We know not to take a leak in public. (He said this.)
  • Serendipity is great - don't let the RESTafarians tell you different
  • Innovation at the edges of the Web - not by some central design authority such as the W3C
  • With changing contracts as part of a resource, we can't be too imperative anymore
  • Summary: both the Web and Web services community suffer from piss-poor patterns and practices and awful tooling

Patric Fornasier also commented that this talk "showed some pretty interesting ideas that use existing Web infrastructure (i.e. no WS-*) for integrating applications and realizing business workflows", and Mark Edgington said that this talk was "as expected a full-energy, opinionated talk on why REST, together with the internet as your enterprise bus, is leaps and bounds above anything vendors or WSDL and the WS-* (death star) specs have to offer".

The Cloud as the New Middleware Platform

Maarten Manders described what he saw as a new method of application development based on the cloud:

Need some more storage? Take S3. Need to quickly scale up with another 20 servers? Take EC2! Need to get to a user's mails, calendar and other stuff? Use the services of Facebook, Google and Yahoo. In the end, just mash it all together with Yahoo! Pipes! I think it was Nati Shalom who made this interesting remark about cloud computing: developing new applications yields very small risks nowadays, because it's so cheap and easy to plug together your application. If you stumble, you won't fall hard. And if you succeed, the cloud will do the heavy lifting and help you scale out. In my opinion, this could be the next big leap to make (web) development more agile again, after the rise of dynamically typed languages.

Filippo Diotalevi said that this track was "the most interesting track I attended at QCon", and that:

Cloud computing can be seen in two different ways:
  • from a developer/architect perspective, it is an architectural style that relies on the idea of having a "cloud" of (unlocalized, dynamic, distributed, unreliable but redundant) services that can be discovered and used by applications
  • from a "user" perspective, it can be seen as a way of creating new services with no (or just a few lines of) code, simply by wiring together different services and content providers freely available on the cloud (the internet)

Amazon Services: Building Blocks for True Internet Applications

Jan Balje described how Amazon's Services are removing some of the excuses for project failures:

Most programmers or students use lack of hardware resources as one of the reasons their project did not meet the expectations the teachers (or they themselves) initially had of it. If we had more computing power, they say, we could have made this or that feature work, we could have got some more work done in the little time that was available for the project, or the query would not have taken as long as it does now.

Jeff Barr from Amazon.com has put the lie to these kinds of arguments. Being, in his own words, a real web-service evangelist, he introduced the gathering to the other Amazon.com, the one that at the moment has three data centers (two in the US, one in Ireland) that enable everybody to get as much computing power as they need on the fly, for a very small amount of money. Amazon has created web services that take care of all the muck (as the other guy from Amazon, Jeff Bezos, used to call it) of programming, such as load balancing, initializing servers and services, and that kind of more mundane work. Once registered, users can fire up servers using a Firefox extension and ssh to them immediately. If need be, another server can be fired up using the exact configuration of the first one.

Jevgeni Kabanov gave an overview of the Amazon services described during the talk, including:

The first part of the talk is about S3 file hosting, which is done through a Firefox plugin (which I mistook for FireFTP first). You can upload files, assign ACLs, get the URL for publishing and pay proportionally to storage used, requests done and bandwidth used (all three have assigned fees). Nothing technically fancy, but cool nevertheless.

The next part is about EC2, which is on-the-fly virtual server renting. You basically store a (special) disk image on the S3 service and then boot a number of virtual servers from it. You get root access to your servers and pay per hour of use. You can add/remove servers both programmatically and from a Firefox extension.
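For readers curious what "programmatically" looks like, here is a minimal sketch using the AWS SDK for Java; the bucket, key and image names are hypothetical, and the SDK shown here post-dates the talk, which demonstrated the services through Firefox extensions:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.File;

public class AwsSketch {
    public static void main(String[] args) {
        // S3: store a file; you pay for storage, requests and bandwidth.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.putObject("example-bucket", "backups/data.zip", new File("data.zip"));
        System.out.println(s3.getUrl("example-bucket", "backups/data.zip"));

        // EC2: boot a virtual server from a stored machine image, paid per hour of use.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        ec2.runInstances(new RunInstancesRequest()
                .withImageId("ami-12345678")    // hypothetical image id
                .withInstanceType("m1.small")
                .withMinCount(1)
                .withMaxCount(1));
    }
}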

Steven Mileham was surprised to learn about Amazon's services:

I wasn't aware that Amazon even had anything in this sector, I was expecting services for e-commerce to be honest. What they actually covered though, was a completely scalable server infrastructure upon which you could run any application you wanted. Using their server farms, they host numerous virtual servers, which for a reasonable fee, can be dynamically created, clustered and utilised for an arbitrary amount of time, charged by the hour and storage. Creation of servers was entirely scripted to allow for scaling when demand reached a specific point or more storage was required.

An anonymous attendee looked at the cost benefits of using these services:

Given the costs and inflexibility of typical, old-style computing centres, it is certainly an interesting option for us. A quick calculation tells me that replicating the part of our infrastructure where Amazon might be a good idea would cost us about $900/month. I don't know the exact figures of our current data center provider, but I'd be surprised if it was that low.

Mark Edgington also attended this talk:

I already use Jungle Disk, which is built on Amazon S3 (the Simple Storage Service). But this talk went through the entire set of services, giving enough insight into each to provoke thought as to potential uses. Of great interest to me was the Elastic Compute Cloud, allowing for fast scalability and setup, with a time, bandwidth and computing-power pricing model. The up-and-coming SimpleDB, an object database, looked very interesting.

Application Services on the Web: SalesForce.com

Johan Eltes of Callista discussed some thoughts that this presentation brought to mind:

So SalesForce is just an example of an application built on the Force.com SaaS platform.

The platform has a lot of nifty 4GL features that boost development of business applications (the domain of Force.com). The more I heard about Force.com, the more it made me think about SAP's mySAP ERP application and the business application platform on which SAP applications have been built for ages.

The loop was closed when the SalesForce architect revealed a proprietary business logic language named Apex (the SAP business logic language is named "ABAP"). As SaaS grows, it will be interesting to see if Force.com becomes the "SAP of SaaS".

Google GData: reading and writing data on the web

Mark Edgington summarized this presentation:

The Google Data API talk concentrated on the decisions behind the selection of REST over SOAP; basically, REST's four operations (GET, PUT, POST and DELETE) are likely to cover 90% of your needs. It also covered the extensions they have developed around query, authentication, concurrent operations and batch updates. These concepts were tied in nicely to examples of use and comments regarding the benefits of building on or with standards, less need to document being a big one.

Yahoo Pipes: Middleware in the Cloud

Mark Edgington had some thoughts after this talk:

I posted about Pipes about a year ago and it has since increased its modules from 20 to 50 and makes up 1/3 of all mash-up calls to Google. I really need to play with it some more; it really is very cool, bringing a lot of power without the need to code and enabling those who can code to spend more time on the applications that consume the data.

Panel: Programming the Cloud

Mark Edgington described this panel as "Without a doubt the highlight of the day so far":

The panel of the day's presenters covered the whole spectrum of cloud computing, from the current position to future issues. I have five pages of notes from this, so it is not one for my N810 or my thumbs will go dead. The key points of interest to me were that the cloud is almost a renewal of some old technology ideas that did not quite make it, mixed in with standard tried-and-tested ideas and innovative pricing. If there is one key issue, it has to be security (trust); I think it would only take one major security breach (loss or theft of data) to take down a company. Many will have to base themselves firmly around trust, so it is one to watch.

Mark also discussed the Rubik's cubes which were given to each member of the panel.

Banking: Complex High Volume/Low Latency Architectures

Libor Soucek had some overall thoughts on this track:

The banking track was very interesting for me, not only because I work in exactly the same domain, but also because the challenges imposed by high-volume/low-latency systems demand a very well balanced architecture with extremely careful selection of the technology used. Moreover, in this domain some of the latest and greatest emerging technologies are not always usable (for example dynamic languages, WS-*, etc.).

Technology in the Investment Banking Space

Simon Brown gave an overview of this talk:

First up was John Davies, who jumped in at the last minute because the speaker for that slot couldn't make it to the conference. Instead of a session about domain specific languages, John presented an overview of technology within the investment banking space. It was a really interesting talk and very nicely summarised many of the trends that we've seen over the past few years (e.g. compressed on-the-wire message formats rather than XML, etc). The key takeaway point for me was that you need to design for scalability. This is one of the reasons why I think it's important that software systems have an explicit and intentional architecture, with somebody taking responsibility for it.

Libor Soucek also had some takeaway points:

Actually, to my surprise, none of the applications presented in the "banking" track even touched Windows and/or .NET-based systems! According to John Davies' presentation, 80-85 percent of applications in this domain are actually written in Java. The rest of the "market" share is predominantly occupied by C/C++ due to its performance/memory capabilities and predictability of execution (all "standard" GC-based environments struggle with unpredictable execution times in near-real-time, low-latency applications).

Keeping 99.95% Uptime on 400+ Key Systems at Merrill

Maarten Manders was impressed by this presentation:

I think the best talk on the Banking Track was held by Iain Mortimer, Chief Architect at Merrill Lynch. He told us how they gather 9 billion monitoring messages a day! It turns out that every component in their infrastructure and application stack is constantly producing monitoring messages, and they really seem to care about microsecond latencies while doing so.
Naturally, the most challenging task is to make sense of this log tsunami. The goal is to reliably spot system failures without getting spammed by useless alerts. So if, for example, your hard disk is full, your system will produce tons of error messages: first of all you'll get a capacity error. Then files can't be written anymore, queries fail, your service queues stack up and finally you'll run out of memory. Every one of these failures will generate a lot of redundant error messages. However, the one and only message you're interested in is that your disk is full. Fixing it will make the others disappear - that's called correlation.
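To illustrate the correlation idea in that description, here is a deliberately simplified, hypothetical sketch (it is in no way Merrill Lynch's actual system): a rule table names a root-cause alert and the symptoms it explains, and any alert explained by another alert in the same batch is suppressed.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AlertCorrelator {
    // Hypothetical rule: DISK_FULL explains the downstream failures it causes.
    private static final Map<String, List<String>> RULES = Map.of(
            "DISK_FULL",
            List.of("WRITE_FAILED", "QUERY_FAILED", "QUEUE_BACKLOG", "OUT_OF_MEMORY"));

    // Keep only alerts that are not explained by another alert in the same batch.
    public static List<String> correlate(List<String> alerts) {
        List<String> rootCauses = new ArrayList<>();
        for (String alert : alerts) {
            boolean explained = alerts.stream()
                    .filter(other -> !other.equals(alert))
                    .anyMatch(other -> RULES.getOrDefault(other, List.of()).contains(alert));
            if (!explained) {
                rootCauses.add(alert);
            }
        }
        return rootCauses;
    }

    public static void main(String[] args) {
        // The "disk full" cascade from the quote: only DISK_FULL survives correlation.
        System.out.println(correlate(List.of(
                "DISK_FULL", "WRITE_FAILED", "QUERY_FAILED", "QUEUE_BACKLOG", "OUT_OF_MEMORY")));
    }
}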

Mark Edgington also had some thoughts on this talk:

It provided a very clear view of how Merrill Lynch deals with the billions of daily messages produced by their systems globally. The breakdown of message precedence and the aim of automatically fixing an issue within an 18-second window was very interesting. The compound issue of differing vendor error messages and dashboards, and the overarching job of combining these into monitoring dashboards at zone, site, region and global levels, was a real eye opener.

Simon Brown also attended this talk:

Although this was a relatively interesting session, I do think that the title was misleading. Instead of talking about how the 99.95% availability target was being satisfied, Iain talked about how Merrill monitored those systems, and particularly about the rules that they used to monitor those systems. Iain said that their availability requirements allow for 18 seconds of downtime per day, but didn't go into detail about any failover or recovery techniques that allowed them to meet that goal.

Real-time Java for Latency Critical Banking Applications

Simon Brown gave a summary of this talk:

Although there were a couple of banking examples thrown in, this session was essentially a generic RTSJ (Real-Time Specification for Java) talk. The closest I've got to real-time Java is BEA's JRockit JVM with deterministic garbage collection but, as Bertrand said, garbage collection is only one part of the story - Java apps also suffer jitter from the JIT compiler kicking in at unwanted times, etc. While this isn't something I'll probably try out myself (Sun's real-time Java VM only runs on Solaris 10 at the moment), what they've done is build a framework onto which you can build your applications, where you decide which parts are regular Java, soft real-time or hard real-time. My understanding is that the hard real-time stuff is made possible by utilising the underlying OS real-time threading and some clever use of non-heap memory spaces, in addition to appropriately scheduling the garbage collector so that it doesn't interfere. Cool stuff and I think we'll be seeing this pop up in the banking industry soon.

From Betting to Gaming to Tradefair

Nik Silver had some comments related to this presentation:

Even the very specific presentations contained valuable points that could be generalised and reused. For example, Matt Youill and Asher Glynn of Betfair talked through how they scaled the transaction processing on their servers by a hundred-fold. Guardian.co.uk doesn't need that kind of throughput, so the details were primarily of intellectual benefit. But a key practical lesson was how they approached the problem: by presenting it to industry players as a challenge carrying great kudos to the winning company.

Simon Brown also attended this presentation:

Next was Betfair talking about their new Tradefair platform and some of the challenges that they need to overcome to provide a highly scalable, highly available trading platform. Again, there was some interesting discussion of the problems and high-level solutions, although many people (myself included) came out of the session not really understanding what they had done. They were very sketchy with the details and I'm left wondering why they couldn't have implemented their system using something like JavaSpaces (what they described sounded like a JavaSpace - put many things in and match them up). The thing I did like about the session was their openness in admitting that none of the solutions were ideal (all had trade-offs) so they had to pick the one that fitted their needs the most.

LiquidityHub

Simon Brown described this as "one of the best sessions of the day":

It presented an overview of the business problem, an overview of the chosen architecture and a look at how some of the technologies were used to build the platform. This project shows that it is possible to build a high volume, low latency platform with mainstream Java-based technologies. BEA's JRockit JVM was used to reduce the jitter of the Java runtime, making it possible to achieve a service level agreement stating that messages should pass through the platform in under 100ms. With its good coverage of everything from the business problem down to some of the implementation details, this was a great way to end the first day.

Jan Balje also gave a summary of this talk:

These people faced about the same problems as the previous ones. They had to achieve something like 20,000 transactions *per second* with a maximum latency of 100ms. They achieved this using Java! The key was WebLogic Real Time, an alternative JVM implementation with real-time guarantees.

Programming Languages of Tomorrow

The Busy .NET Developer's Guide to F#

Ola Bini attended this talk:

As a Ruby person and programming language nerd I had quite a good selection of tracks. I ended up seeing Ted's presentation on F#, which made me feel: wow! Microsoft took ML and did exactly what they've done to all languages on the CLR - added support for .NET objects into the mix.

Haskell: Functional Programming on Steroids

Dionysios G. Synodinos commented on this talk:

Although the title of the presentation had changed from the one in the schedule, the part about the steroids was 100% representative of the speaker. A person of an academic background, working for Microsoft Research and maintaining a GNU-licensed Haskell compiler... wow, that guy was awesome.

An anonymous attendee also added "Clearly, easily the best lecture of the whole conference. Unfortunately, I was a bit overwhelmed by it and did not take many notes".

Functions + Messages + Concurrency = Erlang

Many people discussed this presentation, including Jan Balje:

After that we went to the presentation about Erlang, a new programming language that's especially suited to concurrency. The language is hot on the fashion lists and might become very relevant with the rise of multicore systems. Take a look at the slides when they are available. One to watch. Joe Armstrong (called "the nutty professor" by another participant) also wrote a book about it.

"Paul":

I was lucky to attend a talk at QCON2008 by Joe Armstrong. He pointed out no-one in the hardware world is currently anticipating a limit to scaling by multiple cores; there is anticipation of thousands of cores within the next 10 years. Joe also pointed out Amdahl's Law and noted that if 10% of your program is serial then the most speedup you can get is 10x. This is very thought-provoking: we will need to push concurrent programming into the core of development but, from my own experience, we desperately need new programming paradigms to make sure we don't create terribly buggy software.
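For reference, the 10x figure above follows from Amdahl's Law, restated here as a worked equation (an editorial addition, not part of Paul's post). With a serial fraction s and N cores, the speedup is

S(N) = \frac{1}{s + \frac{1 - s}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{s}

so with s = 0.1 (10% of the program serial) the ceiling is 1/0.1 = 10x, no matter how many cores are added.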

And Steve Vinoski:

The best part of the week, though, was getting to meet and hang out with Joe and other Erlang folk. Joe's really an excellent guy. He's quite energetic, and his brain just doesn't stop. He's curious about a lot of technical things beyond Erlang, and I found discussions with him to be full of interesting questions and insights. Given the fact that I work with Erlang quite a lot these days, my hope going in was simply that I'd get a chance to just say hi to him, but I turned out to be lucky enough to spend many hours with him over the course of the conference.

An anonymous attendee also wrote up a detailed summary of the talk, including:

Erlang is essentially a general-purpose language, but was designed for a specific domain: telecom switches. These programs are highly concurrent (with hundreds of thousands of parallel activities), with soft real-time requirements, massive network distribution, high availability, and very large code bases (>1M LOC) that are required to be upgradable without shutting anything down.

Until 2002, a signal could reach the whole chip in one clock cycle. So cores stopped getting larger and instead became more numerous.

Hence:
  • each year, a sequential program will get slower
  • each year, a concurrent program will get faster

Peter Pilgrim also attended, and gave a detailed summary:

Why was Erlang invented? It was invented 20 years ago to handle highly concurrent telephone switches carrying 100,000 activities in real time. Telecom is the planet's biggest distributed computer.

Why is Erlang becoming popular? In the 1980s chips got bigger and clock frequencies got faster and faster. One day the speed of light got in the way. The limit was predicted in 1992 but was actually hit in 2002: you could not physically reach 100% of the chip in one clock cycle, since the speed of light bounds the distance signals can travel. Hence the technology changed to multiple cores (multi-core).

The Busy Java Developer's Guide to Scala

Dionysios G. Synodinos had some thoughts about this presentation:

Ted's presentations are both informative and enjoyable. He has a way of communicating his thoughts directly to the audience and a very distinctive sense of humor to sugar-coat it all.

One of the impressions this talk left me with is that even though this genre of languages is getting much attention these days, nobody actually has much experience with them in the enterprise field and the actual patterns of usage are yet to be established.

Peter Pilgrim also had detailed notes about this talk, including:

Quicksort in Scala is a lot shorter than the equivalent in Java. Another example printed XML to the console, with arbitrary placeholder syntax using the Scala XML framework. Scala supports XPath-like syntax using libraries that are simply imported, because Scala allows function names with arbitrary characters. Scala does not treat operator overloading as a special case. Scala is a pure object-oriented language in the sense that every value is an object. Scala is also a functional language.

Open Space session

Danilo Sato talked about an impromptu Open Space session which took the place of a cancelled talk:

The track finished with an Open Space style session, where participants discussed which factors are driving the increasing interest in and resurgence of different languages. For me this was the most interesting discussion of the day and it strengthened the case for polyglot programmers. One of the topics was that it usually takes years for someone to become an expert at something and that it's harder to leave that knowledge behind to learn something new. I think that it all comes down to whether you want to be a specialist or a generalist and I've already stated my position of trying to be both. Another interesting aspect of the discussion was Martin's point that the evolution happens in cycles and that after a period of stabilization, it's time (again) for broadening the options and looking for new ideas that will lead us to the next big thing. In these times it's important to look for new learning opportunities instead of narrowing your knowledge. I think it's time for me to wear the generalist hat for a while... :-)

Evolving Java

Concurrency, Past and Present

Peter Pilgrim wrote a detailed summary of the talk, including:

Brian Goetz does NOT expect people to dump Java and move to OCaml, Erlang or another model any time soon.
He promotes "immutable objects where you can". Surprise, surprise. Sometimes it is cheaper to make a copy than it is to share. Copying an immutable object is always thread safe.

He recommends to take a look at Scala, in particular the Scala Actors library.
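As a minimal sketch of the "copy rather than share" point (the class below is illustrative, not from Goetz's slides): an immutable object can be handed to any number of threads safely, and "changing" it just means making a new copy.

    // Illustrative immutable value object: all fields final, no setters.
    public final class Money {
        private final String currency;
        private final long amountInCents;

        public Money(String currency, long amountInCents) {
            this.currency = currency;
            this.amountInCents = amountInCents;
        }

        // "Mutation" returns a cheap copy instead of changing shared state,
        // so instances can be shared between threads without any locking.
        public Money add(long cents) {
            return new Money(currency, amountInCents + cents);
        }

        public String getCurrency() { return currency; }
        public long getAmountInCents() { return amountInCents; }
    }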

Blending Java with Dynamic Languages

Ola Bini had praise for this talk, saying "This talk was useful in explaining why you'd want to do something like this, and why it's such a powerful technique."

Phil Manchester of Reg Developer also wrote an article about this talk:

Venkat Subramaniam, the chairman of software training company Agile Developer, told QCon that Java has grown beyond a language and the excitement is now centered on the combination of the Java platform with dynamic languages such as Groovy, JRuby and Jython.

"Multi-language environments mean you can get full interoperability between constructs created in different languages. Dynamic languages also give you the power of metaprogramming and domain-specific languages. This improves productivity and allows users to be more expressive," Subramaniam said

Evolving the JVM

Peter Pilgrim wrote a summary of this presentation, including:

Dynamic bytecode loading - the current method of introducing arbitrary bytecode is cumbersome: it requires a ClassLoader and is expensive in JVM memory. Basically, a solution would be to use anonymous classes that include the bytecode.

Continuations - essentially Ola proposed the direct ability to perform stack manipulation inside the JVM.
The idea is pretty powerful; RIFE already has a library for this. The concept of continuations allows workflow-based and conversational-state computations to be paused and resumed.

The Cathedral, the Bazaar and the Commissar: The Evolution of Innovation in Enterprise Java

Dionysios G. Synodinos wrote a summary of this presentation:

This presentation built upon Eric Raymond's seminal paper that analyses why open source works so well, which was named "The Cathedral and the Bazaar".

The addition was the Commissar, a figure from the Soviet Union, which was Rod's characterisation of the actual role that the Java Community Process plays in the evolution of the Java ecosystem. It was a rough metaphor, and he tried to back it up with several examples from the fall of the USSR.

In the past many of his preachings, like the lightweight approach of POJOs instead of EJB, have managed to influence the progress of Java. This is more evident than ever in Java EE 5 and the roadmap for the proposed Java EE 6. It will be interesting to see if his views on how the JCP should alter its modus operandi will actually convince Sun to fundamentally reorganize the process which drives Java's future.

Tim Anderson also discussed this presentation in detail, including:

Johnson asked what seems to me to be a key question: what should be standardized? He said that it is silly to try both to innovate and to standardize at the same time, because the committee will get it wrong. You should standardize in areas that are well known, understood, and proven in the market.

Despite appearances, Johnson is not an enemy of the JCP. He spoke warmly of the current chairman, Patrick Curran, who is trying to reform the organization; and feels that real progress is being made. Curran was also at QCon seeking opinions on the JCP and its future.

Johnson also feels that Java has moved on. "The Java world is no longer a one-party state," he said.

Antonio Goncalves also wrote about this presentation:

It was a sometimes hard view of the JCP. Being an Expert Member and having followed Java EE for many years, I have to say that I share most of what he said. His presentation was divided into three parts: 1) What are the sources of innovation: disagreeing with people, experimentation, competition… 2) The history of Java EE: before J2EE (vendor lock-in, fragmented market), the promise of J2EE (the JCP becomes dominant, it creates a market), the decline of J2EE and the rise of open source. 3) What's next. That's where Rod talked about the Cathedral (one company creates it all), the Bazaar (many people create it in a disorderly way) and the Commissar (a dictatorial way of doing business, i.e. the JCP as the USSR commissar). Now, open source is not a Bazaar anymore but can be seen more as a cathedral (JBoss, Eclipse, Spring...). The JCP doesn't control Java; there's also OASIS, OSGi, W3C, OMG, open source...

Peter Pilgrim also wrote detailed notes on this presentation, including:

Open source produces fast experimentation/review cycles. The biggest event in the future of the JCP is not connected to Java at all, opined Johnson. Clearly, he believes, Sun is very serious about open source, since Sun has recently purchased MySQL AB for 1 billion dollars in stocks and shares.

What does tomorrow look like? Within the expected and well-known standardisation cycle that Johnson described, he said that we are in the part of the cycle where innovation is lacking, at least from the JCP. We should aim to keep the benefits of standards without losing the innovative edge. Change needs to be more rapid. The JCP needs to adapt to survive.

Evolving the Java Language

Peter Pilgrim posted many of the topics of this presentation, including the long-term goals for Java:

* Regularise existing language
** Reification
** Further Generics simplification

* Concurrency support
** Immutable data
** Control Abstractions
** Closures
** Actors, etc

Simon Brown also attended this talk, and had some thoughts:

The final session I attended was Neal Gafter's look at the new features that are being considered for Java 7 and beyond. I've not been following this too closely and it was interesting to catch up with it all. One of the things that struck me most was that the Java platform JSR hasn't even been started yet and that Sun don't seem to have enough resources to do everything that they want to (apparently JavaFX is more important?). I was under the impression that major releases of the platform were going to be on an 18 month cycle, but clearly that's not going to happen. I also don't necessarily understand where/how the open source stuff fits into all of this. There are some nice smaller features being considered for Java 7 (multi exception catching, easy exception rethrowing, the ability to switch on Strings, etc) but part of me thinks that maybe the bigger language changes (e.g. closures) shouldn't be implemented. Perhaps it might be better to stop making big changes to the Java language and start putting more effort into something else (e.g. Scala, Groovy, etc).
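For readers who, like Simon, had not been following the proposals: the two small features he mentions ended up looking roughly like the sketch below. This is the syntax that eventually shipped in Java 7, shown here purely as an illustration of what was being discussed at the time.

    public class Java7FeaturesSketch {

        // Switching on Strings.
        static int statusCode(String verb) {
            switch (verb) {
                case "GET":    return 200;
                case "DELETE": return 204;
                default:       return 405;
            }
        }

        // Catching multiple exception types in a single clause.
        static void parse(String number) {
            try {
                System.out.println(Integer.parseInt(number) * 2);
            } catch (NumberFormatException | NullPointerException e) {
                System.err.println("could not parse: " + e);
            }
        }

        public static void main(String[] args) {
            System.out.println(statusCode("GET"));
            parse("21");
        }
    }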

Antonio Goncalves also commented:

Basically, the roadmap for Java 7 is still uncertain. As Neal said, the Properties proposal might not be in JDK 7. And the closures topic came up. Neal said something quite funny about that: "lots of companies want closures except two: Sun Microsystems and my boss (Google Inc)". In fact, as hard as it is to believe, the JSR for Java 7 hasn't even started yet. And as Neal pointed out, there are not enough resources at Sun to make it happen in a decent timeframe (it looks like teams are busy with JavaFX).

Solution Track

Architectural Implications of RESTful design

Michael Hunger had several thoughts about this presentation, and described the basic principles of the talk as:

1) all resources are named by an URI
2) resources are immutable and copied
3) you can construct arbitrary URIs which represent a computation and use other URIs as parameters
(e.g. active:imageOperation+operation@fllcc:/doc/rotate45.xml+image@http://imageurl)

With these preconditions, Peter showed a kind of functional programming approach. You just write (or have tools write) your program (function, expression) as a cascade of URIs.

Tim Anderson also attended this talk:

Peter Rodgers of 1060 Research spoke about his NetKernel, which is a kind of REST runtime. "I'm typing byte code", he explained, as he put together URI strings that performed various operations. He observed that much computing can be reduced to doing something to some resource with another resource, and that this can be expressed as a URI. Here's an example:

    Active:toUpper+operand@ffcpl:/demo/data.xml

In effect this is functional programming via URIs.
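A tiny sketch of that idea, composing computations as URI strings in the style of the examples above (the helper below is purely illustrative, not NetKernel's API; a real resolver would be needed to evaluate the result):

    public class ActiveUriSketch {

        // Builds an "active:" style URI from a service name and named arguments.
        static String active(String service, String... namedArgs) {
            StringBuilder uri = new StringBuilder("active:").append(service);
            for (String arg : namedArgs) {
                uri.append('+').append(arg);
            }
            return uri.toString();
        }

        public static void main(String[] args) {
            // active:toUpper+operand@ffcpl:/demo/data.xml
            System.out.println(active("toUpper", "operand@ffcpl:/demo/data.xml"));

            // Because a computation is itself addressable by its URI,
            // calls compose like nested function applications.
            System.out.println(active("imageOperation",
                    "operation@ffcpl:/doc/rotate45.xml",
                    "image@http://imageurl"));
        }
    }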

Introducing Spring Batch

Phil Manchester of Reg Developer wrote an article about this talk:

Dave Syer, one of Spring Batch's lead committers, said around 40 organizations are working with Spring's Java-based framework, which aims to replace aging mainframe batch applications written in Cobol. It works alongside SpringSource's enterprise Java tools such as Spring Integration.

Panel: Open Source and Open Standards

Patrick Curran described the atmosphere at this panel:

The session was well-attended, and the discussion was lively. On the whole people were supportive of the JCP, and believe in the importance of the work we do. It was argued that both open source and open standards have their place, and that they can and often do complement each other. (Open source methodologies enable feedback from real-world users, thereby improving specifications, while standardization encourages adoption and interoperability.) Some members of the panel and the audience expressed familiar concerns - that the process is weighted against individuals, that we need to be more open and transparent, and that we should adopt open-source development and licensing models for Reference Implementations and conformance test suites (TCKs). I'll be sure to take this feedback into account as we work to evolve the JCP over the coming months.

Panel Discussions: Architecting for Performance and Scalability

In addition to a summary of the panel, Simon Brown said:

The first session I attended on day 2 was called "Architecting for Performance and Scalability", where representatives from Terracotta, (Oracle) Coherence, GigaSpaces, etc (and eBay) came together to talk about the different approaches to building scalable systems. It was surprisingly civilised and it was interesting to compare and contrast each vendor's approach to dealing with the scalability problem.

Clustered Architecture Patterns: Delivering Scalability and Availability

James Governor wrote a detailed summary of this talk, including:

I am sitting here next to a guy that works at a large investment bank and he says it's an interesting and ground-breaking approach.

    "Terracotta does for distribution of state what the garbage collector did for managing memory".

Which seems like a good transition to Ari's pitch. As usual Ari starts at the beginning, with the fact that he built the clustering architecture for Walmart.com. The story is a really good one because it concerns the realities of top-down vs bottom-up architectural engineering.

Jevgeni Kabanov also enjoyed this talk:

The best talk I've been to so far is Terracotta's introduction and patterns. Ari is a good speaker with passion, intensity and speed that I admire (though some others might find the talk a bit too informative).

I've heard about Terracotta before, but this was the first time I got to know the details. The basic functionality they claim is transparently clustering your objects, so that all changes on one JVM are visible in all the rest.

Mark Edgington discussed this presentation as well:

Ari Zilka, CTO of Terracotta, gave an excellent presentation on how the product works and how it can be used. I was not going to attend this session, but he was excellent on the previous session's panel, so I was drawn to it. His view on the world of using stateful in-memory data and replication goes against the views of many, but is very compelling. I'll be looking into it deeply.

As did Antonio Goncalves:

Ari Zilka's session was Clustered Architecture Patterns: Delivering Scalability and Availability with Terracotta. Coming from application server clustering, Terracotta looks like a refreshing technology… but it also hides some magic behind the scenes. It hooks into a JVM to replicate object graphs across JVMs. One sentence that came up often was: serialization of objects is expensive, so just don't serialize them. Terracotta replicates live objects rather than serialising them, which is why it can be 10 times faster than common caching or clustering.

Testing by Example with Spring 2.5

Steven Mileham attended this talk:

After the eBay presentation, I wandered over to the Solution track to check out the testing framework for the Spring platform. Basically it creates a wrapper around JUnit 3.8, 4.0 or TestNG that lets you "wire up" an application through Spring configuration and Java 5 annotations.
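A minimal sketch of what such a wired-up test looks like with the Spring 2.5 TestContext support and JUnit 4 (the bean and configuration file names here are made up for illustration):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    import static org.junit.Assert.assertNotNull;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:test-context.xml") // hypothetical Spring config
    public class AccountServiceTest {

        @Autowired
        private AccountService accountService; // hypothetical bean defined in test-context.xml

        @Test
        public void serviceIsWiredUpByTheContainer() {
            assertNotNull(accountService);
        }
    }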

Browser & Emerging Rich Client Technologies

GWT + Gears: The Browser is the Platform

Antonio Goncalves talked about his impression of this talk:

After the presentation I really had some doubts about choosing Flex for rich clients. I think Flex is a very powerful platform for developing rich internet applications, but GWT looks really promising and it is Java, not ActionScript. Didier did many demonstrations and finished with a funny slide predicting the death of JSPs and ASPs. IT history is full of predictions; let's see if this one happens or not. GWT compiles Java code into JavaScript in a very efficient way: one JavaScript file per browser (no more if IE then else if Firefox… in your code) and the code is obfuscated. One of his full demos was only 60 KB of JavaScript (that you can't and don't want to read).
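For readers unfamiliar with the model, a GWT module really is plain Java compiled to JavaScript. A minimal sketch of an entry point looks something like this (the class name is illustrative):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.user.client.ui.Label;
    import com.google.gwt.user.client.ui.RootPanel;

    // The GWT compiler turns this Java class into per-browser JavaScript.
    public class HelloModule implements EntryPoint {
        public void onModuleLoad() {
            RootPanel.get().add(new Label("Hello from Java, running as JavaScript"));
        }
    }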

Didier Girard also posted all of the demo applications that he used during this presentation on his blog.

The DOM Scripting Toolkit: jQuery

Remy Sharp gave an overview of his talk, and also described it as:

Last Friday I did my first bit of public speaking. I presented jQuery at QCon.

John asked me a couple of months ago, so I pushed the fear aside to give room for the flattery and agreed.

If you're reading this blog post, and you did happen to see my presentation, I would really love to hear your feedback - good or bad - it's all very useful to me.

Tackling Code Debt

Simon Brown wrote about this talk:

Following this was a session entitled "Tackling Code Debt", which basically focussed on why continuous refactoring is essential to maintain a high quality codebase. This seemed to be a rehash of some of the existing material around refactoring and agile development. Something that did strike a chord though was that somebody needs to take ownership of this whole process and motivate the team to refactor while they develop. I'd say that's part of the architect's role.

Agile in Practice

Therese Hansen enjoyed this track:

There are several great speakers in this track, including Linda, who always works the topic from a funny angle - today it was the angle of cycles and it sparked a great discussion about sleep cycles and work cycles. I really learned something very useful that I can take home with me - human beings work/sleep best in 90-minute periods and therefore it is best to do everything in multiples of 90 minutes. And regarding work, it is important to take breaks and to focus on one thing at a time, and several of the audience members had statistics backing up that statement.

Managers in Scrum

Carl Knibbs posted notes and diagrams from this talk.

Agile Mashups

Jan Balje discussed this presentation:

A talk about how teams in the field don't follow the XP/DSDM/Scrum book, but combine practices that work for them. Nothing really new, but a nice confirmation from the speaker, who has a lot of contacts in the field. The room is packed, testifying to the continuing interest in agile methodologies. By the way, a 'Ziffer' is a Zero Feature Iteration. Also, the percentage of women in the audience is significantly higher than among our students. Maybe it's just a Dutch problem?

"Paul" also attended this talk and added several thoughts, including:

I attended Rachel Davies' talk at QCON 2008 about Agile Mashups. She made the point that in the real world people take a variety of practices from different Agile methods; as a simple example there are Scrum teams using TDD and XP teams using burndown charts. She pointed out a few practices that seem more optional, such as pair programming and sitting together.

I find there's a very interesting tension. On the one hand I think it's important to know what "good agile" looks like. There is a danger that some teams throw away their documentation, hack and claim to be Agile. So to sort this "cowboy agile" from the real thing, you could use the Nokia Test, which is a checklist: tick the boxes and you are doing Scrum :-)

"Paul" also discussed the idea of Polishing Iterations:

In Rachel Davies' Agile Mashup talk at QCON 2008, she noted that many teams have a "polishing" iteration, where no new functionality is released.

My team have recently added this: we find we need a bit of space to step back and look at the application from a "big picture" point-of-view. Sometimes it's useful to look at consistency across the application: particularly from a GUI point-of-view. It's also good to make time for exploratory testing. Finally, we like to make some space to incorporate the feedback we've got from the business during the iteration.

Therese Hansen liked some of the questions which the speaker asked:

One of the questions was "How XP are you?" and specifically:

Can you claim to be an XP team...
  • if you don't use index cards?
  • if you don't write code test-first?
  • if you don't program in pairs?
  • if you don't sit together?
  • if you don't have an onsite customer?

Carl Knibbs posted notes and diagrams from this talk.

Beyond Agile

Willem van den Ende described this talk he gave with Marc Evers:

Marc and I presented another iteration of "Beyond Agile - People versus Process" at QCon. This transcript was inspired by the re-done introduction. I plan to write some more soon… We had an article in production that we're going to rewrite based on last week's Agile Open France. The choreographies, the random order and the questions you saw at the top of this post worked well, so we are going to do more with that.

A Kanban System for Software Engineering

Adam Shimali thought that this presentation was the best talk at QCon and described it in detail, starting with:

Like a lot of agile things, to get started all you need is a whiteboard and post-its. However the speaker, David Anderson, was anxious to dispel the myth that Kanban is in fact anything to do with whiteboards and post-its. These are enabling "technologies" and they also help create the foundation for transparency. However the real issue is:

    "How much work is currently in progress?"

I'll come back to this point, but essentially "work in progress" is what Kanban is all about.

XpDay Sampler

Measure for Measure

Steve Freeman had several thoughts about this presentation, including:

Yesterday, during the XpDay Sampler track at QCon, Keith Braithwaite presented the latest version of his talk on measuring the characteristics of Test-Driven code. Very briefly, many natural phenomena follow a power law distribution (read the slides for more explanation), in spoken language this is usually known as Zipf's Law. Keith found that tracking the number of methods in a code base for each level of cyclomatic complexity looks like such a power law distribution where the code has comprehensive unit tests, and in practice all the conforming examples were written Test-First; trust a physicist to notice this. This matters because low-complexity methods contain many fewer mistakes.
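In rough terms (my notation, not Keith's slides), the observation is that if $N(c)$ is the number of methods with cyclomatic complexity $c$, then for comprehensively test-driven code

$$N(c) \;\propto\; c^{-\alpha}, \qquad \alpha > 0,$$

which shows up as a straight line on a log-log plot; code written without that discipline tends to deviate from the straight line.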

.NET: Client, Server, Cloud

Building Smart Windows Applications

Jan Balje mentioned this talk:

Daniel Moth demonstrated some interesting new features of Visual Studio. He did this at such a record speed that, to understand it, the public will have to download the videos from his blog and play them at half speed. Still, it's nice to see what can be done nowadays.

Building Rich Internet Applications

Mike Taulty has posted the full contents of his presentation on his blog.

Windows as a Web Platform

Jan Balje had some comments:

Mister Nelson is a very driven speaker who keeps interesting (and funny) contact with his audience, who does not try to demonstrate more than is actually interesting at that moment, and who can make his story up as he goes along (which proves that he knows what he is talking about).

The Rise of Ruby

Panel: When is Rails an Appropriate Choice?

Siva Jagadeesan described this panel's discussions:

The discussion for some reason was totally off topic. The discussion about what types of project can be implemented using Rails probably took about 5% of the time. Most of the discussion was about why nobody from the Ruby community was worried about Ruby running so slowly on Windows. Someone from the audience mentioned that if the Ruby community wants Ruby to become mainstream and accepted as a language by all enterprises, it should run faster on Windows. Panel members said that they do not really care that much about Ruby getting accepted in enterprises. They said they are doing it because Ruby as a language makes it easy for them to solve their problems.

I totally accept that. If I find a language that helps me solve a problem more elegantly, then I will use that language. I do not care whether that language will be accepted by all big enterprises or not. If there is someone smart in those enterprises, then s/he will decide what language is good for solving their particular problem.

Peter Pilgrim also had a summary of this discussion:

It is a problem of trust and confusion. Does Rails scale for the enterprise? Admittedly, the panel agreed. They suggested asking the question back: how much performance do you need? Ruby is one or two orders of magnitude slower than Java. There are, however, new VMs on the way as far as performance is concerned.

Basically there were no technical solutions here other than the obvious. The discussion descended into how do we market Ruby to enterprises? One idea they gave was to be subversive: for example, use it for testing and automating builds. Introduce it as a systems administration tool, so that it is only used internally. Well, Groovy can do this as well as a scripting tool.

Effective Design

Mark Edgington described this talk:

Another excellent talk in which Kent provided his latest views on how he thinks problems should be solved from the design point of view. He started by following on from his keynote, pushing that we must design with people in mind; design for the skills of your available developers.

Intentions & Interfaces - Making Patterns Concrete

Udi Dahan had some thoughts on his talk:

From the feedback I heard after the talk, I think many people were surprised how many different parts of a system can be designed this way, and how flexible it is without making the code any more complex. The message was this:

    Make Roles Explicit

Despite its simplicity, that leads to IEntity and IValidator<T> where T : IEntity (which I wrote about a year ago - generic validation), and with a bit of Service Locator capabilities you can add a line of code to your infrastructure that will validate all entities before they're sent from the client to the server.
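A rough Java rendering of the "Make Roles Explicit" idea described above (the interfaces and names below are illustrative, not Udi's actual .NET code):

    // The role each object plays is captured in a small, explicit interface.
    interface Entity { }

    interface Validator<T extends Entity> {
        boolean isValid(T entity);
    }

    class Order implements Entity {
        long totalInCents;
    }

    class OrderValidator implements Validator<Order> {
        public boolean isValid(Order order) {
            return order.totalInCents >= 0;
        }
    }

    public class MakeRolesExplicitSketch {
        public static void main(String[] args) {
            Order order = new Order();
            order.totalInCents = 4200;
            // In real infrastructure the validator would be found via a
            // service locator keyed on the entity type, so one generic line
            // can validate every entity before it leaves the client.
            Validator<Order> validator = new OrderValidator();
            System.out.println("valid? " + validator.isValid(order));
        }
    }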

A Tale of Two Systems

Simon Brown had some thoughts about this talk:

The next session I attended was called "A Tale of Two Systems", which basically presented a picture of what happens when you do and don't design your software. Anybody experienced in software development won't have seen any surprises here, but it was nice to see the good and the bad contrasted in a very down-to-earth way. There was a definite agile spin on all of this, with talk of flat team structures and a distribution of the design responsibility throughout the team. In fact, Pete stated that "he'd never worked on a project that needed an architect". While these approaches work well for small and/or simple projects, I'm still of the opinion that *some* architecture needs to be performed up-front and that somebody needs to take ultimate responsibility.

Peter Pilgrim wrote up a detailed summary of this talk, including:

Good design leads to better code, a better team and success. Design matters: it can go spectacularly wrong, or it can go right. However, good project management is essential. One has to make decisions at the right time. Punting difficult decisions and use-cases until one actually has the time and the necessity to bring them to fruition is a really good idea. (I think he meant, on further reflection, that we should save complex strategic decisions until you can get "think" time, as opposed to making the wrong decision in "doing" time.) Good design comes from not being afraid of changing the design. Good design is also derived from a healthy working relationship.

User Interfaces: Meeting the Challenge of Simplicity

Phil Wills agreed with this presentation:

In proof either of my amazing prescience, or total lack of original insight, almost immediately after I'd made my previous post I attended an excellent talk by Giles Colborne at QCon on simplicity in user interfaces, where he expressed the difference between the likes of me and the vast majority, who don't appreciate that Vim is the best way to edit text, by saying that most people are more interested in getting from A to B without crashing than in doing so efficiently. Not sure that he realised how literal some of us are in our favouring the risk of crashing.

Mark Edgington recounted the main points of this talk:

A simple journey to put across some guidelines to aid in designing user interfaces. The talk lacked a bit of depth for me, in that some of the observations felt a little personal rather than having much evidence to back them up. It was what I needed, though, to allow me to think about the previous talk. The guidelines:
  • Understand the context
  • Just simple enough (shrink, embody and hide)
  • Organise
  • Don't make the user wait
  • Test
Yes all a little general and obvious, but that's what simplicity is all about; the simple things that get ignored.

Simon Brown also had some thoughts on this talk:

Following the talk about software design was a talk about user interface design, entitled "User Interfaces: Meeting the challenge of simplicity". This session looked at the art of designing user interfaces so that they appear simple to the user, and that how making even the smallest of changes can have a huge impact. One of the most interesting parts of this session was that it almost completely paralleled the session that preceded it; in terms of talking about agile development, feedback, simplicity, you aren't going to need it (YAGNI), etc.


Architectures You've Always Wondered About

Filippo Diotalevi had some thoughts on the track as a whole:

Another great track. People from eBay, the BBC and MySpace explaining the inner bits of their architectures, their failures and their successes; that's the kind of presentation that should never be missing from a conference. Btw, did you know that MySpace is running on a .NET stack?

eBay's Architectural Principles

There were many attendees who commented on this presentation, such as Jan Balje:

The day started off with Randy Shoup talking about the architecture behind eBay. This was really architecture at the highest level. The amount of data/transactions/servers etc. that eBay has is huge. An impressive talk; the slides are warmly recommended.

Nik Silver:

His presentation was very clearly constructed to show their principles of scalability, and some concrete examples of how these work in practice. You probably wouldn't use their periodic batch processing method to generate recommendations — if only because it's odds-on you don't have a recommendation system — but you could take the overarching principle of "async everywhere" and apply that to the next scalable application that you need to work on.

Stefan Norberg:

Partition everything! Partition your system ("functional split") and your data ("horizontal split"). It doesn't matter what tool or technology you use. If you can't split it, you can't scale it. Simple as that. Regardless of whether you're using a fancy grid solution or just multiple databases.

Use asynchronous processing everywhere! If you have synchronous, coupled systems, they scale together and fail together. The least scalable system will limit scalability and the least available system will limit your uptime. If you use asynchronous, decoupled systems then they can scale independently of each other.

Danilo Sato:

I had already read about eBay's transactionless style for achieving availability and scalability through data partitioning, but it was interesting to hear about the way they approach deployment of new code and features. There are no changes that cannot be undone. They have automated deployment tools that manage the dependencies between different components (a la package management systems such as apt) and that allow rollouts and rollbacks (a la Rails' migrations) of different pieces of code. Interesting stuff!

Simon Brown:

Randy Shoup talked through the key principles, which are :
  1. Partition everything
  2. Async everywhere
  3. Automate everything
  4. Remember everything fails
I've said this before about some of the other sessions, but I really like it when we get to look behind the scenes at what other people are doing, particularly when you see that every system has its own set of trade-offs and compromises.

Steven Mileham:

A really interesting look at how they design a flexible architecture that allows their systems to scale with the traffic going to the eBay site, and still enables them to roll out new code releases every couple of weeks.
[...]
The main enabler of this architecture is their dedication to keep it as stateless as possible. The only time they use a session is the process by which a user creates an auction on the site through a multi-page wizard style interface.

And Peter Pilgrim:

Resource Markdown. They want to detect failure as quickly as possible. They monitor applications constantly, and there is graceful degradation of a failing resource in that it gets marked down. The application stops making calls to that resource, and the work is deferred to queues. Critical functionality will fail. First, they retry the call repeatedly for a set time. Second, if the resource is still unavailable, the work is postponed as an asynchronous event.

Explicit "mark up" allows a resource to be restored and brought online in a controlled way. They manage the clients still trying to connect to the resource.

Architecture in the Media Production Workflow

Many attendees discussed this presentation, including Mark Edgington:

It was interesting to hear about the problems the BBC has in identifying the location of requests in order to both serve advertisements and apply DRM. We tend to forget that the BBC gets a near-fixed amount of money with which to work, so spending money serving content outside the UK for nothing would be an expensive business. Also, giving away content which is often under license agreements would cause legal issues. I was interested to learn that advertising is coming online now, due to the Foreign Office changing their policy of funding the BBC to deliver outside the UK.

Simon Brown:

The next session I attended was about the BBC website, primarily from the perspective of what the user sees. The speakers had literally been drafted in the day before and while I liked their actual presentation, I was left wanting more information about the architecture behind the website. They did go into some details about how many servers they had, etc but not much on technologies and the like. Hats off for pulling something together so quickly though!

Danilo Sato:

Nothing really ground-breaking in terms of technology, but it was interesting to hear about their current process of migrating a huge physical media archive (with guys on motorbikes taking tapes from site to site) to digital format, and how it changes their editorial process… somehow the image of the guy on a motorbike reminded me of the pigeon-based high-bandwidth transfer protocol :-)

And Tim Anderson:

They are talking about video on bbc.co.uk. Previously this has been handled through pop-up pages that give a choice between Windows Media Player and Real Media. The BBC will now be standardising on Adobe Flash video, embedded in the page rather than in a pop-up. Their research has found that embedded video has a much better click-through than the pop-up style. It also has editorial implications, because it is better integrated into the page. In due course, Flash will be the sole public format (an archive is also kept in some other format).

There is going to be increasing video on the site. Apparently the BBC is getting better at negotiating rights to video content, and we can expect lots of video from this year's Olympics, for example.

Anderson also discussed the BBC's plan to rebuild its web platform:

Apparently the budget has just been approved, which means the BBC will be going ahead with a new content platform built on Java supplemented by a lightweight PHP layer. The primary goal is flexibility. Recently the BBC went live with a new widgety home page which demonstrates its interest in personalization; ambitions include more extensive customization, more of a social platform (possibly using OpenSocial, OpenID); making a platform more amenable to mash-ups; data-only APIs. As an aside, the BBC home page right now is a bit broken; it says "due to technical problems we are displaying a simplified version of the BBC homepage." After yesterday's session, I know a bit about why this is. The BBC's current site is mostly based on Perl scripts and static pages. It's not really a content management system. The recent home page innovations, which I blogged about recently, are not hosted on the new platform, but are a somewhat hacky affair built on the old platform using SSI and parsing cookies with regular expressions. It went live, but is currently not very reliable. It also uses more CPU, which ultimately means more servers are needed.

Peter Pilgrim posted a detailed summary of the talk including a roadmap:

  • more seamless integration with iPlayer
  • more mobile
  • more searchable media
  • more integrated local content
  • more sport with rights
  • more widgets and syndication
  • more advertising
  • jokingly, more work and less holiday in the run-up to the Olympic Games in 2012

Nik Silver saw this presentation as part of a larger theme of "No magic, no silver bullets, but plenty of solid advice and experience":

All of this was summed up very nicely by the team from BBC News: John O'Donovan, Kevin Hinde and Ross Heritage. They were asked how they managed performance testing for the iPlayer. John spent a few moments describing some of the techniques they employed, but got to the point when he realised the audience really wanted some eye-opening enlightenment which he didn't have. At this moment Kevin stepped forward and said straight out "There's no secret sauce". Indeed not: they just work hard and stick to strong principles.

Behind the Scenes at MySpace.com

Dionysios G. Synodinos praised this talk:

A nice walk through the various administrative tools that the team that handles MySpace.com has built on top of the .NET platform, to monitor a system that serves millions of users.

Market Risk System @ BNP Paribas

Mark Edgington had some notes on this talk:

By clearly outlining the problem space and the problems faced by IT departments in the banking world (procurement and strategic sign-off), you got a good feel for how the architecture came together. In this case I felt that the open source decisions, many made due to restrictions, led ultimately to success and what I would expect to be a happy development team. Probably the most surprising part of the architecture was a set of processes running from a Java main method; it seemed to have come about because application servers were the remit of another team, and asking them for involvement was not an option.

Domain Specific Languages in Practice

External Textual DSLs Made Simple

Ola Bini commented that this presentation was "Highly informative and something that I'll keep in mind if I see something that would benefit from it. The approach is definitely not for all problem domains, of course."

Interviews

Steven Mileham discussed the new interview format which was tested out at QCon London:

InfoQ, the organisers of the conference, regularly host recorded interviews with industry shapers on their website. For this conference they invited an audience in to participate in the interviews. I watched an excellent interview with Mark Little, a developer for Red Hat who has worked on many of the current web service standards. He spoke about transactional web services, specifically WS-TX's two models: ACID transactions and business activity transactions. For an SOA environment, BA transactions should be used, although this just means providing compensation methods for each service. He talked about the great divide between SOAP web services and RESTful services, and how he wishes they would just "kiss and make up". Finally he mentioned the JBoss-Red Hat merger, comparing JBoss to a teenage son and Red Hat to a 40-year-old father.

Social Events

Several people wrote about the social events, such as Libor Soucek:

In the bar I finally took the chance to talk personally to high-profile people like Steve Vinoski and Jim Webber, whose blogs I have been reading regularly for a long time. I was mainly interested in getting their personal opinions on the use of REST/ATOM in high-performance systems, as they do not usually address that in their write-ups and presentations.

Jim Webber was quite open and admitted that his recommendations are mainly applicable to systems where latency is generally 1 second or more. This seems fair to me. Actually, the vast majority of enterprise applications fit into this category, with ERP systems and the like being prime examples.

Steve Vinoski suggested that for such cases one should probably follow the REST model conceptually, if not directly via common HTTP, due to performance constraints. That is certainly possible, but I have quite strong doubts it is practical here. Does anyone know of a successful application in this field that would confirm Steve's suggestion?

Therese Hansen:

One of the greatest things about going to QCon is that you can meet all the fantastic speakers in a very informal setting. Last night I was having conversations with people like Joe Armstrong, Steve Vinoski, Jonathan Trevor and Kresten Krab Thorup in the hotel bar and that was fantastic - I learned a lot.

And Ola Bini:

Then there was the speakers dinner... Lots of interesting discussions with lovely people. =)

Opinions about QCon itself

Erik Johnson said:

I'm on a plane returning from the QCon 2008 conference in London. It was a top-notch event and among the great presentations, two things I learned stand out. The first was that I need to learn Erlang. I spent some time with Erlang inventor Joe Armstrong, and had such good fun that I've already downloaded the bits and bought the book. Second, the REST rationale has really gelled and the proponents no longer see a need to argue their case – it's time to mature the story.

Nik Silver said:

There were very few moments for me during QCon London 2008 of earth-shaking enlightenment — if any. But every hour of the three days of the conference there were insights and guidance that could be tucked away, and reused later to save hours, days or weeks of time elsewhere. Snake-oil salesmen were thin on the ground, and instead there were dozens of people saying one or both of:
  • This is what we did; and
  • This is what you can do.
No magic, no silver bullets, but plenty of solid advice and experience.

Other thoughts:

  • Anders Sveen - QCon is on, and I'm not there. I had a blast there last year, so major envy to everyone that is.
  • Matthew Ford - I've just spent the last week at QCon and I've just about fully recovered (it was pretty intense). Considering there was only one Ruby track I was a little worried that I wouldn't find most of the talks interesting, however this wasn't the case. The most interesting talks had very little to do with Ruby.
  • Libor Soucek - QCon is without doubt a top-ranked conference and the first one I have visited thanks to my employer. I was very anxious to see what such a conference is like, and I have to admit it did not disappoint me at all, even though I was only part of it on the Wednesday.
  • Ola Bini - I had a great time and I look forward to being back the next time. I can definitely recommend QCon as one of the best conferences around in this industry.
  • Steve Vinoski - I just returned home from QCon London, and its excellence exceeded my expectations. As usual, the quality of speakers QCon attracts (just like JAOO) is outstanding, and they cover a very wide variety of topics.
  • Dionysios G. Synodinos - My impressions about the organization of the event are excellent. The facilities were more than adequate, the schedule was practical and worked out fine, and the quality of the presentations was very high.
  • Danilo Sato - I was really impressed with the quality of the conference, from tracks, to sessions, and speakers. QCon is one of the best technical conferences I've participated and I recommend it for anyone interested in enterprise software development. I'm looking forward to attending again next year.
  • Filippo Diotalevi - All in all, it was a really good conference. The level of the presentations was very high, and I particularly enjoyed the fact that there were a lot of speeches not strictly related to Java… obviously nothing against this programming language ;-), but I think a conference should be also a good opportunity to learn something different from the usual tools and languages.
  • Antonio Goncalves - I only had two days at the conference and I have to say, QCon is different from what I'm used to. The audience looked more experienced (or older if you want) and the quality of the presentations was really high. It's not just for techies and not just for Java either. There were several tracks like Agile, Ruby, Middleware, Web...
  • Mark Edgington - In short fantastic. I've attended many larger conferences and I found the smaller size more enabling for communication, both with the speakers and conference attendees. I attended tutorials on Agile management and DSL's (Domain Specific Languages) and followed tracks on cloud computing, effective design and architectures. Each of these had a great set of speakers and there was only one session in the whole week that I felt was weak. I left the conference armed with lots of ideas and inspiration and a handful of excellent contacts.
  • O'Reilly GMT - Over the last three days, O'Reilly have been working a book stand at QCon in the Queen Elizabeth Conference Centre in London. While we haven't been able to get into any of the presentations ourselves, word is this has been a fantastic conference, both stimulating and friendly, and it's certainly a pleasure to be here.
  • Simon Brown - I was talking to a few people about the conference and their experience was the same as mine - Thursday was the best day and there were a couple of time slots where I wish I could have split myself into two. Overall it was another great event and I highly recommend it for anybody thinking about attending next year.

Takeaway Points

Jan Balje said:

So, what do we make of all the information that was poured over us over the last few days? What is most important? Of course, we will need to investigate all this stuff further. But at first glance, I would say:
 
  1. REST looks like an important new trend.
  2. Erlang might answer the need for more Concurrent Oriented programming languages.
  3. Java for enterprise applications is very much alive, also for real time systems.

Peter Bakker had a long list, including:

  • Stateless Architectures push the "state" problem to the database, the trend is to reclaim the state in the application server and services, and put state close to the user
  • To make really scalable solutions: divide, split, partition, work asynchronously, no state
  • first build something simple, then think of all the "...ilities"
  • the accountability of software also holds for organic food: effective, reliable, reasonably priced. Compare bloated software to industrial food: unnecessary features are like unnecessary additives.

Mark Little related a thought that came to mind:

I wanted to mention something that came up there but which has been playing on my mind for a while anyway: the art of beautiful code and refactoring. I heard a number of people saying that you shouldn't touch a programming language if you can't (easily) refactor applications written using it. I've heard similar arguments before, which comes back to the IDEs available. I'd always taken this as more of a personal preference than any kind of Fundamental Law, and maybe that (personal preference) is how many people mean it. However, listening to some at QCon it's starting to border on the latter, which really started me thinking.

Maybe it's just me, but I've never consciously factored in the question "Can I refactor my code?" when choosing a language for a particular problem.

Markus Voelter posed some questions about functional programming and concurrency:

  • if I use a nice, potentially side-effect-free functional language (say: F#), what do I do with all the libraries (here: .NET) that are not functional?
  • which parts of my system should I write in a functional language for good concurrency support, and where should I not do that?
  • How much concurrency do I handle on platform/infrastructure-level (e.g. processes, EJBs, etc.) and how much do I handle on the language level? Which granularity is useful for which task?
  • Also: Assuming the platform provides a concurrency model, what can the language do to make sure I cannot (or I am discouraged from) interfering with the platform's concurrency model?
So, functional and concurrency experts in the world, please unite! and write a bunch of (context,problem,solution,tradeoff)-tuples (also known as Patterns) and present them at a future JAOO or QCon conference.

Cleve Gibbon pondered how software development has changed:

Rebecca was spot on when she pointed out that tools such as yacc/lex, and to a lesser degree antlr, have received bad press as being archaic, blunt tools with magical powers that are wielded by the chosen few. In reality, there hasn't been much cause for the masses to use them. Will the drive for external DSLs, where you want to create your own language, make them more popular? I doubt it. I think people will still opt for internal DSLs that are written essentially in the host language (e.g. rSpec, JMock, and so on).

But I have to realise now that those guys/gals born in the eighties and, heaven forbid, the nineties are operating off a completely different programming stack. Their minds are in different places and their toolset is somewhat orthogonal to mine.

Mark Edgington described the week after returning to the office from QCon:

Conversation around cloud computing has been a big hit. I got some good contacts and these have led to an investigation into using the elastic computing and S3 services from Amazon for one of our clients. Thoughts from 'The Zen of Agile Management' have allowed me to view our Agile and PRINCE2 processes in a new light; I expect my observations to work round to discussion and change in the coming weeks. Also, the Domain Specific Language knowledge has invigorated conversation around framework and language selection on projects. All in all a good week.

Simon Brown questioned whether UML is losing popularity:

So then, is UML on the way out? I'd be interested in your thoughts on the following.
  • What notation do you use for your architecture and design diagrams?
  • Is a standard diagramming notation important to you?
  • How does your audience influence how you create diagrams?
  • If you do use UML, what's your UML tool of choice?

Video

This year at QCon, a new experiment was tried out - a video blog called Live@QCon.


Podcast

Coding the Architecture has created a podcast discussing QCon London:

As promised the 2nd CTA podcast is a roundtable discussion between some of the CTA contributors - namely Simon Brown, Sam Dalton and Kevin Seal. In this podcast we discuss some of the themes emerging from the recent QCon conference held in London and our views on those themes.

Conclusion

QCon London was a great success and we are very proud to have been able to offer such a conference. It will continue to be an annual event in both London and San Francisco, with the next QCon being around the same time next year in each location. We also look forward to bringing QCon into other regions which InfoQ serves, such as China and Japan. Thanks everyone for coming and we'll see you next year!!!!
