Key Takeaway Points and Lessons Learned from QCon London 2008
In this article, we present the views and perspectives of many of the attendees who blogged about QCon, so that you can get a feeling for what the impressions and experiences of QCon London (March 2008) were. From the first tutorials to the last sessions, people discussed many aspects of QCon in their blogs. You can also see numerous attendee-taken photos of QCon on Flickr. This QCon was InfoQ's third conference and the second annual one in London. The event was produced in partnership with Trifork, the company that produces the JAOO conference in Denmark. There were over 600 badges printed, with 70% of the attendees being team leads, architects and above. 60% were attending from the UK and 40% mostly from elsewhere in Europe. Over 100 speakers presented at QCon London, including Kent Beck, Martin Fowler, and Erich Gamma. Going forward, QCon will continue to run in the UK around March of every year, and QCon San Francisco (see previous blogger comments) will be running November 17-21st, 2008.
Table of Contents
Tutorials
* Patterns for Introducing Agile Practices
* Domain-Specific Languages
* Coding the Architecture: From Developer To Architect
* The Zen of Agile Management
* Build Scalable, Maintainable, Distributed Enterprise .NET Solutions with nServiceBus
* Erich Gamma: How Eclipse Changed my Views on Software Development
* Martin Fowler and Jim Webber: Does my Bus Look Big in This?
* Kent Beck: Trends in Agile Development
SOA, REST and the Web
* REST: A Pragmatic Introduction to the Web's Architecture
* Using REST to aid WS-* - Building a RESTful SOA Registry
* REST, Reuse, and Serendipity
* Diary of a Fence Sitting SOA Geek
* A Couple of Ways to Skin an Internet-Scale Cat
The Cloud as the New Middleware Platform
* Amazon Services: Building Blocks for True Internet Applications
* Application Services on the Web: SalesForce.com
* Google GData: reading and writing data on the web
* Yahoo Pipes: Middleware in the Cloud
* Panel: Programming the Cloud
Banking: Complex High Volume/Low Latency Architectures
* Technology in the Investment Banking Space
* Keeping 99.95% Uptime on 400+ Key Systems at Merrill
* Real-time Java for Latency Critical Banking Applications
* From Betting to Gaming to Tradefair
Programming Languages of Tomorrow
* The Busy .NET Developer's Guide to F#
* Haskell: Functional Programming on Steroids
* Functions + Messages + Concurrency = Erlang
* The Busy Java Developer's Guide to Scala
* Open Space session
* Concurrency, Past and Present
* Blending Java with Dynamic Languages
* Evolving the JVM
* The Cathedral, the Bazaar and the Commissar: The Evolution of Innovation in Enterprise Java
* Evolving the Java Language
* Architectural Implications of RESTful design
* Introducing Spring Batch
* Panel: Open Source and Open Standards
* Panel Discussions: Architecting for Performance and Scalability
* Clustered Architecture Patterns: Delivering Scalability and Availability
* Testing by Example with Spring 2.5
Browser & Emerging Rich Client Technologies
* GWT + Gears: The Browser is the Platform
* The DOM Scripting Toolkit: jQuery
* Tackling Code Debt
Agile in Practice
* Managers in Scrum
* Agile Mashups
* Beyond Agile
* A Kanban System for Software Engineering
* Measure for Measure
.NET: Client, Server, Cloud
* Building Smart Windows Applications
* Building Rich Internet Applications
* Windows as a Web Platform
The Rise of Ruby
* Panel: When is Rails an Appropriate Choice?
* Intentions & Interfaces - Making Patterns Concrete
* A Tale of Two Systems
* User Interfaces: Meeting the Challenge of Simplicity
* Effective Design
Architectures You've Always Wondered About
* eBay's Architectural Principles
* Architecture in the Media Production Workflow
* Behind the Scenes at MySpace.com
* Market Risk System @ BNP Paribas
Domain Specific Languages in Practice
* External Textual DSLs Made Simple
Opinions about QCon itself
On Monday I was delighted to attend Linda's tutorial. It had a very personal touch and was very interactive. She presented the patterns for introducing new ideas into organizations through a play, with us participants as actors. It was actually the story of her own introduction of Design Patterns back in 1996. During the discussion many of the subtleties of the patterns (i.e. context and forces) were addressed. Having us contribute by playing and asking lots of questions helped a lot to really absorb the essence of the patterns. It was a lot of fun.
Linda is a great speaker with deep understanding and lots of wisdom to share. Thanks a lot for the tutorial.
Martin Fowler started by discussing what DSLs are and giving some examples that many of us use in our day-to-day job, like the XML configuration files in the Java world. XML configuration is a kind of DSL: it has its own keywords and syntax for expressing information that will be used, for instance, to configure an underlying framework.
The problem with XML is that it becomes hard to see the overall behavior behind it. It's not easy to understand the purpose of an XML file just by looking at it for the first time. There is too much "noise": things that get in the way of readability. YAML files are a much more readable alternative to XML.
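As a hypothetical illustration of the readability point, here is the same configuration expressed in XML and in YAML. Both snippets are invented for this comparison, not taken from any real framework:

```xml
<!-- XML: the intent is buried in markup "noise" -->
<bean id="orderService" class="com.example.OrderService">
  <property name="timeout" value="30"/>
</bean>
```

```yaml
# YAML: the same information, with far less ceremony
orderService:
  class: com.example.OrderService
  timeout: 30
```

The information content is identical; only the ratio of signal to syntax changes.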
I've seen bits and parts of this tutorial before, but seeing as the three speakers are working on evolving it into a full and coherent "pedagogical framework" for teaching DSLs, the current presentation had changed quite a bit since the last time. I really liked it and I recommend it to anyone interested in getting a firm grasp of what DSLs are. Having Rebecca talk about external DSLs in the context of parsers and grammars makes total sense, and the only real problem I would point to was the time allotted to it. Her part of the subject was large enough that 75 minutes felt a bit rushed. Of course, I don't see how Martin's or Neal's parts could be compressed much more either, so maybe the subject actually is too large for a one-day tutorial? Anyway, great stuff.
My takeaway was that the use of DSLs needs to gain more prominence, in a similar way to Agile development, which took years to achieve mainstream prominence. Through the use of DSLs developers will gain skills and solve problems in a more elegant fashion, so it's a definite win, and with a Martin Fowler book on the subject it should get wider attention.
- There are a lot of unknowns surrounding the whole area of DSLs, and what Fowler and co. are doing is exploring the problem space and proposing options.
- Some languages, such as Ruby, Groovy, and Scala, lend themselves more easily to writing readable internal DSLs
- There were a LOT of Ruby guys in the room, including the leads on rSpec, an internal DSL for Ruby testing
- The definition for DSLs is wide open for interpretation and a lot more will happen in the space over the coming months
- I need to do some more reading around the concept of language workbenches. Very interesting stuff.
At QCon London I caught the Domain Specific Language (DSL) tutorial by Martin Fowler, Neal Ford, and Rebecca Parsons. While Martin covered how and why you would want to create a DSL, he discussed hiding complexity and designing solutions for specific problems. As Martin went into further detail all I could think was: Simplexity.
Anders prefers simple all the way down. Of course, hopefully we all prefer simplicity; however, the devil generally lives in the details.
During our tutorial last week at QCon, we asked attendees to define the software architecture for a small software system and provided a handout containing some guidelines. Since this may prove useful for other people, we're making Software Architecture Document Guidelines v0.1 available for download.
This document was also written about on InfoQ.
David Anderson came at agile almost from the standpoint of standard problem software projects. He looked at how these could be edged towards the agile world through clear identification of the value stream (process) and the examination of metrics around it. His key takeaways were that quality should be the focus and that reducing work in progress (WIP) leads to efficiency: in effect, shorter cycles or sprints work far better than large batches of work. Having concentrated on agile methodologies, this viewpoint of how to get to agile inspired lots of thought; it is often the case that a covert agile approach must be followed, where a full and open method such as Scrum cannot be taken.
I gave a full day tutorial on nServiceBus and we had a full house! The tutorial was about 90% how to think about distributed systems, and 10% mapping those concepts onto nServiceBus. I made an effort to cram about 3 days of a 5 day training course I give clients into one day, but I think I was only about 85% successful. People didn't have the time needed to let things really sink in and ask questions, but the lively forums and skype conversations available will probably do the trick.
Several people commented on this keynote, including Jan Balje:
A good keynote by a famous name about the development of Eclipse, with points about architecture, open source, process, etc. At the end he demonstrated Jazz, a really (really) big environment for large distributed software development. It looks interesting; alas, it's not open source.
One interesting thing I didn't know was that Eclipse now includes API tools that allow you to annotate APIs with version information and fine-grained access control, and then have the Eclipse IDE check these for you.
So WTF is Jazz? Explanation starts with great clouds of fog: "integration", "community" and even "Eclipse experience". Scary diagrams with a lot of arrows. Starts sounding like Twitter on steroids — you can follow events or channels, events are produced from both web and Eclipse plugins. You can also track basic statistics like defect progress. Doubt you can get updates to IM or SMS, though :)
The thing I will say is that it's always good to hear the real-world stories about iterative development - in this case, the Eclipse team work on 6 week iterations and don't necessarily have a fully shippable product at the end of them. Erich wrapped up the keynote with a demo of the new Jazz platform, which pulls together all of the tools in your standard development suite into a fully integrated workflow. This looks like a rehash and enhancement of Rational's Unified Change Management (UCM) platform using Eclipse as the front-end. That might not be totally accurate (I think I saw a Subversion bridge in the slides), but you get the point.
Opening talk from Erich Gamma on how the Eclipse IDE has changed software development for him and his team in the seven years since he began writing it. Interesting stuff, showing the transition from a closed source project to an open source community project, and how they migrated from a waterfall methodology, which allowed them a slow build-up of development only to have to panic near the deadline, to a more iterative agile methodology which forced the team to focus more on delivery and shipping. Finally he introduced Jazz as a team collaboration tool which integrates very tightly with Eclipse.
The biggest takeaway was that a program release should stick to a firm release date. From the Eclipse point of view the best date is June, as this creates the smallest disruption due to holidays and other events. They develop in six-week "sprint" periods (one week of planning, four of development, and one of integration and code finalization/testing) throughout the year to add new features for the yearly release. One month before release they fully dedicate all development resources to final code polishing and testing.
What was also quite interesting in Erich's presentation was his description of how quickly a strong Eclipse community formed. The community fully covered support of newly joined developers/users, writing manuals, etc., which freed the original IBM development resources to again do what they know best: full-speed development. This is definitely a very interesting model to consider for similar projects.
and Maarten Manders:
Erich Gamma's keynote was basically a promotion talk for IBM Rational's newest product, called Jazz. It's project management software, like Jira (the one we use), but it goes far beyond it in terms of integration, because it works so well with Eclipse. Furthermore, it seems to have a good system for managing development teams. All in all it seems like a nice product - I especially like the fact that it was made with agile development techniques in mind. The big question that remains is: how well does it work with PHP?
Gamma spelt out the Eclipse philosophy, the starting point being that everything is a plug-in; that APIs matter a lot and it's better to get a small API right than to get it wrong and have to support it forever.
He then talked about iteration, a key tenet of agile development. He showed a great slide which charted the progress of some projects, from "all the time in the world" at the beginning, to "say goodbye to your loved ones" at the end, followed by total exhaustion after the thing is shipped. Iterative development with continuous builds and sign-offs every 6 weeks is less stressful and more productive.
The meme of the conference for me came from the keynote talk I did with Martin Fowler. The middleware vendors aren't giving us big powerful software, as they think, but are instead proffering big flabby software, complete with what came out during the talk as "enterprise man boobs."
The slides are available at the QCon Web site, but they mightn't make too much sense without the video (which the QCon guys will release in the coming months), apart from the picture of the flabby bloke whose man boobs will forever epitomise enterprise middleware.
Many people wrote about this keynote, including Andrew Whitehouse:
I have personally experienced what I believe is a lot of unnecessary complexity in Enterprise software, and it is refreshing to see Martin and Jim cut through this to come up with a set of principles for effective (lightweight) delivery. I'm also pleased that ThoughtWorks are actively promoting (J)Ruby on Rails in an Enterprise context as this seems a natural successor to "traditional" Java development, and in my opinion they seem to be one of the most enlightened consultancies on how to deliver Enterprise software effectively. (They're also agile.)
The day finished with a great, entertaining keynote given by Martin Fowler and Jim Webber on the theme of ESB use in SOA-based applications. In many ways it reminded me of presentations from Don Box.
Even though the presentation carried essentially the same information as one I had already seen recorded on the internet, it was nevertheless great fun. They put on quite a show and made a great case to convince all attendees to prefer internet-based message integration via the standard HTTP protocol for SOA over ISV-specific solutions. I would say everyone got this message clearly and should seriously start thinking about how to use it in their own solutions.
Jim and Martin's keynote, called 'Does my Bus look big in this', mercilessly dissected current middleware approaches. It was an extraordinarily sharp-witted and entertaining talk in which they used their entire linguistic repertoire to pick on middleware vendors and conceive beautifully eloquent metaphors such as "man boobs" (instead of bluntly calling them fat, ugly, bloated middleware products).
After a busy day at the office I managed to get to QCon just in time for the keynote by Martin Fowler and Jim Webber. Both of them are, as you probably know, great speakers, and the keynote was entertaining. They had slides on why EAI, SOA and ESBs are a joke (as a concept, to maintain), but I can't say they provided any alternatives. They basically said:
In reality though, you often get the most business value by buying 3rd party software. And it needs to be integrated... Not that I enjoy it though. If you build it yourself, then obviously light HTTP services is the way to go.
- Focus on business value from the first iteration
- Keep it simple
Imagine Martin Fowler (in his beloved leather pants) dancing on stage while Jim Webber is giving another rant on bloated enterprise middleware (check out his last take on ESBs). Of course, I'm talking about the evil Enterprise Service Bus, which has become so fat that it's grown enterprise manboobs! The slides speak for themselves, get them here! Unfortunately, there's no video yet.
And Tim Anderson:
TIBCO, BizTalk, webMethods, you name it, "they're a pain in the neck to use", said Webber.
Enterprise Service Bus? Should be called the "Erroneous Spaghetti Box". SOA? "A dog's breakfast."
According to Fowler and Webber, the Web is the answer. "The dumbness of the internet is a real win... it allows you to do things that you did not think of." The Web is ubiquitous middleware, incremental and low risk.
Squid is your Enterprise Bus... We're not going to need all this crazy middleware that middleware vendors try to sell us. We don't like ESBs... The big up-front middleware approach just isn't very sensible.
Kent Beck's keynote was another popular discussion topic, with several people writing about it such as Joachim Recht:
One interesting bit came up in regard to discipline. I've always said that XP and agile processes take discipline to implement and use. Kent Beck's take on this was that it was just the opposite - not doing XP was hard for him. Instead, it's more or less a question of habit, which is where the problem often lies: Changing part of yourself requires an investment, but it's not completely clear when the investment will yield a profit. Ironically, this economical argument is also used to promote XP: push the cost into the future and pull the profit closer - for example by releasing often, not gold plating, and so on.
There were lots of interesting slides about the rise in tests, quick releases and lots of other agileness, but the most interesting aspect of the talk for me was the rise of the new generation of tech-savvy business professionals. The old "wizards", detectable by their strange socially inappropriate behaviour, are out, as a generation of Nu-Geeks with social skills like listening, teamwork and emotional intelligence rises to the challenge of making businesses happy.
Kent Beck's keynote was also quite interesting. He thinks that in the future we will write (even) more tests, deploy applications more frequently (apparently Flickr does it every 30 minutes!), work in teams that are more distributed, and solve more complex problems. He also believes that with the rise of a new generation of tech-savvy business people, software developers will increasingly lose their 'wizard status' and need to invest more in their non-technical skills.
Kent Beck's keynote was excellent. It was about developer responsibility, developer integrity, and the relationships developers have with those around them (here's a good summary). Extremely insightful, not unexpectedly of course, and covering important topics that are unfortunately often taboo among technical folk.
Highlighting that the focus should be on social rather than technical skills, Kent coaxed developers towards integration with the business people. He pointed out that honesty works, and that hiding behind complexity and changing requirements is not the best way to build business partnerships and get people to trust in the software you're developing. This is a simple and yet key problem for many.
Kent Beck is really a relationship consultant, or should that be counsellor? This is not a bad thing. Beck gave a keynote this morning here at QCon and talked a bit about techie topics like frequent deployment (he claims that Flickr deploys every half an hour) and creating more tests more often, but the main focus of his talk was relationships within the development team and between the team and the business people (if they regard themselves as separate).
Beck says that the ubiquity of computing is changing the typical characteristics of a programmer. When only geeks had computers, programmers were inevitably geeky - and for whatever reason, that often meant something of a social misfit. Today everyone grows up with computers, which he says makes programming more accessible to non-geeks, who have better social skills.
He didn't talk much about the technical skills that a team should have, but more about social skills. There's no need to have a very clever technical guy if he can't work within a team. One person can ruin the productivity of an entire team. But that raises another recruitment issue: how do you evaluate social skills? This seems underestimated, but as Kent said, it can be learned. For him one of the most important skills is being able to listen.
And Steven Mileham:
The day began with an interesting talk on Agile development methodologies (yet again). What was different about this presentation was that it was more focused on the social skills needed in a team in order to work in an iterative, collaborative manner. His main point seemed to be to focus on what you as a team are good at, and to make sure that your energies go into that, rather than into inventive spin and lies or excuses as to why performance wasn't what it should have been.
Beck noted wryly that traditional approaches to software development ran contrary to economic realities. Yet, despite bold attempts at change - such as experimental work on URL-driven design (UDD), literally generating HTML code in real time in response to a web request, during the early days of XP - he has settled on a measured approach.
"Received wisdom is that if you spend time up front getting the design right you avoid costs later. But the longer you spend getting the design right the more your upfront costs are and the longer it takes for the software to start earning. So a rational model of software is to design it quickly - the economic pressure to improvise presents an interesting challenge," Beck told QCon.
The discussion ranged over a variety of topics, but the primary focus was on how individuals and JUGs could get involved in the JCP. Several people expressed concern about what they saw as obstacles to entry (for example, the legal "participation agreement" that members must sign and which many people find intimidating), and we all recognized that it is more difficult for an individual to get involved than for someone whose activity is sponsored by their employer. However, since we call our organization the Java Community Process, I am determined to do whatever I can to encourage and enable individuals to participate. My primary reason for attending QCon was to meet with a broad cross-section of Java community members, and I'm glad to report that I was able to do so. I'd like to thank the QCon organizers for giving us this opportunity, and of course I also want to thank everyone who attended our sessions.
It was a very informal BoF, just as I like them. About twenty people turned up, and with Patrick Curran, Rod Johnson, Peter Pilgrim... we talked about the JCP. Rod was less harsh, admitting that the JCP has opened up a lot. I shared my experience of being an expert group member, and others shared theirs of being JSR leads. We talked a lot about transparency, open mailing lists, wikis… ideas that would bring more transparency to the JCP. I was surprised to learn that a spec lead does more or less what he/she wants. There are even some JSRs that already have a public mailing list. I asked Patrick what percentage of JCP participants are individuals. I was expecting a figure between 10% and 20%, but no, it's three quarters. I'm not the only individual involved in that then ;o)
At this BOF I had the opportunity to meet the faces behind InfoQ and learn about how it evolved and its future directions. It was a friendly interaction, where several participants gave their opinions on how this information-rich community site can grow.
The BOF ran so long that the team from Software Engineering Radio, who were holding the next BOF in the same room, arrived, and we all had the chance to interact together until the place had to close.
The public "API" for a RESTful application is its URI address space. You can invent a list of URIs mapped to resources and state sequences all you like. But the reuse potential is limited to whatever your callers can get out of that URI space. Like REST, SQL databases have a uniform interface. But look at the practically unlimited variety of resources you can access. Obviously, a REST URI shouldn't be a SQL statement and I'm not trying to shoehorn XQuery into a URI. All I'm saying is that a URI space can incorporate parent-child and relational characteristics from a data model – using relational database behavior as a guide. This has been a key aspect (for 8+ years, BTW) in developing URI strategies for our products.
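The parent-child idea the author describes can be made concrete with a small URI-building sketch. The customers/orders model and the `child_uri` helper here are hypothetical, invented purely to illustrate how a URI space can mirror relationships in a data model:

```python
def child_uri(parent_uri, child_type, child_id):
    """Derive a child resource's URI from its parent's URI, so the URI
    space mirrors the parent-child structure of the data model."""
    return f"{parent_uri}/{child_type}/{child_id}"

# A customer resource, and an order that belongs to it: the order's URI
# encodes the relationship, so callers can navigate it without out-of-band
# knowledge.
customer = "/customers/42"
order = child_uri(customer, "orders", "2007")
print(order)  # -> /customers/42/orders/2007
```

The design point is that the hierarchy is carried by the URI itself, much as foreign keys carry relationships in a relational schema.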
The emerging specs and toolkits, like WADL and WCF, feature URI template constructs. But URI templates have no notion of resource linkages (parent, relational, or otherwise), and that limits their effectiveness. At QCon, there was little consensus that WADL was the right way to describe a REST application. But I think REST description languages with resource types are coming, and I'd like their creators to at least consider resource-linkage features for URI templates. It's all been done before.
- I've been developing applications on the Web since it was first released: being at University at the time, I had a lot of freedom to play. I even wrote a browser in InterViews! (Anyone else remember gopher?) Anyway, I remember being glad when the first URN proposal came out because it looked to address some of the issues we mentioned at the time, through the definition of a specific set of name servers: no longer would you have to use URLs directly, but you'd use URNs and the infrastructure would look them up via the name server(s) for you. Sound familiar? Well fast forward 10 years and that never happened. Or did it? Well if you consider what a naming service (or trading service) does for you, WTF is Google or Yahoo?
- My friend and co-InfoQ colleague/editor Stefan has another nice article on REST. In it he addresses some of the common misconceptions around REST, and specifically the perceived lack of pub/sub. You what? As he and I mentioned separately, it seems pretty obvious that RSS and Atom are the right approach in RESTland. The feedback I got at QCon the other week put this approach high on my pet-projects list for this vacation, so I've been working on that for our ESB as well as some other stealth projects of my own.
Stefan Tilkov led a track on SOA, REST and the Web. Now, the general theme of this (following on from the Fowler/Webber session the day before on the shortcomings of the Enterprise Bus) is that SOAP and WSDL and WS-* have failed to deliver and that REST is fundamentally a better approach to designing distributed inter-application systems. What's wrong with WS-*, SOAP and WSDL? Too many standards; too complex; too brittle; too incompatible; too few free and open source implementations; leaky abstractions; hijacked by middleware vendors who have an interest in keeping technology arcane and expensive.
By contrast REST is being embraced for all sorts of reasons, ranging from purist arguments about the value of resource-based computing where everything has an URI, to pragmatic arguments along the lines of "it works, I can use it, I understand it." However, if you poke at some of the solutions which are described as REST, it turns out that some are more RESTful than others - using HTTP as a transport for POX (plain old XML) is not necessarily REST.
The rest of the day I spent in the "SOA, REST and the Web" track. Having now finally grasped the concept of REST services, I want to go back and rewrite all the web services I've already built. Whereas traditional "Web Services" focus on defining specific interfaces and APIs which must be continually maintained, so that if the back end is changed the consumers of that service must be updated, REST utilises the standard operations of HTTP. By using these methods, any HTTP client can now consume your service; each service is identified by a URI which describes the resource, e.g. http://example.com/orders/2007 would return all orders from 2007.
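The uniform interface described above can be sketched as a tiny in-memory dispatcher. This is not any real framework's API; the `/orders/2007` URI, the `store` dict, and the `handle` function are all invented for illustration, standing in for what an HTTP server would do with real requests:

```python
# Minimal sketch of REST's uniform interface: every resource is named by a
# URI, and the same small set of verbs applies to all of them.
store = {"/orders/2007": ["order-1", "order-2"]}

def handle(method, uri, body=None):
    if method == "GET":                       # read the resource
        return store.get(uri, "404 Not Found")
    if method == "PUT":                       # create/replace at a known URI
        store[uri] = body
        return body
    if method == "POST":                      # append to a collection
        store.setdefault(uri, []).append(body)
        return store[uri]
    if method == "DELETE":                    # remove the resource
        return store.pop(uri, None)
    return "405 Method Not Allowed"

print(handle("GET", "/orders/2007"))  # -> ['order-1', 'order-2']
```

Because the verbs are fixed, any generic client can talk to any resource; only the URI space and representations vary from service to service.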
Former Web Service evangelists, like Steve Vinoski (Iona, Virtue) and Mark Little (HP, WS-* spec lead, JBoss), are spending their time bashing SOAP and WS-*. HTTP Web Services and the architecture represented by REST are the new reaction to the over-complicated best practice. REST has been used for many years and is a core interfacing technology at global players like Google. Amazon is also increasing its use of REST. Looking at the history, is there anything specific about REST that prevents it from starting its own journey up the complexity scale, repeating the history of its mainstream predecessors? Either that, or failing due to an inability to accommodate new requirements? Are we heading toward REST-*?
As this was a new concept to me, I decided to listen in. A good talk, although I didn't completely understand it in one go. It seems REST is a set of five principles which you can apply when developing web applications. This gives you a lot of technical possibilities. But as far as I can see it's an alternative to web services. An important new trend already, and we still haven't finished with the previous one.
A great presentation on the basic principles of REST and why you should care about it. It is interesting to see that after so much technological elaboration the last few years, it is all coming back to the basic nature of the WWW.
- Talk is about applying REST to a real application problem - managing web services metadata
- Hasn't seen Web UIs that are also APIs - e.g. people can't set Accept headers for debugging purposes in their browsers
- Benefits of AtomPub: clear behavior of POST, GET, PUT, DELETE
- Maybe there could be a generalized REST or Atom query language
- Main problem in SOAs: trust as a root cause
- WS-* metadata isn't enough for the real world - some real life annotations are needed
- Lifecycle handling - still basic, more coming in version 1.1
- "The definition of a legacy application is 'one that works'" -- QoTD!
- EAI products are typically centralized hubs - proprietary, expensive, costly to maintain
- Code generation (whether from WSDL or from code) creates deceptively significant consumer-service coupling
- Integration problem summary: proprietary approaches are too expensive; standard approaches focus on implementation languages, not distributed-systems issues; new interfaces mean new application protocols; ad hoc data formats are coupled to interfaces -- all of this inhibits reuse
- A question to consider -- was the pipe invented on day 1? Or discovered later -- serendipity?
- Most of the stuff Steve's done in the past -- building ORBs and such -- dealt with the effects of having a specific interface (generating code, creating the runtime infrastructure). Most of this stuff disappears in REST
- Praise for the REST dissertation - "the clearest architecture document he's read"
- Summary: RPC-oriented systems try to extend language paradigms over the wire -- encourages variation (in methods, datatypes), which can't scale
- REST is purpose-built for distributed systems, properly separates concerns and allows constrained variability only where required
He started with an architectural slide showing interoperability between different systems using DB, SMTP, HTTP, MOM (expensive), ESB and EAI (the same thing, just relabelled), JCA, RPC (ignores partial failure), JAX-WS (marshalling/unmarshalling)... Before talking about REST, Steve talked a bit about Unix pipes. They have a very uniform interface and standard file descriptors. Any command can take something as input (stdin), produce something (on stdout) and deal with errors (stderr). That's why we can combine them in any way. REST also has a uniform interface (GET, PUT, POST, DELETE), and you can pipe resources and encourage the combination of orthogonal applications.
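The pipe analogy above can be made concrete: because every stage shares the same interface (text in, text out, like stdin/stdout), independently written steps compose in any order. A small sketch, where the stage functions are invented for illustration rather than taken from the talk:

```python
# Each stage has the same uniform interface: str -> str, mirroring how
# every Unix command reads stdin and writes stdout.
def sort_lines(text):
    return "\n".join(sorted(text.splitlines()))

def unique(text):
    # Keep the first occurrence of each line, like `uniq` on sorted input.
    seen, out = set(), []
    for line in text.splitlines():
        if line not in seen:
            seen.add(line)
            out.append(line)
    return "\n".join(out)

def pipe(text, *stages):
    # Chain stages left to right, exactly as shell pipes do.
    for stage in stages:
        text = stage(text)
    return text

result = pipe("b\na\nb", sort_lines, unique)  # like: ... | sort | uniq
print(result)  # prints "a" then "b", one per line
```

The point is that none of the stages know about each other; the shared interface alone is what makes arbitrary recombination possible, which is the property the speaker attributes to REST's uniform interface.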
- 2000-2005: spent 5 years going backwards to distribution transparency - it feels like 1970 all over again
- Distribution must be explicit, otherwise you're not able to deal with failures
- Problems with tight coupling introduced by trying to make a remote interaction look like a local interaction
- uniform interface enables generic infrastructural support
- specific interface allows for more limited generic support
- Standards: The Web is a series of standards, universal adoption has to count for something, REST/HTTP is ubiquitous
- Schizophrenic about whether to prefer messaging or the Web
- The XML fairy sprinkles pixy dust (which may in fact be crack cocaine) on your enterprise systems
- Not everything needs to be an OASIS standard. We know not to take a leak in public. (He said this.)
- Serendipity is great - don't let the RESTafarians tell you different
- Innovation at the edges of the Web - not by some central design authority such as the W3C
- With changing contracts as part of a resource, we can't be too imperative anymore
- Summary: both the Web and Web services community suffer from piss-poor patterns and practices and awful tooling
Patric Fornasier also commented that this talk "showed some pretty interesting ideas that use existing Web infrastructure (i.e. no WS-*) for integrating applications and realizing business workflows", and Mark Edgington said that this talk was "As expected a full energy opinionated talk on why REST together with the internet as your enterprise bus is leaps and bounds above anything vendors or WSDL and The WS-* (death star) specs have to offer".
Need some more storage? Take S3. Need to quickly scale up with another 20 servers? Take EC2! Need to get to a user's mails, calendar and other stuff? Use the services of Facebook, Google and Yahoo. In the end, just mash it all together with Yahoo! Pipes! I think it was Nati Shalom who made this interesting remark about cloud computing: Developing new applications yields very small risks nowadays, because it's so cheap & easy to plug together your application. If you stumble, you won't fall hard. And if you succeed, the cloud will do the heavy lifting and help you scale out. In my opinion, this could be the next big leap to make (web) development more agile again, after the rise of dynamically typed languages.
Filippo Diotalevi said that this track was "the most interesting track I attended at QCon", and that:
Cloud computing can be seen in two different ways:
- from developer/architect perspective, it is an architectural style that relies on the idea of having a "cloud" of (unlocalized, dynamic, distributed, unreliable but redundant) services that can be discovered and used by applications
- from a "user" perspective, it can be seen as a way of creating new services with no (or just a few lines of) code, simply wiring together different services and content providers freely available on the cloud (internet)
Jan Balje described how Amazon's Services are removing some of the excuses for project failures:
Most programmers or students use a lack of hardware resources as one of the reasons their project did not meet the expectations the teachers (or they themselves) initially had of it. If we had more computing power, they say, we could have made this or that feature work, we could have got some more work done in the little time that was available for the project, or the query would not have taken as long as it does now.
Jeff Barr from Amazon.com has put the lie to these kinds of arguments. Being, in his own words, a real web-service evangelist, he introduced the gathering to the other Amazon.com, the one that at the moment has three data-centers (two in the US, one in Ireland) that enable everybody to get as much computing power as they need on the fly, for very little money. Amazon has created web services that take care of all the muck (as the other guy from Amazon, Jeff Bezos, used to call it) of programming, such as load-balancing, initializing servers and services and that kind of more mundane work. Once registered, users can fire up servers using a Firefox extension and ssh to them immediately. If need be, another server can be fired up using the exact configuration of the first one.
The first part of the talk is about S3 file hosting, which is done through a Firefox plugin (which I mistook for FireFTP first). You can upload files, assign ACLs, get the URL for publishing and pay proportionally to storage used, requests done and bandwidth used (all three have assigned fees). Nothing technically fancy, but cool nevertheless.
The next part is about EC2, which rents virtual servers on the fly. You basically store a (special) disk image on the S3 service and then boot a number of virtual servers from it. You get root access to your servers and pay per hour of use. You can add/remove servers both programmatically and from a Firefox extension.
I wasn't aware that Amazon even had anything in this sector; I was expecting services for e-commerce, to be honest. What they actually covered, though, was a completely scalable server infrastructure upon which you could run any application you wanted. Using their server farms, they host numerous virtual servers which, for a reasonable fee, can be dynamically created, clustered and utilised for an arbitrary amount of time, charged by the hour and by storage. Creation of servers was entirely scripted to allow for scaling when demand reached a specific point or more storage was required.
An anonymous attendee looked at the cost benefits of using these services:
Given the costs and inflexibility of typical, old-style computing centres, it is certainly an interesting option for us. A quick calculation tells me that replicating the part of our infrastructure where Amazon might be a good idea would cost us about $900/month. I don't know the exact figures of our current data center provider, but I'd be surprised if it was that low.
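As a rough illustration of where a figure like $900/month might come from, here is a back-of-envelope calculation using the EC2 and S3 list prices as they stood around this time ($0.10 per small-instance hour, $0.15 per GB-month of storage) - treat the rates and the workload as assumptions, not a quote from the talk:

```java
// Back-of-envelope cloud cost sketch. The rates are assumptions based on the
// published EC2/S3 prices of the period, purely for illustration.
class CloudCostSketch {
    static double monthlyCost(int instances, double hoursPerMonth, double storageGb) {
        double computeRate = 0.10;  // USD per small-instance hour (assumed)
        double storageRate = 0.15;  // USD per GB-month of storage (assumed)
        return instances * hoursPerMonth * computeRate + storageGb * storageRate;
    }

    public static void main(String[] args) {
        // e.g. 12 always-on small instances plus 200 GB of storage:
        System.out.printf("%.2f USD/month%n", monthlyCost(12, 720, 200));
    }
}
```

Twelve always-on instances plus 200 GB comes to roughly $894/month, which is the kind of arithmetic behind the blogger's estimate.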
I already use Jungle Disk, which is built on Amazon S3 (the Simple Storage Service). But this talk went through the entire set of services, giving enough insight into each to provoke thought as to potential uses. Of great interest to me was the Elastic Compute Cloud, allowing for fast scalability and setup, with a time, bandwidth and computing power pricing model. The up-and-coming SimpleDB, an object database, looked very interesting.
So SalesForce is just an example of an application built on the Force.com SaaS platform.
The platform has a lot of nifty 4GL features that boost development of business applications (the domain of Force.com). The more I heard about Force.com, the more it made me think of SAP's mySAP ERP application and the business application platform on which SAP applications have been built for ages.
The loop was closed when the SalesForce architect revealed a proprietary business logic language named Apex (The SAP business logic language is named "ABAP"). As SaaS grows, it will be interesting to see if Force.com becomes the "SAP of SaaS".
The Google Data API talk concentrated on the decisions behind the selection of REST over SOAP; basically, REST's four operations (GET, PUT, POST and DELETE) are likely to cover 90% of your needs. It also covered the extensions they have developed around query, authentication, concurrent operations and batch updates. These concepts were tied in nicely to examples of use and comments regarding the benefits of building on or with standards; less need for documentation being a big one.
I posted about pipes about a year ago and it has since increased its modules from 20 to 50 and makes up 1/3 of all mash-up calls to Google. I really need to play with it some more, it really is very cool, bringing a lot of power without the need to code and enabling those that can to spend more time on the applications that consume the data.
The panel of the day's presenters covered the whole spectrum of cloud computing, from the current position to future issues. I have five pages of notes from this, so not one for my N810 or my thumbs will go dead. The key point of interest to me was that the cloud is almost a renewal of some old technology ideas that did not quite make it, mixed in with standard tried and tested ideas and innovative pricing. If there was or will be a key issue, it has to be security (trust); I think it would only take one major security breach (loss or theft of data) to take down a company. Many will have to base themselves firmly around trust, so one to watch.
Mark also discussed the Rubik's cubes which were given to each member of the panel.
The banking track was very interesting for me, not only because I'm working in exactly the same domain, but also because the challenges imposed by high volume/low latency systems demand a very well balanced architecture with extremely careful selection of the technology in use. Moreover, in this domain it is true that some of the latest/greatest emerging technologies are not always usable (for example dynamic languages, WS-*, etc.)
First up was John Davies who jumped in at the last minute because the speaker for that slot couldn't make it to the conference. Instead of a session about domain specific languages, John presented an overview of technology within the investment banking space. It was a really interesting talk and very nicely summarised many of the trends that we've seen over the past few years (e.g. compressed on the wire message formats rather than XML, etc). The key takeaway point for me was that you need to design for scalability. This is one of the reasons why I think it's important that software systems have an explicit and intentional architecture, with somebody taking responsibility for it.
Actually, to my surprise, none of the presented applications on the "banking" track even touched Windows and/or .NET based systems! According to John Davies' presentation, 80-85 percent of applications in this domain are actually written in Java. The rest of the "market" share is then predominantly occupied by C/C++, due to its performance/memory capabilities and predictability of execution (i.e. all "standard" GC based environments are fighting with unpredictable execution times in near real-time, low latency applications).
I think the best talk on the Banking Track was given by Iain Mortimer, Chief Architect at Merrill Lynch. He told us how they gather 9 billion monitoring messages a day! It turns out that every component in their infrastructure and application stack is constantly producing monitoring messages, and they really seem to care about microsecond latencies while doing so.
Naturally, the most challenging task is to make sense of this log tsunami. The goal is to reliably spot system failures without getting spammed by useless alerts. So if for example your hard disk is full, your system will produce tons of error messages: First of all you'll get a capacity error. Then, files can't be written anymore, queries fail, your service queues stack up and finally, you'll run out of memory. Every one of these failures will generate a lot of redundant error messages. However, the one and only message you're interested in, is that your disk is full. Fixing it will make the others disappear - that's called correlation.
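The correlation idea can be sketched as a walk up a causal chain: an alert is suppressed when one of its known root causes has also fired. The rule table below is invented for the example; a real system would derive it from topology and vendor knowledge bases:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of alert correlation: each alert type may declare a
// "caused by" parent, and an alert is dropped if any ancestor in its causal
// chain also fired. The rules here are hypothetical, mirroring the full-disk
// cascade described in the text.
class AlertCorrelator {
    // child alert -> the root cause that explains it (invented rules)
    private static final Map<String, String> CAUSED_BY = Map.of(
        "WRITE_FAILED",  "DISK_FULL",
        "QUERY_FAILED",  "WRITE_FAILED",
        "QUEUE_BACKLOG", "QUERY_FAILED",
        "OUT_OF_MEMORY", "QUEUE_BACKLOG");

    static List<String> correlate(List<String> alerts) {
        Set<String> seen = new LinkedHashSet<>(alerts);
        List<String> roots = new ArrayList<>();
        for (String alert : alerts) {
            // walk up the causal chain; keep the alert only if no ancestor fired
            boolean explained = false;
            for (String cause = CAUSED_BY.get(alert); cause != null; cause = CAUSED_BY.get(cause)) {
                if (seen.contains(cause)) { explained = true; break; }
            }
            if (!explained && !roots.contains(alert)) roots.add(alert);
        }
        return roots;
    }
}
```

Feeding the whole full-disk cascade into `correlate` yields just `DISK_FULL` - the one message you actually want to act on.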
It provided a very clear view of how Merrill Lynch deals with the billions of daily messages produced by their systems globally. The breakdown of message precedence and the aim of automated fixing of an issue within an 18 second window was very interesting. The compounded issue of differing vendor error messages and dashboards, and the overarching job of combining these into monitoring dashboards at zone, site, region and global levels, was a real eye opener.
Although this was a relatively interesting session, I do think that the title was misleading. Instead of talking about how the 99.95% availability target was being satisfied, Iain talked about how Merrill monitored those systems, and particularly about the rules that they used to monitor those systems. Iain said that their availability requirements allow for 18 seconds of downtime per day, but didn't go into detail about any failover or recovery techniques that allowed them to meet that goal.
Although there were a couple of banking examples thrown in, this session was essentially a generic RTSJ talk. The closest I've got to real-time Java is BEA's JRockit JVM with deterministic garbage collection but, as Bertrand said, garbage collection is only one part of the story - Java apps also suffer jitter from the JIT compiler kicking in at unwanted times, etc. While this isn't something I'll probably try out myself (Sun's real-time Java VM only runs on Solaris 10 at the moment), what they've done is build a framework on which you can build your applications, where you decide which parts are regular Java, soft real-time or hard real-time. My understanding is that the hard real-time stuff is made possible by utilising the underlying OS real-time threading and some clever use of non-heap memory spaces, in addition to appropriately scheduling the garbage collector so that it doesn't interfere. Cool stuff and I think we'll be seeing this pop up in the banking industry soon.
Even the very specific presentations contained valuable points that could be generalised and reused. For example, Matt Youill and Asher Glynn of Betfair talked through how they scaled the transaction processing on their servers by a hundred-fold. Guardian.co.uk doesn't need that kind of throughput, so the details were primarily of intellectual benefit. But a key practical lesson was how they approached the problem: by presenting it to industry players as a challenge carrying great kudos to the winning company.
Next was Betfair talking about their new Tradefair platform and some of the challenges that they need to overcome to provide a highly scalable, highly available trading platform. Again, there was some interesting discussion of the problems and high-level solutions, although many people (myself included) came out of the session not really understanding what they had done. They were very sketchy with the details and I'm left wondering why they couldn't have implemented their system using something like JavaSpaces (what they described sounded like a JavaSpace - put many things in and match them up). The thing I did like about the session was their openness in admitting that none of the solutions were ideal (all had trade-offs) so they had to pick the one that fitted their needs the most.
It presented an overview of the business problem, an overview of the chosen architecture and a look at how some of the technologies were used to build the platform. This project shows that it is possible to build a high volume, low latency platform with mainstream Java-based technologies. BEA's JRockit JVM was used to reduce the jitter of the Java runtime, making it possible to achieve a service level agreement stating that messages should pass through the platform in under 100ms. With its good coverage of everything from the business problem down to some of the implementation details, this was a great way to end the first day.
These people faced about the same problems as the previous ones. They had to achieve something like 20,000 transactions *per second* with a maximum latency of 100ms. They achieved this using Java! The key was WebLogic Real Time, an alternative JVM implementation with real-time guarantees.
As a Ruby person and programming language nerd, I had quite a good selection of tracks. I ended up seeing Ted's presentation on F#, which made me feel: wow! Microsoft took ML and did exactly what they've done to all languages on the CLR - added support for .NET objects to the mix.
Although the title of the presentation had changed from the one in the schedule, the part about the steroids was 100% representative of the speaker. A person of an academic background, working for Microsoft Research and maintaining a GNU licensed Haskell compiler... wow that guy was awesome
An anonymous attendee also added "Clearly, easily the best lecture of the whole conference. Unfortunately, I was a bit overwhelmed by it and did not take many notes".
Many people discussed this presentation, including Jan Balje:
After that we went to the presentation about Erlang, a programming language that's especially suited for concurrency. The language is hot on the fashion lists and might become very relevant with the rise of multicore systems. Take a look at the slides when they are available. One to watch. Joe Armstrong (called "the nutty professor" by another participant) also wrote a book about it.
I was lucky to attend a talk at QCon 2008 by Joe Armstrong. He pointed out that no-one in the hardware world is currently anticipating a limit to scaling by multiple cores; there is anticipation of thousands of cores within the next 10 years. Joe also pointed out Amdahl's Law and noted that if 10% of your program is serial then the most speedup you can get is 10x. This is very thought-provoking: we will need to push concurrent programming into the core of development but, from my own experience, we desperately need new programming paradigms to make sure we don't create terribly buggy software.
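Joe's 10x figure is easy to verify: Amdahl's Law says that with a serial fraction s, the speedup on n cores is 1/(s + (1-s)/n), which can never exceed 1/s no matter how many cores you add. A quick sketch:

```java
// Amdahl's Law: speedup(s, n) = 1 / (s + (1 - s) / n).
// With s = 0.10 (10% serial code) the speedup approaches, but never reaches, 10x.
class Amdahl {
    static double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 16, 1024, 1_000_000}) {
            System.out.printf("%7d cores -> %.2fx%n", n, speedup(0.10, n));
        }
    }
}
```

Even a million cores gets you only ~9.9999x, which is why shrinking the serial fraction matters more than adding hardware.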
And Steve Vinoski:
The best part of the week, though, was getting to meet and hang out with Joe and other Erlang folk. Joe's really an excellent guy. He's quite energetic, and his brain just doesn't stop. He's curious about a lot of technical things beyond Erlang, and I found discussions with him to be full of interesting questions and insights. Given the fact that I work with Erlang quite a lot these days, my hope going in was simply that I'd get a chance to just say hi to him, but I turned out to be lucky enough to spend many hours with him over the course of the conference.
Erlang is essentially a general-purpose language, but it was designed for a specific domain: telecom switches. These programs are highly concurrent (with hundreds of thousands of parallel activities), have soft real-time requirements, massive network distribution, high availability, and very large codebases (>1M LOC) that are required to be upgradable without shutting anything down.
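The process model described above can be roughly mimicked on the JVM with one mailbox per "process" - this is a sketch of the shape only, since Erlang processes are far lighter than Java threads and share nothing by construction:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A rough Java rendering of the Erlang model: each "process" owns a mailbox
// and communicates only by message passing, never through shared mutable state.
class MailboxProcess extends Thread {
    final BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(100);
    private final BlockingQueue<String> replyTo;

    MailboxProcess(BlockingQueue<String> replyTo) { this.replyTo = replyTo; }

    @Override public void run() {
        try {
            String msg;
            while (!(msg = mailbox.take()).equals("stop")) {
                replyTo.put("echo: " + msg);  // react to one message at a time
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Convenience round trip: spawn a process, send it a message, await the reply.
    static String roundTrip(String message) {
        try {
            BlockingQueue<String> replies = new ArrayBlockingQueue<>(10);
            MailboxProcess p = new MailboxProcess(replies);
            p.start();
            p.mailbox.put(message);
            String reply = replies.take();
            p.mailbox.put("stop");
            p.join();
            return reply;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

In Erlang the runtime, not the programmer, supplies this machinery, which is what makes hundreds of thousands of such activities practical.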
Until 2002, a signal could cross the whole chip in one clock cycle. After that, cores stopped getting larger and instead got more numerous.
- each year, a sequential program will get slower
- each year, a concurrent program will get faster
Why was Erlang invented? It was invented 20 years ago to handle highly concurrent telephone switches, with traffic of around 100,000 parallel activities in real time. The telecom network is the world's biggest distributed computer.
Why is Erlang becoming popular? In the 1980s chips got bigger and clock frequencies got faster and faster - until one day you bumped into the speed of light. The limit was predicted in 1992 but actually reached in 2002: a signal could no longer physically reach 100% of the chip in one clock cycle, because the distance electrons can travel in that time is too short. Hence the technology changed to multiple cores (multi-core).
Ted's presentations are both informative and enjoyable. He has a way of communicating his thoughts directly to the audience and a very distinctive sense of humor to sugar-coat it all.
One of the impressions that this talk left on me is that even though this genre of languages is getting much attention these days, nobody actually has much experience in the enterprise field, and the actual patterns of usage are yet to be established.
Quicksort in Scala is a lot shorter than the equivalent in Java. Another example printed XML to the console, with arbitrary placeholder syntax using the Scala XML framework. Scala supports XPath-like syntax using libraries that are simply imported, because Scala allows function names with arbitrary characters. Scala does not have operator overloading as such - operators are ordinary methods. Scala is a pure object-oriented language in the sense that every value is an object. Scala is also a functional language.
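Since the notes don't reproduce the slide, here is one way the comparison plays out: a functional-style quicksort written in Java (my rendering for illustration, not the speaker's code). The equivalent Scala, using pattern matching and `filter`, fits in two or three lines:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Functional-style quicksort in Java: pick a pivot, recursively sort the
// elements below and at-or-above it, and concatenate the three parts.
class Quicksort {
    static List<Integer> sort(List<Integer> xs) {
        if (xs.size() <= 1) return xs;
        int pivot = xs.get(0);
        List<Integer> rest = xs.subList(1, xs.size());
        return Stream.of(
                sort(rest.stream().filter(x -> x < pivot).collect(Collectors.toList())),
                List.of(pivot),
                sort(rest.stream().filter(x -> x >= pivot).collect(Collectors.toList())))
            .flatMap(List::stream)
            .collect(Collectors.toList());
    }
}
```

Even in this compact form, the Java version carries noticeably more type and collection plumbing than its Scala counterpart, which was the point being made.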
The track finished with an Open Space style session, where participants discussed about which factors are driving the increasing interest and resurgence of different languages. For me this was the most interesting discussion of the day and it strengthened the case for polyglot programmers. One of the topics was that it usually takes years for someone to become an expert at something and that it's harder to leave that knowledge behind to learn something new. I think that it all comes down to whether you want to be a specialist or a generalist and I've already stated my position of trying to be both. Another interesting aspect of the discussion was Martin's point that the evolution happens in cycles and that after a period of stabilization, it's time (again) for broadening the options and looking for new ideas that will lead us to the next big thing. In these times it's important to look for new learning opportunities instead of narrowing your knowledge. I think it's time for me to use the generalist hat for a while... :-)
Brian Goetz does NOT expect people to dump Java and move to JoCaml, Erlang or another model any time soon.
He promotes "immutable objects where you can". Surprise, surprise. Sometimes it's cheaper to make a copy than it is to share. Copying an immutable object is always thread safe.
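Goetz's copy-instead-of-share point looks like this in practice - a sketch with an invented value type:

```java
// Sketch of the "immutable objects where you can" advice: an immutable value
// can be handed to any number of threads, and "modification" means returning
// a cheap copy rather than mutating under a lock.
final class Money {
    private final long cents;
    private final String currency;

    Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    long cents()       { return cents; }
    String currency()  { return currency; }

    // Returns a new object; the original is never mutated, so no thread can
    // ever observe it in an inconsistent state.
    Money add(long moreCents) { return new Money(cents + moreCents, currency); }
}
```

Because `Money` can never change after construction, it needs no synchronization at all, no matter how many threads read it.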
He recommends taking a look at Scala, in particular the Scala Actors library.
Ola Bini had praise for this talk, saying "This talk was useful in explaining why you'd want to do something like this, and why it's such a powerful technique."
Venkat Subramaniam, the chairman of software training company Agile Developer, told QCon that Java has grown beyond a language and the excitement is now centered on the combination of the Java platform with dynamic languages such as Groovy, JRuby and Jython.
"Multi-language environments mean you can get full interoperability between constructs created in different languages. Dynamic languages also give you the power of metaprogramming and domain-specific languages. This improves productivity and allows users to be more expressive," Subramaniam said
Dynamic byte code loading - The current method of introducing arbitrary byte code is cumbersome, as ClassLoaders are expensive in JVM memory. Basically, a solution would be to use anonymous classes that include the byte code.
Continuations - essentially, Ola proposed the ability to perform direct stack manipulation inside the JVM.
The idea is pretty powerful, and RIFE has a library for this. The concept of continuations allows workflow-based and conversational-state computations to be paused and resumed.
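Java has no first-class continuations, so the sketch below shows the bookkeeping that a continuation (or a library like RIFE's) would make implicit: the conversation's current step and accumulated answers are saved under a token, so the flow can be paused after each interaction and resumed later. All names are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hand-rolled "continuation": a checkout conversation whose position and state
// are persisted per token. A real continuation would capture the stack for us;
// here we simulate it with an explicit step counter and answer store.
class CheckoutFlow {
    enum Step { ASK_ADDRESS, ASK_PAYMENT, DONE }

    private final Map<String, Step> step = new ConcurrentHashMap<>();
    private final Map<String, String> answers = new ConcurrentHashMap<>();

    // Each call "resumes" the conversation where it left off, consumes one
    // input, and pauses again, returning the next question to ask.
    Step resume(String token, String input) {
        Step current = step.getOrDefault(token, Step.ASK_ADDRESS);
        switch (current) {
            case ASK_ADDRESS -> { answers.put(token + ":address", input); step.put(token, Step.ASK_PAYMENT); }
            case ASK_PAYMENT -> { answers.put(token + ":payment", input); step.put(token, Step.DONE); }
            case DONE        -> { /* conversation finished; nothing to consume */ }
        }
        return step.getOrDefault(token, Step.ASK_ADDRESS);
    }

    String answer(String token, String key) { return answers.get(token + ":" + key); }
}
```

All of this explicit plumbing - the enum, the token maps, the switch - is exactly what JVM-level continuation support would let the runtime do automatically.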
This presentation built upon Eric Raymond's seminal paper that analyses why open source works so well, which was named "The Cathedral and the Bazaar".
The addition was the Commissar, a role coming from the Soviet Union that was Rod's perception of the actual role that the Java Community Process plays in the evolution of the Java ecosystem. This was a rough metaphor and he tried to back it up with several examples from the fall of the USSR.
In the past many of his preachings, like the lightweight approach of POJOs instead of EJB, have managed to influence the progress of Java. This is more evident than ever in Java EE 5 and the roadmap for the proposed Java EE 6. It will be interesting to see if his views on how the JCP should alter its modus operandi, will actually convince Sun to fundamentally reorganize the process which drives the Java future.
Tim Anderson also discussed this presentation in detail, including:
Johnson asked what seems to me to be a key question: what should be standardized? He said that it is silly to try both to innovate and to standardize at the same time, because the committee will get it wrong. You should standardize in areas that are well known, understood, and proven in the market.
Despite appearances, Johnson is not an enemy of the JCP. He spoke warmly of the current chairman, Patrick Curran, who is trying to reform the organization; and feels that real progress is being made. Curran was also at QCon seeking opinions on the JCP and its future.
Johnson also feels that Java has moved on. "The Java world is no longer a one-party state," he said.
It was a sometimes hard view of the JCP. Being an Expert Member and having followed Java EE for many years, I have to say that I share most of what he said. His presentation was divided into three parts: 1) What are the sources of innovation: disagreeing with people, experimentation, competitions… 2) History of Java EE: before J2EE (vendor lock-in, fragmented market), the promise of J2EE (the JCP becomes dominant, it creates a market), the decline of J2EE and the rise of open source. 3) What's next. That's where Rod talked about the cathedral (one company creates it all), the bazaar (many people create it in a disorderly way) and the commissar (a dictatorial way of doing business, i.e. the JCP is the USSR commissar). Now, open source is not a bazaar anymore but can be seen more as a cathedral (JBoss, Eclipse, Spring...). The JCP doesn't control Java; there's also OASIS, OSGi, W3C, OMG, open source...
Open source produces fast experimentation/review cycles. The biggest event in the future of the JCP is not connected to Java at all, opined Johnson. Clearly, he believes, Sun is very serious about open source, since Sun has recently purchased MySQL AB for $1 billion in stocks and shares.
What does tomorrow look like? In terms of the expected and well-known standardisation cycle that Johnson described, he said that we are in the part of the cycle where innovation is lacking, at least coming from the JCP. We should aim to keep the benefits of the standards without losing the innovation edge. Change needs to be more rapid. The JCP needs to adapt to survive.
* Regularise existing language
** Further Generics simplification
* Concurrency support
** Immutable data
** Control Abstractions
** Actors, etc
The final session I attended was Neal Gafter's look at the new features that are being considered for Java 7 and beyond. I've not been following this too closely and it was interesting to catch up with it all. One of the things that struck me most was that the Java platform JSR hasn't even been started yet and that Sun don't seem to have enough resources to do everything that they want to (apparently JavaFX is more important?). I was under the impression that major releases of the platform were going to be on an 18 month cycle, but clearly that's not going to happen. I also don't necessarily understand where/how the open source stuff fits into all of this. There are some nice smaller features being considered for Java 7 (multi exception catching, easy exception rethrowing, the ability to switch on Strings, etc) but part of me thinks that maybe the bigger language changes (e.g. closures) shouldn't be implemented. Perhaps it might be better to stop making big changes to the Java language and start putting more effort into something else (e.g. Scala, Groovy, etc).
Basically, the roadmap of Java 7 is still uncertain. As Neal said, the Properties proposal might not be in JDK 7. And the closures topic came along. Neal said something quite funny about that: "lots of companies want closures except two: Sun Microsystems and my boss (Google Inc)". In fact, as hard as it is to believe, the JSR for Java 7 hasn't even started yet. And as Neal pointed out, there are not enough resources at Sun to make it happen in a decent timeframe (it looks like the teams are busy with JavaFX).
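Two of the smaller features mentioned - multi-exception catching and switching on Strings - look roughly like this (sketched here as they were being proposed; the syntax details were still in flux at the time):

```java
// Sketch of two of the smaller Java 7 candidates under discussion:
// catching several exception types in one clause, and switching on a String.
class Java7Candidates {
    static int parseOrDefault(String s, int dflt) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException | NullPointerException e) {  // multi-catch
            return dflt;
        }
    }

    static int defaultPort(String scheme) {
        switch (scheme) {  // switching on a String
            case "http":  return 80;
            case "https": return 443;
            default:      return -1;
        }
    }
}
```

Small conveniences like these remove boilerplate without changing the programming model, which is why they were less controversial than closures.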
1) all resources are named by a URI
2) resources are immutable and copied
3) you can construct arbitrary URIs which represent a computation and use other URIs as parameters
With these preconditions, Peter showed a kind of functional programming approach. You just write (or have tools write) your program (function, expression) as a cascade of URIs.
Peter Rodgers of 1060 Research spoke about his NetKernel, which is a kind of REST runtime. "I'm typing byte code", he explained, as he put together URI strings that performed various operations. He observed that much computing can be reduced to doing something to some resource with another resource, and that this can be expressed as a URI. Here's an example:
In effect this is functional programming via URIs.
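As a loose illustration of composing computation from URIs: one URI can take other URIs as encoded arguments, much like nesting function calls. The `active:` scheme below is modelled only loosely on NetKernel's addressing style, and the operation names are invented for the example:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: building a "computation as a URI". Each operation URI
// carries its arguments -- which may themselves be operation URIs -- as
// URL-encoded parameters, nesting like function application.
class UriComposition {
    static String op(String operation, String... argUris) {
        StringBuilder uri = new StringBuilder("active:" + operation);
        char sep = '?';
        for (int i = 0; i < argUris.length; i++) {
            uri.append(sep).append("arg").append(i + 1).append('=')
               .append(URLEncoder.encode(argUris[i], StandardCharsets.UTF_8));
            sep = '&';
        }
        return uri.toString();
    }

    public static void main(String[] args) {
        // transform(fetch(report), fetch(stylesheet)) expressed as one nested URI
        String nested = op("xslt", op("fetch", "res:/report.xml"),
                                   op("fetch", "res:/style.xsl"));
        System.out.println(nested);
    }
}
```

Because the whole expression is a plain string, it can be cached, logged, or handed to another service - the properties that make the approach feel functional.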
Dave Syer, one of Spring Batch's lead committers, said around 40 organizations are working with Spring's Java-based framework, which aims to replace aging mainframe batch applications written in Cobol. It works alongside SpringSource's enterprise Java tools such as Spring Integration.
The session was well-attended, and the discussion was lively. On the whole people were supportive of the JCP, and believe in the importance of the work we do. It was argued that both open source and open standards have their place, and that they can and often do complement each other. (Open source methodologies enable feedback from real-world users, thereby improving specifications, while standardization encourages adoption and interoperability.) Some members of the panel and the audience expressed familiar concerns - that the process is weighted against individuals, that we need to be more open and transparent, and that we should adopt open-source development and licensing models for Reference Implementations and conformance test suites (TCKs). I'll be sure to take this feedback into account as we work to evolve the JCP over the coming months.
The first session I attended on day 2 was called "Architecting for Performance and Scalability", where representatives from Terracotta, (Oracle) Coherence, GigaSpaces, etc (and eBay) came together to talk about the different approaches to building scalable systems. It was surprisingly civilised and it was interesting to compare and contrast each vendor's approach to dealing with the scalability problem.
I am sitting here next to a guy that works at a large investment bank and he says it's an interesting and ground-breaking approach.
"Terracotta does for distribution of state what the garbage collector did for managing memory".
Which seems like a good transition to Ari's pitch. As usual Ari starts at the beginning, with the fact that he built the clustering architecture for Walmart.com. The story is a really good one because it concerns the realities of top-down vs. bottom-up architectural engineering.
The best talk I've been to so far is Terracotta's introduction and patterns. Ari is a good speaker with passion, intensity and speed that I admire (though some others might find the talk a bit too informative).
I've heard about Terracotta before, but this was the first time I got to know the details. The basic functionality that they claim is transparently clustering your objects, so that all changes in one JVM are visible in all the rest.
Ari Zilka, CTO of Terracotta, gave an excellent presentation on how the product works and how it can be used. I was not going to attend this session, but he was excellent on the previous session's panel, so I was drawn to it. His view on the world of using stateful in-memory data and replication goes against the views of many, but is very compelling. I'll be looking into it deeply.
As did Antonio Goncalves:
Ari Zilka's session was Clustered Architecture Patterns: Delivering Scalability and Availability with Terracotta. Coming from application server clustering, Terracotta looks like a refreshing technology… but it also hides some magic behind the scenes. It hooks into a JVM to replicate object graphs across JVMs. One sentence that came up often was: serialization of objects is expensive, so just don't serialize them. Terracotta replicates live objects, it doesn't serialise them; that's why it can be 10 times faster than common caching or clustering.
After the eBay presentation, I wandered over to the Solution track to check out the testing framework for the Spring platform. Basically it creates a wrapper around JUnit 3.8, 4.0 or TestNG that lets you "wire up" an application through Spring configuration and Java 5 annotations.
Last Friday I did my first bit of public speaking. I presented jQuery at QCon.
John asked me a couple of months ago, so I pushed the fear aside to give room for the flattery and agreed.
If you're reading this blog post, and you did happen to see my presentation, I would really love to hear your feedback - good or bad - it's all very useful to me.
Following this was a session entitled "Tackling Code Debt", which basically focussed on why continuous refactoring is essential to maintain a high quality codebase. This seemed to be a rehash of some of the existing material around refactoring and agile development. Something that did strike a chord though was that somebody needs to take ownership of this whole process and motivate the team to refactor while they develop. I'd say that's part of the architect's role.
There are several great speakers in this track, including Linda, who always works the topic from a funny angle; today it was the angle of cycles, and it sparked a great discussion about sleep cycles and work cycles. I really learned something very useful that I can take home with me: human beings work/sleep best in 90-minute periods, and therefore it is best to do everything in multiples of 90 minutes. Regarding work, it is important to take breaks and to focus on one thing at a time; several of the audience members had statistics backing up that statement.
A talk about how teams in the field don't follow the XP/DSDM/Scrum book, but combine practices that work for them. Nothing really new, but a nice confirmation from the speaker, who has a lot of contacts in the field. The room is packed, testifying to the continuing interest in agile methodologies. By the way, a 'Ziffer' is a Zero Feature Iteration. Also, the percentage of women in the audience is significantly higher than among our students. Maybe it's just a Dutch problem?
I attended Rachel Davies' talk at QCON 2008 about Agile Mashups. She made the point that in the real world people take a variety of practices from different Agile methods; as a simple example there are Scrum teams using TDD and XP teams using burndown charts. She pointed out a few practices that seem more optional, such as pair programming and sitting together.
I find there's a very interesting tension. On the one hand I think it's important to know what "good agile" looks like. There is a danger that some teams throw away their documentation, hack and claim to be Agile. So to sort this "cowboy agile" from the real thing, you could use the Nokia Test, which is a checklist: tick the boxes and you are doing Scrum :-)
"Paul" also discussed the idea of Polishing Iterations:
In Rachel Davies' Agile Mashup talk at QCON 2008, she noted that many teams have a "polishing" iteration, where no new functionality is released.
My team have recently added this: we find we need a bit of space to step back and look at the application from a "big picture" point-of-view. Sometimes it's useful to look at consistency across the application: particularly from a GUI point-of-view. It's also good to make time for exploratory testing. Finally, we like to make some space to incorporate the feedback we've got from the business during the iteration.
One of the questions was "How XP are you?" and specifically:
Can you claim to be an XP team...
- if you don't use index cards?
- if you don't write code test-first?
- if you don't program in pairs?
- if you don't sit together?
- if you don't have an onsite customer?
Marc and I presented another iteration of "Beyond Agile - People versus Process" at QCon. This transcript was inspired by the re-done introduction. I plan to write some more soon… We had an article in production that we're going to rewrite based on last week's Agile Open France. The choreographies, the random order and the questions you saw at the top of this post worked well, so we are going to do more with that.
Like a lot of agile things, to get started all you need is a whiteboard and post-its. However, the speaker, David Anderson, was anxious to dispel the myth that Kanban was in fact anything to do with whiteboards and post-its. These are enabling "technologies", and they also help create the foundation for transparency. However, the real issue is:
"How much work is currently in progress?"
I'll come back to this point, but essentially "work in progress" is what Kanban is all about.
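The work-in-progress idea at the heart of Kanban can be illustrated with a small sketch: a board that refuses to pull a card into a column once that column's WIP limit is reached. The column names and limits below are illustrative, not taken from David Anderson's talk.

```python
# Minimal sketch of a Kanban board that enforces a work-in-progress
# (WIP) limit per column. Names and limits are illustrative.

class KanbanBoard:
    def __init__(self, wip_limits):
        # wip_limits maps column name -> max cards (None = unlimited)
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, column, card):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit ({limit}) reached in '{column}'")
        self.columns[column].append(card)

    def pull(self, src, dst, card):
        # Work is *pulled* into the next column only if capacity exists.
        self.add(dst, card)            # fails fast if dst is at its limit
        self.columns[src].remove(card)

board = KanbanBoard({"todo": None, "in_progress": 2, "done": None})
for card in ["A", "B", "C"]:
    board.add("todo", card)
board.pull("todo", "in_progress", "A")
board.pull("todo", "in_progress", "B")
# Pulling "C" now would raise: the in-progress limit of 2 is reached,
# which is exactly the signal to finish something before starting more.
```

The point of the hard limit is that the board makes "how much work is currently in progress?" impossible to ignore.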
Yesterday, during the XpDay Sampler track at QCon, Keith Braithwaite presented the latest version of his talk on measuring the characteristics of Test-Driven code. Very briefly, many natural phenomena follow a power law distribution (read the slides for more explanation), in spoken language this is usually known as Zipf's Law. Keith found that tracking the number of methods in a code base for each level of cyclomatic complexity looks like such a power law distribution where the code has comprehensive unit tests, and in practice all the conforming examples were written Test-First; trust a physicist to notice this. This matters because low-complexity methods contain many fewer mistakes.
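Keith's measurement can be sketched quite simply: tally how many methods sit at each cyclomatic-complexity level, then fit a straight line through the points in log-log space; a near-linear fit is the signature of a power-law shape. The complexity counts below are invented for illustration, not data from his talk.

```python
# Illustrative sketch of the measurement: count methods per
# cyclomatic-complexity level and fit a least-squares slope in
# log-log space. The sample counts are made up for illustration.
import math

def log_log_slope(counts):
    # counts: {complexity_level: number_of_methods}
    points = [(math.log(c), math.log(n)) for c, n in counts.items() if n > 0]
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    return cov / var  # least-squares slope on log-log axes

# A test-driven code base: lots of trivial methods, few complex ones.
tdd_like = {1: 1200, 2: 340, 3: 150, 4: 80, 5: 48, 6: 31}
print(round(log_log_slope(tdd_like), 2))  # strongly negative slope
```

A steep negative slope means trivial methods vastly outnumber complex ones, which is the shape Keith reported seeing in comprehensively unit-tested code.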
Daniel Moth demonstrated some interesting new features of Visual Studio. He did this at such a record speed that, to understand it, the public will have to download the videos from his blog and play them at half speed. Still, it's nice to see what can be done nowadays.
Mister Nelson is a very driven speaker who keeps interesting (and funny) contact with his audience, who does not try to demonstrate more than is actually interesting at that moment, and who can make his story up as he goes along (which proves that he knows what he is talking about).
The discussion for some reason was totally off topic. The discussion about what types of project can be implemented using Rails took up perhaps 5% of the time. Most of the discussion was about why nobody from the Ruby community was worried about Ruby running so slowly on Windows. Someone from the audience mentioned that if the Ruby community wants Ruby to become mainstream and accepted as a language by all enterprises, it should run faster on Windows. Panel members said that they do not really care that much about Ruby being accepted in enterprises. They said that they are doing it because Ruby as a language makes it easy for them to solve their problems.
I totally accept that. If I find a language that helps me to solve a problem more elegantly, then I will use that language. I do not care whether that language will be accepted by all big enterprises or not. If there is someone smart in those enterprises, then s/he will decide what language is good for solving their particular problem.
It is a problem of trust and confusion. Does Rails scale across the enterprise? The panel admitted the concern, and suggested asking the question back: how much performance do you need? Ruby is one or two orders of magnitude slower than Java. There are, however, new VMs coming out as far as performance is concerned.
Basically no technical solutions here other than the obvious. The discussion descended into how do we market Ruby to enterprises? One idea they gave is to be subversive. For example, use it for testing, automate builds. Introduce it as a systems administrator tool, so that it is only used internally. Well, Groovy can do this as well as a scripting tool.
Another excellent talk in which Kent provided his latest views on how he thinks problems should be solved from the design point of view. He started by following on from his keynote, pushing that we must design with people in mind; design for the skills of your available developers.
From the feedback I heard after the talk, I think many people were surprised how many different parts of a system can be designed this way, and how flexible it is without making the code any more complex. The message was this:
Make Roles Explicit
Despite its simplicity, that leads to IEntity, IValidator<T> where T : IEntity (which I wrote about a year ago - generic validation) and, with a bit of Service Locator capabilities, you can add a line of code to your infrastructure that will validate all entities before they're sent from the client to the server.
The next session I attended was called "A Tale of Two Systems", which basically presented a picture of what happens when you do and don't design your software. Anybody experienced in software development won't have seen any surprises here, but it was nice to see the good and the bad contrasted in a very down-to-earth way. There was a definite agile spin to all of this, with talk of flat team structures and a distribution of the design responsibility throughout the team. In fact, Pete stated that "he'd never worked on a project that needed an architect". While these approaches work well for small and/or simple projects, I'm still of the opinion that *some* architecture needs to be performed up-front and that somebody needs to take ultimate responsibility.
Good design leads to better code, a better team and success. Design matters: it can go spectacularly wrong, or it can go right. However, good project management is essential. One has to make decisions at the right time. Punting difficult decisions and use-cases until one actually has the time and the necessity to bring them to fruition is a really good idea. (I think he meant, on further reflection, that we should save complex decisions in strategy until you can get "think" time, as opposed to making the wrong decision in "doing" time.) Good design comes from not being afraid of changing the design. Good design is also derived from a healthy working relationship.
In proof either of my amazing prescience, or total lack of original insight, almost immediately after I'd made my previous post I attended an excellent talk by Giles Colborne at QCon on simplicity in user interfaces, where he expressed the difference between the likes of me and the vast majority, who don't appreciate that Vim is the best way to edit text, by saying that most people are more interested in getting from A to B without crashing than in doing so efficiently. Not sure that he realised how literal some of us are in our favouring the risk of crashing.
A simple journey to put across some guidelines to aid in designing user interfaces. The talk lacked a bit of depth for me, in that some of the observations felt a little personal rather than having much evidence to back them up. It was what I needed, though, to allow me to think about the previous talk. The guidelines:
- Understand the context
- Just simple enough (shrink, embody and hide)
- Don't make the user wait
Yes, all a little general and obvious, but that's what simplicity is all about; the simple things that get ignored.
Following the talk about software design was a talk about user interface design, entitled "User Interfaces: Meeting the challenge of simplicity". This session looked at the art of designing user interfaces so that they appear simple to the user, and that how making even the smallest of changes can have a huge impact. One of the most interesting parts of this session was that it almost completely paralleled the session that preceded it; in terms of talking about agile development, feedback, simplicity, you aren't going to need it (YAGNI), etc.
Another great track. People from eBay, the BBC and MySpace explaining the inner bits of their architectures, their failures and their successes; that's the kind of presentation that should never be missing from a conference. Btw, did you know that MySpace is running on a .NET stack?
There were many attendees who commented on this presentation, such as Jan Balje:
The day started off with Randy Shoup talking about the architecture behind eBay. This was really architecture at the highest level. The amount of data/transactions/servers etc. that eBay has is huge. An impressive talk; the slides are warmly recommended.
His presentation was very clearly constructed to show their principles of scalability, and some concrete examples of how these work in practice. You probably wouldn't use their periodic batch processing method to generate recommendations — if only because it's odds on you don't have a recommendation system — but you could take the overarching principle of "async everywhere" and apply that to the next scalable application that you need to work on.
Partition everything! Partition your system("functional split") and your data ("horizontal split"). It doesn't matter what tool or technology you use. If you can't split it you can't scale it. Simple as that. Regardless if you're using a fancy grid solution or just multiple databases.
Use asynchronous processing everywhere! If you have synchronous, coupled systems, they scale together and fail together. The least scalable system will limit scalability and the least available system will limit your uptime. If you use asynchronous, decoupled systems, then the systems can scale independently of each other.
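The "horizontal split" half of the partitioning principle can be sketched in a few lines: hash each row's key to pick a shard, so data and load divide across independent stores. The shard count and key format below are illustrative, not eBay's actual scheme.

```python
# Minimal sketch of a horizontal data split: a stable hash of the key
# chooses which shard holds the row, so the same key always routes to
# the same store. Shard count and naming are illustrative.
import hashlib

NUM_SHARDS = 4
shards = {i: {} for i in range(NUM_SHARDS)}  # stand-ins for databases

def shard_for(key: str) -> int:
    # A stable digest (unlike Python's randomized hash()) keeps routing
    # consistent across processes and restarts.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)

put("user:42", {"name": "Ada"})
assert get("user:42") == {"name": "Ada"}
```

The "if you can't split it you can't scale it" point falls out directly: once routing is a pure function of the key, each shard can be sized, replicated, and failed independently.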
I had already read about eBay's transactionless style for achieving availability and scalability through data partitioning, but it was interesting to hear about the way they approach deployment for new code and features. There are no changes that cannot be undone. They have automated deployment tools that manage the dependencies between different components (a la package management systems such as apt) and allow rollouts and rollbacks (a la Rails' migrations) of different pieces of code. Interesting stuff!
Randy Shoup talked through the key principles, which are:
- Partition everything
- Async everywhere
- Automate everything
- Remember everything fails
I've said this before about some of the other sessions, but I really like it when we get to look behind the scenes at what other people are doing, particularly when you see that every system has its own set of trade-offs and compromises.
A really interesting look at how they design a flexible architecture that allows their systems to scale with the traffic going to the eBay site, and still enables them to roll out new code releases every couple of weeks.
The main enabler of this architecture is their dedication to keeping it as stateless as possible. The only time they use a session is during the process by which a user creates an auction on the site through a multi-page, wizard-style interface.
And Peter Pilgrim:
Resource markdown. They want to detect failure as quickly as possible. They monitor applications constantly, and there is graceful degradation of the failing resource such that it gets marked down. The application stops making calls to that resource, and its work is deferred to queues. Critical functionality will fail. First, they try to call the resource repeatedly for a set time. Second, if the resource is still unavailable, the work is postponed to an asynchronous event.
Explicit "markup" allows a resource to be restored and brought online in a controlled way. They manage the clients still trying to connect to the resource.
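The markdown/markup cycle described above resembles what is now commonly called a circuit breaker, and can be sketched as follows. The failure threshold and the queue-draining behaviour are illustrative assumptions, not details of eBay's implementation.

```python
# Rough sketch of "resource markdown" as a circuit breaker: after
# repeated failures the resource is marked down, callers stop hitting
# it and park work on a queue, and an explicit "markup" drains the
# queue and resumes calls. Thresholds are illustrative.
from collections import deque

class MarkedDownResource:
    def __init__(self, call, failure_threshold=3):
        self.call = call                      # the underlying resource call
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.marked_down = False
        self.deferred = deque()               # work parked while down

    def invoke(self, work):
        if self.marked_down:
            self.deferred.append(work)        # defer instead of calling
            return None
        try:
            result = self.call(work)
            self.failures = 0                 # any success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.marked_down = True       # stop calling: marked down
            raise

    def mark_up(self):
        # Controlled "markup": drain deferred work, then resume calls.
        self.marked_down = False
        self.failures = 0
        while self.deferred:
            self.call(self.deferred.popleft())
```

The key property is that once the resource is marked down, callers degrade gracefully (work queues up) instead of piling more load onto something that is already failing.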
Many attendees discussed this presentation, including Mark Edgington:
It was interesting to hear about the problems the BBC has in identifying the location of requests in order to both serve advertisements and apply DRM. We tend to forget that the BBC gets a near-set amount of money with which to work, so spending money serving content outside the UK for nothing would be an expensive business. Also, giving away content which is often under license agreements would cause legal issues. I was interested to learn that advertisement is coming online now, due to the Foreign Office changing their policy of funding the BBC to deliver outside the UK.
The next session I attended was about the BBC website, primarily from the perspective of what the user sees. The speakers had literally been drafted in the day before and while I liked their actual presentation, I was left wanting more information about the architecture behind the website. They did go into some details about how many servers they had, etc but not much on technologies and the like. Hats off for pulling something together so quickly though!
Nothing really groundbreaking in terms of technology, but it was interesting to hear about their current process of migrating a huge physical media store (with guys on motorbikes taking tapes from site to site) to digital format, and how it changes their editorial process… somehow the image of the guy on a motorbike reminded me of the pigeon's high-bandwidth transfer protocol :-)
And Tim Anderson:
They are talking about video on bbc.co.uk. Previously this has been handled through pop-up pages that give a choice between Windows Media Player and Real Media. The BBC will now be standardising on Adobe Flash video, embedded in the page rather than in a pop-up. Their research has found that embedded video has a much better click-through than the pop-up style. It also has editorial implications, because it is better integrated into the page. In due course, Flash will be the sole public format (an archive is also kept in some other format).
There is going to be increasing video on the site. Apparently the BBC is getting better at negotiating rights to video content, and we can expect lots of video from this year's Olympics, for example.
Anderson also discussed the BBC's plan to rebuild its web platform:
Apparently the budget has just been approved, which means the BBC will be going ahead with a new content platform built on Java supplemented by a lightweight PHP layer. The primary goal is flexibility. Recently the BBC went live with a new widgety home page which demonstrates its interest in personalization; ambitions include more extensive customization, more of a social platform (possibly using OpenSocial, OpenID); making a platform more amenable to mash-ups; data-only APIs. As an aside, the BBC home page right now is a bit broken; it says "due to technical problems we are displaying a simplified version of the BBC homepage." After yesterday's session, I know a bit about why this is. The BBC's current site is mostly based on Perl scripts and static pages. It's not really a content management system. The recent home page innovations, which I blogged about recently, are not hosted on the new platform, but are a somewhat hacky affair built on the old platform using SSI and parsing cookies with regular expressions. It went live, but is currently not very reliable. It also uses more CPU, which ultimately means more servers are needed.
- more seamless integration with iPlayer
- more Mobile
- more searchable media
- more integrated local content
- more Sport with rights
- more Widgets and Syndication
- more Advertising
- jokingly, more work and less holiday in the run-up to the Olympic Games 2012
All of this was summed up very nicely by the team from BBC News: John O'Donovan, Kevin Hinde and Ross Heritage. They were asked how they managed performance testing for the iPlayer. John spent a few moments describing some of the techniques they employed, but got to the point when he realised the audience really wanted some eye-opening enlightenment which he didn't have. At this moment Kevin stepped forward and said straight out "There's no secret sauce". Indeed not: they just work hard and stick to strong principles.
A nice walkthrough of the various administrative tools that the team behind MySpace.com has built on top of the .NET platform to monitor a system that serves millions of users.
By outlining clearly the problem space and the problems faced by IT departments in the banking world (procurement and strategic sign-off), you got a good feel for how the architecture came together. In this case I felt that the open source decisions, many made due to restrictions, led ultimately to success and what I would expect to be a happy development team. Probably the most surprising part of the architecture was a set of processes running from a Java main; it seemed to have come about because application servers were the remit of another team, and asking them for involvement was not an option.
Ola Bini commented that this presentation was "highly informative and something that I'll keep in mind if I see something that would benefit from it. The approach is definitely not for all problem domains, of course."
InfoQ, the organisers of the conference, regularly host recorded interviews with industry shapers on their website. For this conference they invited an audience in to participate in the interviews. I watched an excellent interview with Mark Little, a developer for Red Hat who has worked on many of the current web service standards. He spoke about transactional web services, specifically WS-TX's two models: ACID transactions and business activity transactions. For a SOA environment, BA transactions should be used, although this just means providing compensation methods for each service. He talked about the great divide between SOAP web services and RESTful services, and how he wishes they would just "kiss and make up". Finally he mentioned the JBoss/Red Hat merger last year, comparing JBoss to a teenage son and Red Hat to a 40-year-old father.
Several people wrote about the social events, such as Libor Soucek:
In the bar I finally took the chance to talk personally to high-profile people like Steve Vinoski and Jim Webber, whose blogs I have been reading regularly for a long time. I was mainly interested in getting their personal opinions on the use of REST/ATOM in high-performance systems, as they do not usually address that in their write-ups/presentations.
Jim Webber was quite open and admitted that his recommendations are mainly applicable to systems where latency is generally 1 second or more. This seems fair to me. Actually, the vast majority of enterprise applications fit into this category, prime examples being ERP systems and the like.
Steve Vinoski suggested that for such cases one should probably follow the REST model conceptually, if not directly via the common HTTP version, due to performance constraints. That is certainly possible, but I have quite strong doubts that it is practical here. Does anyone know of a successful application in this field to confirm Steve's suggestion?
One of the greatest things about going to QCon is that you can meet all the fantastic speakers in a very informal setting. Last night I was having conversations with people like Joe Armstrong, Steve Vinoski, Jonathan Trevor and Kresten Krab Thorup in the hotel bar and that was fantastic - I learned a lot.
And Ola Bini:
Then there was the speakers dinner... Lots of interesting discussions with lovely people. =)
Erik Johnson said:
I'm on a plane returning from the QCon 2008 conference in London. It was a top-notch event and among the great presentations, two things I learned stand out. The first was that I want to learn Erlang. I spent some time with Erlang inventor Joe Armstrong, and had such good fun that I've already downloaded the bits and bought the book. Second, the REST rationale has really gelled and the proponents no longer see a need to argue their case – it's time to mature the story.
Nik Silver said:
There were very few moments for me during QCon London 2008 of earth-shaking enlightenment — if any. But every hour of the three days of the conference there were insights and guidance that could be tucked away, and reused later to save hours, days or weeks of time elsewhere. Snake-oil salesmen were thin on the ground, and instead there were dozens of people saying one or both of:
- This is what we did; and
- This is what you can do.
No magic, no silver bullets, but plenty of solid advice and experience.
- Anders Sveen - QCon is on, and I'm not there. I had a blast there last year, so major envy to everyone that is.
- Matthew Ford - I've just spent the last week at QCon and I've just about fully recovered (it was pretty intense). Considering there was only one Ruby track I was a little worried that I wouldn't find most of the talks interesting, however this wasn't the case. The most interesting talks had very little to do with Ruby.
- Libor Soucek - QCon is without doubt a top-ranked conference and the first one I have visited thanks to my employer. I was very anxious to see what such a conference is like, and I have to admit it did not disappoint me at all, even though I was only there on the Wednesday.
- Ola Bini - I had a great time and I look forward to being back the next time. I can definitely recommend QCon as one of the best conferences around in this industry.
- Steve Vinoski - I just returned home from QCon London, and its excellence exceeded my expectations. As usual, the quality of speakers QCon attracts (just like JAOO) is outstanding, and they cover a very wide variety of topics.
- Dionysios G. Synodinos - My impressions about the organization of the event are excellent. The facilities were more than adequate, the schedule was practical and worked out fine, and the quality of the presentations was very high.
- Danilo Sato - I was really impressed with the quality of the conference, from tracks, to sessions, and speakers. QCon is one of the best technical conferences I've participated in and I recommend it for anyone interested in enterprise software development. I'm looking forward to attending again next year.
- Filippo Diotalevi - All in all, it was a really good conference. The level of the presentations was very high, and I particularly enjoyed the fact that there were a lot of speeches not strictly related to Java… obviously nothing against this programming language ;-), but I think a conference should be also a good opportunity to learn something different from the usual tools and languages.
- Antonio Goncalves - I only had two days at the conference and I have to say, QCon is different from what I'm used to. The audience looked more experienced (or older if you want) and the quality of the presentations was really high. It's not just for techies and not just about Java either. There were several tracks, like Agile, Ruby, Middleware, Web...
- Mark Edgington - In short, fantastic. I've attended many larger conferences and I found the smaller size more enabling for communication, both with the speakers and conference attendees. I attended tutorials on Agile management and DSLs (Domain-Specific Languages) and followed tracks on cloud computing, effective design and architectures. Each of these had a great set of speakers, and there was only one session in the whole week that I felt was weak. I left the conference armed with lots of ideas and inspiration and a handful of excellent contacts.
- O'Reilly GMT - Over the last three days, O'Reilly have been working a book stand at QCon in the Queen Elizabeth Conference Centre in London. While we haven't been able to get into any of the presentations ourselves, word is this has been a fantastic conference, both stimulating and friendly, and it's certainly a pleasure to be here.
- Simon Brown - I was talking to a few people about the conference and their experience was the same as mine - Thursday was the best day and there were a couple of time slots where I wish I could have split myself into two. Overall it was another great event and I highly recommend it for anybody thinking about attending next year.
Jan Balje said:
So, what do we make of all the information that was poured over us over the last few days? What is most important? Of course, we will need to investigate all this stuff further. But at first glance, I would say:
- REST looks like an important new trend.
- Erlang might answer the need for more Concurrent Oriented programming languages.
- Java for enterprise applications is very much alive, also for real time systems.
Peter Bakker had a long list, including:
- Stateless architectures push the "state" problem to the database; the trend is to reclaim state in the application server and services, and put state close to the user
- To make really scalable solutions: divide, split, partition, work asynchronously, no state
- first build something simple, then think of all the "...ilities"
- the accountability of software also holds for organic food: effective, reliable, reasonably priced. Compare bloated software to industrial food: unnecessary features are like unnecessary additives.
Mark Little related a thought that came to mind:
I wanted to mention something that came up there but which has been playing on my mind for a while anyway: the art of beautiful code and refactoring. I heard a number of people saying that you shouldn't touch a programming language if you can't (easily) refactor applications written using it. I've heard similar arguments before, which comes back to the IDEs available. I'd always taken this as more of a personal preference than any kind of Fundamental Law, and maybe that (personal preference) is how many people mean it. However, listening to some at QCon it's starting to border on the latter, which really started me thinking.
Maybe it's just me, but I've never consciously factored in the question "Can I refactor my code?" when choosing a language for a particular problem.
Markus Voelter posed some questions about functional programming and concurrency:
So, functional and concurrency experts in the world, please unite! and write a bunch of (context,problem,solution,tradeoff)-tuples (also known as Patterns) and present them at a future JAOO or QCon conference.
- if I use a nice, potentially side-effect-free functional language (say: F#), what do I do with all the libraries (here: .NET) that are not functional?
- which parts of my system should I write in a functional language for good concurrency support, and where should I not do that?
- How much concurrency do I handle on platform/infrastructure-level (e.g. processes, EJBs, etc.) and how much do I handle on the language level? Which granularity is useful for which task?
- Also: Assuming the platform provides a concurrency model, what can the language do to make sure I cannot (or I am discouraged from) interfering with the platform's concurrency model?
Cleve Gibbon pondered how software development has changed:
Rebecca was spot on when she pointed out that tools such as yacc/lex and to a lesser degree antlr, have received bad press as being archaic, blunt tools with magical powers that are wielded by the chosen few. In reality, there hasn't been much cause for the masses to use them. Will the drive for external DSLs where you want to create your own language make them more popular? I doubt it. I think people will still opt for internal DSLs that are written essentially in the host language (e.g. rSpec, JMock, and so on).
But I have to realise now that those guys/gals born in the eighties and heaven forbid the nineties, are operating off a completely different programming stack. Their minds are in different places and their toolset is somewhat orthogonal to mine.
Mark Edgington described the week after returning to the office from QCon:
Conversation around cloud computing has been a big hit. I got some good contacts, and these have led to investigations into using the Elastic Compute and S3 services from Amazon for one of our clients. Thoughts from 'The Zen of Agile Management' have allowed me to view our Agile and PRINCE2 processes in a new light; I expect my observations to work round to discussion and change in the coming weeks. Also, the Domain-Specific Language knowledge has invigorated conversation around framework and language selection on projects. All in all a good week.
Simon Brown questioned whether UML is losing popularity:
So then, is UML on the way out? I'd be interested in your thoughts on the following.
- What notation do you use for your architecture and design diagrams?
- Is a standard diagramming notation important to you?
- How does your audience influence how you create diagrams?
- If you do use UML, what's your UML tool of choice?
This year at QCon, a new experiment was tried out: a video blog called Live@QCon.
Coding the Architecture has created a podcast discussing QCon London:
As promised the 2nd CTA podcast is a roundtable discussion between some of the CTA contributors - namely Simon Brown, Sam Dalton and Kevin Seal. In this podcast we discuss some of the themes emerging from the recent QCon conference held in London and our views on those themes.
QCon London was a great success and we are very proud to have been able to offer such a conference. It will continue to be an annual event in both London and San Francisco, with the next QCon being around the same time next year in each location. We also look forward to bringing QCon into other regions which InfoQ serves, such as China and Japan. Thanks everyone for coming and we'll see you next year!!!!
InfoQ Sep 01, 2015