
The First Few Months of a New Team

Posted by João Miranda on Oct 30, 2015. Estimated reading time: 19 minutes

Last January, the OutSystems R&D group introduced a new team, called DevOps. Now that the team has been working together for a few months, we thought it would be a good time to reflect on the journey so far and share it with the community.

We’ll look at:

  • lessons learned, both in the “keep doing” and “start doing” categories
  • the team members and their background (4 people, “young” in many different ways)
  • the ways we organized ourselves (Scrum with Kanban elements)
  • some data we gathered from our work management tool (JIRA Agile) for our first completed project

Lessons learned

Over these months we learned a lot. To keep this article’s word count manageable, we’ll focus on the people-centered stuff. But we also had to grapple with a lot of hard technical challenges at the same time. The OutSystems Platform can be deployed on-premises, in the cloud or in a hybrid configuration. It is a very large product, offered in both .NET and Java technologies and fully supporting SQL Server, Oracle and MySQL.

The main lessons we learned follow.

What we want to keep doing

  • Using the team’s “Ultimate Goal” to frame the team’s activities and projects
  • Plan projects with fixed time and variable scope
  • Give each team member as much autonomy as they can handle
  • Maintain a blameless culture
  • Use chat rooms
  • Do regular Lunch‘n’Learns (a.k.a. brown-bag lunches)

What we’re working hard on improving

  • Keeping our QA infrastructure in top shape is everyone’s responsibility
  • Build smaller, faster tests - get that feedback as fast as possible
  • Never stray far from inch-pebbles planning

But let’s get some context before detailing those lessons.

The team

The team is composed of four people. We all come from different backgrounds, with different experiences. The word that best describes the team right now is young. That bit of information provides an important background to the remainder of the article.

Two of us (Rita and Helder) have just graduated, and this is their first professional software development experience. João (Proença) has more experience and has spent most of his (short) professional career at OutSystems: he previously worked in R&D, Support and CloudOps, and has just returned to R&D. This article’s author is the oldest, having graduated back in 2000, but he’s been at OutSystems for only a year now.

So… the team is young in age, young at OutSystems, young as a team… and when we started back in January we had a project to deliver in a couple of months, without breaking a complex product, the OutSystems Platform. It was (is) a tall order, but we should be proud that OutSystems R&D trusted us to do well. Luckily, OutSystems has a very supportive culture, so we got a lot of help along the way.

How we organized ourselves

One of the first things we did was to define how we were going to manage our work. We are strong believers in agile processes: they provide the strongest foundation to keep everyone on the same page, with the same ultimate goal in mind, while giving plenty of autonomy to everyone.

OutSystems R&D as a whole must follow a few guidelines: do two-week iterations and hold iteration review meetings (a.k.a. sprint review meetings). There are some others, but these two are the most important regarding work management. Each team has lots of room to define its own internal processes.

This freedom cannot be stressed enough; it has powerful effects in so many ways. It sends a message of trust: the organization trusts you to do the right thing. It increases efficiency: each team can adapt its rules to its own context. This is apparent in almost all the decisions we took.

And we do love freedom, so we set out to define a set of principles:

  • Minimize explicit/formal coordination. After all, we are just a team of 4, sitting next to each other.
  • Work-in-progress (WIP) limits are a *good* thing. They help us keep our focus.
  • Minimize user story descriptions. At OutSystems R&D the whole team acts as “product owners”, so we all know what we are talking about.
  • Minimize planning effort. Small team, two-week iterations, co-located.

Workflows & WIP Limits

We use JIRA Agile for our work management. We’re not going to say that the tool is not important, because it is. We’re really enjoying JIRA Agile. It has some nice default reports (the version report is awesome!). Having said that, many other tools would probably fit our needs.

We do not model the user story’s whole workflow, mainly to simplify JIRA’s configuration; we only model the story workflow within a sprint. It’s not ideal - we’d like to visualize the whole workflow - but it’s good enough for our purposes. If you’re dealing with a more complex environment (maybe your stories need to be formally approved), additional states would help.

Our workflow looks like this:

The workflow states are all self-explanatory:

  • To do - The story is in the backlog, or assigned to a sprint but waiting to be initiated.
  • Waiting 3rd party - The story was initiated, but it now has a dependency on a 3rd party. We try to minimize this scenario as much as possible, but if we know that a story will be blocked for longer periods (> 1 or 2 days), we put the story in this state. If the story is going to be blocked for very long periods, we just place it in the to do state. Clearly the story isn’t that important.
  • In progress - The team is actively working on it.
  • Test, review & accept - The story is being tested and reviewed by another team member.
  • Done - The story is done for the sprint. It still has a long way to go until it ends up in a new product release though. It would be really nice to provide greater transparency to the downstream activities…

We’ve set WIP limits on all the intermediate states: currently 3, 4 and 3 for “Waiting 3rd party”, “In progress” and “Test, review & accept”, respectively. The limits are not very tight, but they are enough to keep us honest and manage the work in progress seamlessly. More than once, they’ve helped us identify stories that were “idle” without any real cause, allowing us to quickly fix the issues and nudging the team to be more collaborative.

Depending on your context, you might want to use tighter WIP limits. For instance, if you feel the team picks up new stories when faced with minor impediments, it’s probably best to tighten the limits. Never forget you’re free to experiment.
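To make the mechanism concrete, here is a minimal sketch of the kind of WIP-limit check a board performs, using the states and limits from above; the board representation is a hypothetical illustration, not JIRA’s actual data model or API:

```python
# Hypothetical WIP-limit check; states and limits mirror the article,
# the board structure is illustrative only.
WIP_LIMITS = {
    "Waiting 3rd party": 3,
    "In progress": 4,
    "Test, review & accept": 3,
}

def over_limit_states(board):
    """Return the states whose story count exceeds their WIP limit.

    `board` maps each state name to the list of stories currently in it.
    """
    return [
        state
        for state, limit in WIP_LIMITS.items()
        if len(board.get(state, [])) > limit
    ]

board = {
    "To do": ["s1", "s2"],
    "In progress": ["s3", "s4", "s5", "s6", "s7"],  # 5 stories > limit of 4
    "Test, review & accept": ["s8"],
}
print(over_limit_states(board))  # ['In progress']
```

When a state shows up here, that’s the nudge to swarm on in-flight work instead of starting something new.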

Iteration planning

Iteration planning can be a slog. We, like most people, don’t like slogs. Fortunately we can avoid them: we have the autonomy to set the right balance between formal and informal methods.

Given our context, we can swing heavily towards the informal practices. We are developing a product for developing applications, so we are experts in the problem domain. We act like product owners for our own projects. We do two-week iterations. We are co-located... we repeat ourselves.

We do estimate each story, but we use those estimates only as rough guides. The same goes for velocity. We add stories to the sprint taking into account the story estimates and the velocity, and make some adjustments if we know someone will be out of office during the sprint. In the end, we look at the sprint backlog and check if it feels alright. If it does, that’s our commitment.

Well, commitment is not the right word. The goal is not to rigorously complete the exact set of stories chosen at the beginning of the sprint. If we’re within the 80%–120% completeness range, we’re ok. If we add stories during the sprint (in moderation), that’s ok too.
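That sanity check can be sketched in a few lines; the 80%–120% band and the out-of-office adjustment come from our practice, but the function itself is a hypothetical illustration, not a tool we actually run:

```python
# Illustrative sprint sanity check: does the planned work land within
# 80%-120% of velocity, adjusted for days out of office?
def backlog_fits(velocity, estimates, days_out_of_office=0, sprint_days=10):
    """Return True if the summed story estimates fall in the 80%-120%
    band of the team's velocity, pro-rated for absences."""
    capacity = velocity * (sprint_days - days_out_of_office) / sprint_days
    planned = sum(estimates)
    return 0.8 * capacity <= planned <= 1.2 * capacity

# 21 points planned against a velocity of 20: feels alright.
print(backlog_fits(velocity=20, estimates=[5, 3, 8, 5]))  # True
# Only 8 points planned with 2 days of absence: under-committed.
print(backlog_fits(velocity=20, estimates=[5, 3], days_out_of_office=2))  # False
```

The point is not the arithmetic, it’s that the check is cheap enough to keep the planning meeting short.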

The overriding goal, always, is to make steady progress to the ultimate goal (call it mission, vision, whatever) of the project. JIRA’s version report is very helpful in this regard, as it allows us to control scope and it’s reasonably effective in predicting the future based on historical data.


We’re pretty rigorous about creating and estimating issues for (almost) everything we do. For this to work, creating issues must be a frictionless process. Creating and estimating issues helps everyone know what’s happening and also feeds the above reports. These rules support our aim of “minimizing explicit/formal coordination”.

We do not break down stories into tasks when adding them to a sprint, so we don’t estimate tasks either. We do (re)discuss what each story means when adding it to a sprint, ensuring we’re all synced. Not creating and estimating tasks is a huge time saver for us. We can spend that time either on discussing the story in more detail or on shortening the planning meeting.

Iteration retrospectives

We do iteration retrospectives. Again, we do not follow a rigid structure. The main goal is just to spend some time looking back (and forward) and to learn together. At the start of the retrospective we decide if we want to focus on a specific topic. If that’s the case, we just discuss it in a free-form format. For instance, in our last iteration we did an informal post-mortem about the release process that was the iteration’s main theme. When we do not want to focus on a single item, we default to the “stickies” approach.

By now, probably everyone knows the stickies approach. Each team member writes in their own stickies the good (“continue doing”), the bad (“stop doing”) and the needs improvement (“start doing”). We then stick them on the wall, cluster them by subject, discuss them and decide on action items.

We do not expect that the issues selected to be acted upon will be solved instantly. Sometimes they are, sometimes they aren’t. But if an issue is relevant, we keep talking about it and acting on it until it is no longer relevant, either because it was solved or because the context changed and it lost meaning.

Are our retrospectives always consequential? No. But that’s ok. We believe that cadence is very important. If we leave these things to “when they make sense”, they usually don’t happen. So we always do retrospectives: they can be longer or shorter, far-reaching or not, but it is always an opportunity for the team to talk. And that’s important.

Feedback and Iteration reviews

Iteration reviews are a *big deal* at OutSystems R&D. Iteration reviews are attended by people from all over OutSystems, always depending on the project’s needs, of course. The goal is to get different perspectives on the problems we are solving. The supporting slides and the meeting notes are always shared with the whole R&D, as it is a simple way for everyone to know what’s happening and… give feedback.

Sometimes receiving feedback is not easy. In general, it’s very important to know the person we’re talking to, but it’s even more important when giving feedback. The person giving feedback should empathize with the one getting the feedback, to increase the odds of getting the message across. For instance:

  • How does the person I’m giving feedback to usually react?
  • What is the best setting to give feedback?
  • How experienced is the person in the matter at hand?

In turn, the one receiving feedback should:

  • Be humble
  • Listen carefully
  • Reflect on the substance of the feedback and decide to act (or not) on it
  • Avoid pushing back just because the feedback was delivered sub-optimally

Some project management data

JIRA allows us to extract data reasonably easily, at least for simple analyses. Some numbers from our first completed project:

  • Project duration: ~3 months
  • # issues: 113
  • Average duration (cycle time) per issue: 2 days, 5 hours
  • # issue outliers (defined as deviating by more than 1 standard deviation): 10 (~9%)
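For the curious, the outlier figure is straightforward to reproduce: an outlier is an issue whose cycle time deviates from the mean by more than one standard deviation. The cycle times below are made up for illustration, not our real data:

```python
# Reproduce the outlier definition: cycle times more than one
# standard deviation away from the mean. Sample data is illustrative.
from statistics import mean, stdev

def outliers(cycle_times):
    """Return cycle times deviating from the mean by more than 1 stdev."""
    mu, sigma = mean(cycle_times), stdev(cycle_times)
    return [t for t in cycle_times if abs(t - mu) > sigma]

times_in_days = [1, 2, 2, 3, 1, 2, 9, 2, 3, 12]  # two long-running issues
print(outliers(times_in_days))  # [9, 12]
```

With a real JIRA export you would feed in the per-issue cycle times instead of the hard-coded list.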


The average duration per issue seems to be in the sweet spot: two days gives each issue some meaning, while still allowing for inch-pebble steps. A short cycle time isn’t enough to guarantee the latter, but it is a pre-condition for it.

We want to minimize outliers. Long cycle times usually mean lack of clarity or unbounded scope. They are also motivation killers. This project’s outliers have common root causes that can be found in many other projects.

Initial research issues, if not time-boxed, can last longer than expected. In essence, if you’re learning something, you have to explore quite a bit. Ensure that exploration is bounded. This exploration is not BDUF. It’s more like exploring uncharted territory, challenging assumptions and learning enough about the problem domain to know what we want to solve. These activities do not set the scope in stone. At OutSystems R&D we pin time, not scope.

Sometimes, when we start developing a feature we find some technical surprises. When that happens, time can fly. Given that we usually don’t create new issues in these situations, the issue can take longer than expected.

Now and then, the scope of an issue turns out to be fuzzier than expected, and that also increases cycle time. It’s worth pointing out that this also happens when stories are broken down into tasks, based on past experiences! So this fuzziness is not a good argument for defining tasks.

Finally, automated system tests are hard. We have some of those and they have a tendency to take a lot of time to build and fine tune.

So, what have we learned so far? Almost too many things to count. In one of our retrospectives we actually made a list just to see how far we had come! For this article, we’re focusing on a small but hopefully relevant subset of those lessons.

What we want to keep doing

Ultimate Goal. At OutSystems every team has an Ultimate Goal, which aims to describe the team’s purpose in one short, ambitious statement. The DevOps team’s Ultimate Goal is to build the OutSystems Platform features that:

“Decrease the time from change request to feature in production to 0 seconds, while providing full visibility into Development and Operations workflows”

This is probably the most important tool we have. It’s like a compass we use as a guide to validate we’re moving in the right direction. It also measures our success: after each project completion, are we closer to our Ultimate Goal? If not, something is wrong. Either the Ultimate Goal or the project.

Fixed time, variable scope. Again, this is an invaluable tool heavily used at OutSystems. Its advantages have been discussed many times in many places. This approach ensures we’ll deliver something valuable in a reasonable time frame and forces us to make difficult but necessary choices.

As with many other simple statements, it’s easier to say it than to do it. Variable scope only works when features are built end-to-end: designed, coded and tested. If we consider a feature “done”, but there are still activities to be executed before the feature can be released to our users, then we are constraining our freedom to change the scope. If we leave it until the end of the project to execute all the remaining small activities for a large set of features, then we have a large amount of work to do when the deadline looms. For instance, in a large product like the OutSystems Platform, which must support many different technologies and configurations, each feature has a lot of overhead. It is very tempting to build a feature, test it in the easier configuration, kick the can down the road for the other configurations and say that the feature is (almost) done. That’s a recipe for bad surprises late in the project. Nonetheless, it is tempting to do it as it gives an appearance of fast progress. It requires a lot of self-discipline and good habits to build a feature until it’s “done done”.

Autonomy. Even though the team is young and somewhat inexperienced, we push as much responsibility as possible to everyone. We want to reach a point where anyone can do anything. Different people will solve the same problems in different ways. It’s not always easy to accept that people solve problems differently from you, especially if you’re more experienced or you’re the team leader.

Very importantly, people who are given autonomy must embrace it, including accepting the fact that they will make mistakes. And people who relinquish control must accept those mistakes as part of the process.

That’s why we need...

Blameless culture. We’re not interested in pointing fingers. We are interested in steadily improving. A blameless culture is especially relevant in our team’s context. We are young, we’re going to make a lot of mistakes. We must ensure we don’t keep doing the same ones, so we have to be open about them.

For instance, a while ago, one of us commented: “It seems I’m responsible for most of the bugs!! :( ”. That was true in some ways - the commits were his - but it was false in all the relevant ones. He had written most of the code in that context, so it was always going to be more likely that he introduced bugs than anyone else. But we also do code reviews. We do automated testing. We have more experienced people on the team who are there to help. So we all contributed to the bugs. The important thing is what we can learn from it. Are the code reviews as effective as they could be? Are the tests asking the right questions? Do we have enough automated tests? Are we considering all the relevant scenarios when designing and coding the feature? Are we communicating effectively?

And if after a while he still feels he’s responsible for most of the bugs - and if they are more numerous than they should be - then the whole team has failed in some way. For once, the football (soccer for those on the other side of the pond) coaches’ constant talk of “the whole team wins, the whole team fails” has real meaning.

Chat rooms. HipChat, Slack, IRC. Choose one and just use it. They truly break down barriers and bring people closer together. The OutSystems R&D department does not fit into just one room. And, as with any other organization, distance (even if very small) quickly introduces barriers to face-to-face communication. Chat rooms are a non-intrusive way to bring people together. For instance, we have a channel, #help, whose main purpose is to help people find answers about the product’s internals. I’ve learned a lot about the product just by reading the #help channel messages. People who were reluctant to interrupt some of the more experienced engineers (usually busy people) found a way to ask their questions non-intrusively. If you leverage integrations, you take the whole thing to a new level: these tools can really act as a communication and command center. ChatOps is not just hype.

Lunch and Learn (a.k.a. brown-bag lunches). We’re using the “Lunch and Learn” term because “brown-bag” was already taken: OutSystems R&D has regular, department-wide, brown-bags. Every week we see a presentation about topics that might interest us. The goal is not to immediately apply what we learn, although that would be a nice side effect. The goal is to learn for the long term.

What we’re working hard on improving

Better curation of our QA infrastructure. Our QA infrastructure is quite complex. The product supports many different tech stacks: different application servers, different relational database products, and a wide array of desktop and mobile browsers. We have thousands of automated tests, ranging from super fast unit tests to complex end-to-end tests. Learning about our QA infrastructure is a large activity in itself. In the beginning we put all of this effort on the shoulders of just one of us. Given the challenges we faced, that might be understandable, but it was far from ideal. We’ve been correcting this over the last couple of months: we bit the bullet and started to spread these activities around the whole team. Nowadays the team is mostly at ease with QA activities.

Smaller and faster tests. We did write lots of automated tests. But it was an arduous process and, in the beginning, not evenly distributed across the team: one of us wrote and curated most of the tests, as mentioned before. We also write larger, slower tests than needed. I believe this comes down to two main factors. Writing fast, effective unit tests is an art that takes time to learn. And believing that unit tests can replace most end-to-end tests while providing the same quality assurance is difficult for most people, so the temptation to write more end-to-end tests than needed is sometimes too great. This is one of the aspects we’re working hard to improve, with some early but encouraging results.

Inch-pebbles planning. Fixed time, variable scope. Small, certain steps forward. Easier said than done. In our current project, we (unintentionally) took some larger steps than we should have. We handled them, but not without some hard work. Our context presents some challenges regarding inch-pebbles planning. It works more easily in a web environment where you can release whenever you want; in that context, planning that way is more intuitive. When you’re planning for releases a few months away, it’s harder. But that doesn’t make inch-pebbles any less important - arguably, it makes them more important. So we have to keep working to make it easier to build a feature from start to finish in a continuous stream of activities. We must consider all activities when building a feature and work out how to do them in that way. Our tests must run faster. The scope of our user stories must be more business-facing and less technical. Above all, we must be more aggressive in cutting scope: ask why! Why do we need this feature? Is it really mission-critical?

Conclusion

And that’s it. It has been an enjoyable ride so far. Over the first months we focused on working out the best way to support our goal of becoming a self-organizing team: we need solid foundations to build on, and the people stuff must come first. Then we moved on to engineering aspects, mainly streamlining our QA. That’s an ongoing process, but one with lots of opportunities to increase our productivity and quality.

Special thanks to Helder Gregório, João Proença, Rita Tomé and the whole OutSystems R&D department.

About the Author

João Miranda started his career in 2000, at the height of the dot-com bubble. That enlightening experience led him to the conclusion that agile practices are the best way to respond to the business needs of almost all organizations. He is currently a principal software engineer at OutSystems, a RAD provider, where he helps remove any friction that may hinder development teams’ fast pace.
